• Resolved lookfis

    (@lookfis)


    Hello, I've been using the Redis object cache for a while now, but I've realized I may need to optimize Redis.

    Is this normal latency for such a database?

    It is currently running with the Relay community client.

    Status: Connected
    Client: Relay (v0.12.1)
    Drop-in: Valid
    Disabled: No
    Ping: 1
    Errors: []
    PhpRedis: 6.3.0
    Relay: 0.12.1
    Predis: 2.4.0
    Credis: Not loaded
    PHP Version: 8.4.18
    Plugin Version: 2.7.0
    Redis Version: 8.6.1
    Multisite: No
    Metrics: Enabled
    Metrics recorded: 1501
    Filesystem: Writable
    Global Prefix: "wp_"
    Blog Prefix: "wp_"
    Timeout: 1
    Read Timeout: 1
    Retry Interval:
    WP_REDIS_CLIENT: "relay"
    WP_REDIS_SCHEME: "unix"
    WP_REDIS_PATH: "/var/run/redis/redis-server.sock"
    WP_REDIS_DATABASE: 2
    WP_REDIS_PREFIX: "ber_pro"
    WP_CACHE_KEY_SALT: "mber_salt"
    WP_REDIS_PLUGIN_PATH:
    WP_REDIS_IGNORED_GROUPS: [ "cron_events", "counts", "plugins", "themes", "theme_json", "product_objects", "order_objects", "urls-to-ids-cache-key", "WPML_ST_Package_Factory", "wpml-all-meta-product-variation", "WPML_Cookie", "ls_languages", "convert_url", "element_translations", "wpml_st_cache", "wpml_term_translation", "wpml_wp_cache__group_keys", "site_options" ]
    Global Groups: [ "blog-details", "blog-id-cache", "blog-lookup", "global-posts", "networks", "rss", "sites", "site-details", "site-lookup", "site-options", "site-transient", "users", "useremail", "userlogins", "usermeta", "user_meta", "userslugs", "redis-cache", "blog_meta", "image_editor", "network-queries", "site-queries", "theme_files", "translation_files", "user-queries" ]
    Ignored Groups: [ "cron_events", "counts", "plugins", "themes", "theme_json", "product_objects", "order_objects", "urls-to-ids-cache-key", "WPML_ST_Package_Factory", "wpml-all-meta-product-variation", "WPML_Cookie", "ls_languages", "convert_url", "element_translations", "wpml_st_cache", "wpml_term_translation", "wpml_wp_cache__group_keys", "site_options" ]
    Unflushable Groups: []
    Groups Types: { "blog-details": "global", "blog-id-cache": "global", "blog-lookup": "global", "global-posts": "global", "networks": "global", "rss": "global", "sites": "global", "site-details": "global", "site-lookup": "global", "site-options": "global", "site-transient": "global", "users": "global", "useremail": "global", "userlogins": "global", "usermeta": "global", "user_meta": "global", "userslugs": "global", "redis-cache": "global", "cron_events": "ignored", "counts": "ignored", "plugins": "ignored", "themes": "ignored", "theme_json": "ignored", "product_objects": "ignored", "order_objects": "ignored", "urls-to-ids-cache-key": "ignored", "WPML_ST_Package_Factory": "ignored", "wpml-all-meta-product-variation": "ignored", "WPML_Cookie": "ignored", "ls_languages": "ignored", "convert_url": "ignored", "element_translations": "ignored", "wpml_st_cache": "ignored", "wpml_term_translation": "ignored", "wpml_wp_cache__group_keys": "ignored", "site_options": "ignored", "blog_meta": "global", "image_editor": "global", "network-queries": "global", "site-queries": "global", "theme_files": "global", "translation_files": "global", "user-queries": "global" }
    Drop-ins: [ "Redis Object Cache Drop-In v2.7.0 by Till Krüss" ]

    extension = relay.so
    relay.environment = production
    relay.maxmemory_pct = 95
    relay.maxmemory = 16M
    relay.eviction_policy = lru
    relay.max_endpoint_dbs = 4
    relay.serializer = igbinary
    relay.default_compress_algo = lz4
    relay.default_compress_level = 1
    relay.default_compress_threshold = 256

    maxmemory 512mb
    maxmemory-policy allkeys-lru
    io-threads 2
    io-threads-do-reads yes
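    As a quick sanity check on the two memory settings above, here is a small Python sketch (the values are copied from the configs above; the helper and its unit handling are my own, and the ratio is just arithmetic, not a recommendation) comparing Relay's in-process cache budget to the server-side Redis one:

    ```python
    def to_bytes(value: str) -> int:
        """Parse a Redis/Relay-style memory size such as '512mb' or '16M'."""
        units = {"k": 1024, "m": 1024**2, "g": 1024**3}
        v = value.lower().rstrip("b")  # accept both '16m' and '512mb' spellings
        if v[-1] in units:
            return int(v[:-1]) * units[v[-1]]
        return int(v)

    redis_maxmemory = to_bytes("512mb")  # redis.conf: maxmemory 512mb
    relay_maxmemory = to_bytes("16M")    # relay.ini: relay.maxmemory = 16M

    # Relay's in-process cache is 32x smaller than the server budget, so most
    # reads that miss it still have to travel over the unix socket.
    print(redis_maxmemory // relay_maxmemory)
    ```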


    For example, on the homepage with Query Monitor:

    Database Queries

    0.0658s

    SELECT: 89
    SHOW: 2
    UPDATE: 1
    Total: 92
    Object Cache: 98.0% hit rate (6,603 hits, 138 misses)

    Some debug info:
    keyspace_hits:4876959
    keyspace_misses:1118020
    active_defrag_hits:0
    active_defrag_misses:0
    active_defrag_key_hits:0
    active_defrag_key_misses:0
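    For context, the server-wide hit rate implied by those keyspace counters is noticeably lower than the 98% Query Monitor reports for a single page load, since the counters cover all clients and all activity since the last stats reset. A sketch using the numbers pasted above:

    ```python
    keyspace_hits = 4_876_959
    keyspace_misses = 1_118_020

    # Overall hit rate across the whole instance since the last stats reset.
    hit_rate = 100 * keyspace_hits / (keyspace_hits + keyspace_misses)
    print(f"{hit_rate:.1f}%")  # roughly 81%, well below the per-page 98%
    ```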

    redis-cli --latency
    min: 0, max: 18, avg: 0.76 (2372 samples)

    Some browsing in the admin area produced this:
    Max latency so far: 8757 microseconds.
    Max latency so far: 13003 microseconds.
    Max latency so far: 44025 microseconds.
    2338699136 total runs (avg latency: 0.0428 microseconds / 42.76 nanoseconds per run).
    Worst run took 1029612x longer than the average latency.

    db2_distrib_strings_sizes:0=17,1=293,2=499,4=4489,8=1010,16=4003,32=2828,64=3156,128=2281,256=1194,512=1703,1K=490,2K=1079,4K=152,8K=37,16K=32,32K=21,64K=15,128K=5,256K=4
    db2_distrib_zsets_items:1K=1
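    The `db2_distrib_strings_sizes` line is a histogram of string value sizes, one count per power-of-two bucket. A sketch (parsing helper is my own; the bucket labels are copied verbatim from the output above) to total the key count and put an upper bound on the raw value bytes in db2:

    ```python
    distrib = ("0=17,1=293,2=499,4=4489,8=1010,16=4003,32=2828,64=3156,"
               "128=2281,256=1194,512=1703,1K=490,2K=1079,4K=152,8K=37,"
               "16K=32,32K=21,64K=15,128K=5,256K=4")

    def bucket_bytes(label: str) -> int:
        """Convert a bucket label like '4K' to bytes."""
        return int(label[:-1]) * 1024 if label.endswith("K") else int(label)

    buckets = [(bucket_bytes(k), int(v))
               for k, v in (pair.split("=") for pair in distrib.split(","))]

    total_keys = sum(count for _, count in buckets)
    # Every value is at most as large as its bucket, so this is an upper bound.
    upper_bound_mb = sum(size * count for size, count in buckets) / 1024**2

    print(total_keys)                  # string keys counted in db2
    print(f"{upper_bound_mb:.1f} MB")  # well under the 512mb maxmemory
    ```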



    Commandstats

    cmdstat_zadd:calls=54944,usec=1613747,usec_per_call=29.37,rejected_calls=0,failed_calls=0
    cmdstat_info:calls=59917,usec=6970627,usec_per_call=116.34,rejected_calls=0,failed_calls=0
    cmdstat_del:calls=195922,usec=1696816,usec_per_call=8.66,rejected_calls=0,failed_calls=0
    cmdstat_echo:calls=3,usec=3,usec_per_call=1.00,rejected_calls=0,failed_calls=0
    cmdstat_get:calls=4318902,usec=26643328,usec_per_call=6.17,rejected_calls=0,failed_calls=0
    cmdstat_mget:calls=676896,usec=4646145,usec_per_call=6.86,rejected_calls=0,failed_calls=0
    cmdstat_dbsize:calls=3,usec=6,usec_per_call=2.00,rejected_calls=0,failed_calls=0
    cmdstat_set:calls=1538395,usec=13752341,usec_per_call=8.94,rejected_calls=0,failed_calls=0
    cmdstat_client|setinfo:calls=24916,usec=19598,usec_per_call=0.79,rejected_calls=0,failed_calls=0
    cmdstat_client|tracking:calls=10806,usec=31894,usec_per_call=2.95,rejected_calls=0,failed_calls=0
    cmdstat_scan:calls=618,usec=4749,usec_per_call=7.68,rejected_calls=0,failed_calls=0
    cmdstat_hello:calls=12458,usec=282848,usec_per_call=22.70,rejected_calls=0,failed_calls=0
    cmdstat_select:calls=110148,usec=302685,usec_per_call=2.75,rejected_calls=0,failed_calls=0
    cmdstat_readonly:calls=1,usec=5,usec_per_call=5.00,rejected_calls=0,failed_calls=1
    cmdstat_incr:calls=6,usec=94,usec_per_call=15.67,rejected_calls=0,failed_calls=0
    cmdstat_zcard:calls=1,usec=1,usec_per_call=1.00,rejected_calls=0,failed_calls=0
    cmdstat_decr:calls=4,usec=89,usec_per_call=22.25,rejected_calls=0,failed_calls=0
    cmdstat_flushdb:calls=3,usec=57028,usec_per_call=19009.33,rejected_calls=0,failed_calls=0
    cmdstat_zremrangebyscore:calls=100,usec=43147,usec_per_call=431.47,rejected_calls=0,failed_calls=0
    cmdstat_zrangebyscore:calls=44,usec=12830,usec_per_call=291.59,rejected_calls=0,failed_calls=0
    cmdstat_type:calls=21,usec=4,usec_per_call=0.19,rejected_calls=0,failed_calls=0
    cmdstat_config|resetstat:calls=1,usec=164,usec_per_call=164.00,rejected_calls=0,failed_calls=0
    cmdstat_slowlog|get:calls=1,usec=177,usec_per_call=177.00,rejected_calls=0,failed_calls=0
    cmdstat_zcount:calls=18,usec=253,usec_per_call=14.06,rejected_calls=0,failed_calls=0
    cmdstat_strlen:calls=103,usec=926,usec_per_call=8.99,rejected_calls=0,failed_calls=0
    cmdstat_ping:calls=57392,usec=48435,usec_per_call=0.84,rejected_calls=0,failed_calls=0
    cmdstat_setex:calls=44394,usec=1204664,usec_per_call=27.14,rejected_calls=0,failed_calls=0
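    To see where server time actually goes, the commandstats lines can be ranked by total `usec`. A sketch over a few of the lines above (the parser is my own; note how GET dominates total time while INFO is expensive per call, which plausibly comes from the plugin's metrics polling):

    ```python
    lines = [
        "cmdstat_get:calls=4318902,usec=26643328,usec_per_call=6.17",
        "cmdstat_set:calls=1538395,usec=13752341,usec_per_call=8.94",
        "cmdstat_info:calls=59917,usec=6970627,usec_per_call=116.34",
        "cmdstat_mget:calls=676896,usec=4646145,usec_per_call=6.86",
        "cmdstat_del:calls=195922,usec=1696816,usec_per_call=8.66",
        "cmdstat_zadd:calls=54944,usec=1613747,usec_per_call=29.37",
    ]

    def parse(line: str) -> tuple[str, dict[str, float]]:
        """Split a cmdstat line into (command name, {field: value})."""
        name, fields = line.split(":", 1)
        stats = {k: float(v) for k, v in (kv.split("=") for kv in fields.split(","))}
        return name.removeprefix("cmdstat_"), stats

    by_total = sorted((parse(l) for l in lines),
                      key=lambda item: item[1]["usec"], reverse=True)

    for name, stats in by_total:
        total_s = stats["usec"] / 1e6
        print(f"{name:5s} total={total_s:6.2f}s per_call={stats['usec_per_call']:7.2f}us")
    ```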

    4019 term-queries
    2257 post-queries
    1577 posts
    1540 post_meta
    907 transient
    765 translation_priority_relationships
    640 products
    625 WPML_404_Guess
    572 product_shipping_class_relationships
    566 product_visibility_relationships
    565 pos_product_visibility_relationships
    543 WCML_Product_Image_Filter
    413 post_tag_relationships
    413 category_relationships
    390 product_tag_relationships
    388 product_cat_relationships
    388 pa_size_relationships
    388 pa_color_relationships
    388 pa_beads_relationships
    384 product_type_relationships
    384 product_brand_relationships
    383 WPML_Name_Query_Filter_Translated
    326 WCML_Product_Gallery_Filter
    222 terms
    189 term_meta
    185 original_element
    169 WPML_TM_ICL_Translations--translations
    135 options
    99 WPML_Page_Name_Query_Filter
    93 orders
Viewing 2 replies - 1 through 2 (of 2 total)
  • Thread Starter lookfis

    (@lookfis)

    And here it is from the WP dashboard, with peaks up to 500 ms.

    Plugin Author Till Krüss

    (@tillkruess)

    The Time is not the latency, but the time spent talking to Redis per HTTP request. Looks fine tbh. Spikes can happen; it looks like they correlate with an increase in calls.

