<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Nemanja Djuric]]></title><description><![CDATA[DevOps Engineer / System Administrator / Hosting consultant]]></description><link>https://nemanja.io/</link><generator>Ghost 0.7</generator><lastBuildDate>Fri, 10 Apr 2026 19:47:20 GMT</lastBuildDate><atom:link href="https://nemanja.io/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[How to Configure Varnish Soft Purge for Magento 2 (Ubuntu + xkey)]]></title><description><![CDATA[<h1 id="enableandconfiguresoftpurgeinvarnishformagento2ubuntuxkey">Enable and Configure Soft Purge in Varnish for Magento 2 (Ubuntu + xkey)</h1>

<p>When you run Magento 2 behind Varnish on a busy store, cache invalidation can easily become a performance problem.</p>

<p>Traditionally, when a Varnish object (page) gets purged, the <strong>next user</strong> who hits that page pays the price: Varnish</p>]]></description><link>https://nemanja.io/how-to-configure-varnish-soft-purge-for-magento-2-ubuntu-xkey/</link><guid isPermaLink="false">7b745249-4d0a-4e51-ab94-0f734583d119</guid><category><![CDATA[varnish]]></category><category><![CDATA[Performance]]></category><category><![CDATA[DevOps]]></category><category><![CDATA[magento 2 varnish]]></category><category><![CDATA[magento2]]></category><category><![CDATA[cache]]></category><category><![CDATA[softpurge]]></category><category><![CDATA[magento 2 softpurge]]></category><category><![CDATA[varnish softpurge]]></category><dc:creator><![CDATA[Nemanja Djuric]]></dc:creator><pubDate>Fri, 06 Mar 2026 10:02:25 GMT</pubDate><media:content url="http://nemanja.io/content/images/2026/03/soft-purge.png" medium="image"/><content:encoded><![CDATA[<h1 id="enableandconfiguresoftpurgeinvarnishformagento2ubuntuxkey">Enable and Configure Soft Purge in Varnish for Magento 2 (Ubuntu + xkey)</h1>

<img src="http://nemanja.io/content/images/2026/03/soft-purge.png" alt="How to Configure Varnish Soft Purge for Magento 2 (Ubuntu + xkey)"><p>When you run Magento 2 behind Varnish on a busy store, cache invalidation can easily become a performance problem.</p>

<p>Traditionally, when a Varnish object (page) gets purged, the <strong>next user</strong> who hits that page pays the price: Varnish has to fetch from the backend, generate a new response, and only then cache it.</p>

<p>On low-traffic sites this isn’t a huge issue, but on high-traffic stores it becomes painful very quickly:</p>

<ul>
<li>The first user after a purge gets a slow response</li>
<li>If 10 users hit the same page while the cache is rebuilding, you get 10 slow responses</li>
<li>If you purge often (e.g. frequent product updates), the site <em>feels</em> slow even though Varnish is technically enabled</li>
</ul>

<p>The solution: <strong>Soft Purge</strong> using <code>vmod_xkey</code>.</p>

<p>Instead of immediately deleting objects from the cache, Varnish can:</p>

<ul>
<li>mark objects as <strong>stale</strong>,</li>
<li>continue serving them to users,</li>
<li>refresh a new version from the backend in the background.</li>
</ul>

<p>Result: users still see a cached page, while Varnish quietly builds the fresh version.</p>

<p>In this guide we’ll configure <strong>Soft Purge for Magento 2 on Ubuntu with Varnish + xkey</strong>.</p>

<p>We’ll cover:</p>

<ol>
<li>Installing <code>varnish-modules</code> (xkey) on Ubuntu  </li>
<li>Enabling <code>import xkey;</code> in your VCL  </li>
<li>Replacing the default Magento BAN logic with <strong>xkey soft purge</strong>  </li>
<li>Setting <code>grace</code> so Varnish can safely serve stale content  </li>
<li>Testing soft purge with <code>curl</code></li>
</ol>

<p>Examples assume <strong>Magento 2 + Varnish on Ubuntu</strong>.</p>

<hr>

<h2 id="1installvarnishmodulesxkeyonubuntu">1. Install <code>varnish-modules</code> (xkey) on Ubuntu</h2>

<p>On Ubuntu, the <code>vmod_xkey</code> module is provided by the <code>varnish-modules</code> package.</p>

<pre><code class="language-bash">sudo apt update  
sudo apt install varnish-modules  
</code></pre>

<p>Check that the xkey vmod is available (path may vary by distro and Varnish version):</p>

<pre><code class="language-bash">ls -1 /usr/lib/varnish/vmods  
# You should see something like: libvmod_xkey.so
</code></pre>

<blockquote>
  <p>Note: if you’re using a custom Docker image or a containerized Varnish setup, <code>vmod_xkey</code> must be built into that image. In some managed environments it’s already included by default.</p>
</blockquote>

<hr>

<h2 id="2enablexkeyinyourvcl">2. Enable <code>xkey</code> in your VCL</h2>

<p>In your main VCL file (often <code>/etc/varnish/default.vcl</code>, or an entry-point VCL that <code>include</code>s the Magento-generated file), add this at the top:</p>

<pre><code class="language-vcl">import xkey;  
</code></pre>

<p>Example:</p>

<pre><code class="language-vcl">vcl 4.1;

import std;  
import xkey;

backend default {  
    .host = "127.0.0.1";
    .port = "8080";
}
</code></pre>

<p>If you’re using the Magento-generated VCL (<code>magento2.vcl</code>), make sure <code>import xkey;</code> is added to the <strong>actual VCL file</strong> loaded by Varnish, not just some unused include.</p>

<hr>

<h2 id="3replacemagentobanlogicwithxkeysoftpurge">3. Replace Magento BAN logic with xkey Soft Purge</h2>

<p>Magento 2 invalidates cache via HTTP <strong>BAN</strong> (or PURGE) requests that carry the header <code>X-Magento-Tags-Pattern</code>. In a typical VCL you’ll see something like:</p>

<pre><code class="language-vcl">if (req.method == "BAN") {  
    if (req.http.X-Magento-Tags-Pattern) {
        ban("obj.http.X-Magento-Tags ~ " + req.http.X-Magento-Tags-Pattern);
    }
    return (synth(200, "Banned"));
}
</code></pre>

<p>With soft purge we keep the idea of <strong>tag-based invalidation</strong>, but instead of hard-removing objects, we mark them as stale using <code>xkey.softpurge()</code>.</p>

<p>In <code>sub vcl_recv</code>, in the PURGE/BAN section, update it to something like this (adapt as needed to match your existing structure):</p>

<pre><code class="language-vcl">sub vcl_recv {  
    # ... your existing logic (health checks, static assets, etc.)

    if (req.method == "PURGE") {
        # Full Page Cache flush – still a hard ban
        if (req.http.X-Magento-Tags-Pattern == ".*") {
            ban("obj.http.X-Magento-Tags ~ " + req.http.X-Magento-Tags-Pattern);
        }
        # Soft purge for specific tags
        elseif (req.http.X-Magento-Tags-Pattern) {
            # Example: "((^|,)cat_c(,|$))|((^|,)cat_p(,|$))" → "cat_c cat_p"
            set req.http.X-Magento-Tags-Pattern =
                regsuball(req.http.X-Magento-Tags-Pattern, "[^a-zA-Z0-9_-]+", " ");

            # Trim spaces
            set req.http.X-Magento-Tags-Pattern =
                regsuball(req.http.X-Magento-Tags-Pattern, "(^\s*)|(\s*$)", "");

            # Soft purge via xkey
            set req.http.n-gone = xkey.softpurge(req.http.X-Magento-Tags-Pattern);

            return (synth(200, "Invalidated " + req.http.n-gone + " objects"));
        }

        return (synth(200, "Purged"));
    }

    # ... rest of vcl_recv
}
</code></pre>

<p>What this does:</p>

<ul>
<li>Takes the original <code>X-Magento-Tags-Pattern</code> (which is usually a regex) and turns it into a <strong>space-separated list of tags</strong> (<code>cat_c cat_p</code>, <code>cat_p_123</code>, etc.). That’s the format <code>xkey</code> expects.</li>
<li>Calls <code>xkey.softpurge()</code> to mark all matching objects as <strong>stale</strong> instead of fully purged.</li>
<li>Returns a synthetic 200 response like <code>Invalidated 5 objects</code> – very handy for debugging with <code>curl</code>.</li>
</ul>

<p>For a <strong>full flush</strong> (<code>X-Magento-Tags-Pattern: .*</code>) we still use a classic <code>ban()</code>. In most Magento use cases, that’s exactly what you want when doing a global FPC flush.</p>

<hr>

<h2 id="4configuregraceandxkeytagsinvcl_backend_response">4. Configure <code>grace</code> and xkey tags in <code>vcl_backend_response</code></h2>

<p>Soft purge is only useful if Varnish is allowed to serve <strong>stale</strong> content while it fetches a fresh version in the background. We control that with <code>beresp.grace</code>.</p>

<p>In <code>sub vcl_backend_response</code>, add or update something like this:</p>

<pre><code class="language-vcl">sub vcl_backend_response {  
    # Time window during which stale content can be served
    set beresp.grace = 3h;

    # Use xkey based on Magento tags
    if (beresp.http.X-Magento-Tags) {
        # Expose grace for debugging
        set beresp.http.Grace = beresp.grace;

        # Turn comma-separated X-Magento-Tags into a space-separated list for xkey
        set beresp.http.xkey = regsuball(beresp.http.X-Magento-Tags, ",", " ");

        # Optionally reset X-Magento-Tags to a generic value
        # so you don't leak massive tag lists further downstream
        set beresp.http.X-Magento-Tags = "fpc";
    }

    # ... rest of vcl_backend_response
}
</code></pre>

<p>Key points:</p>

<ul>
<li><code>beresp.grace = 3h;</code> tells Varnish: even after TTL expires, you may serve this object as <strong>stale</strong> for up to 3 hours while you fetch a fresh version.</li>
<li><code>beresp.http.xkey</code> is populated from Magento’s <code>X-Magento-Tags</code> header. That’s what <code>xkey.softpurge()</code> uses to find and mark the right objects as stale.</li>
<li>Be careful not to <strong>duplicate</strong> <code>beresp.grace</code> if your Magento-generated VCL already sets it. Adjust the existing line instead of adding a second one.</li>
</ul>

<hr>

<h2 id="5reloadvarnishwiththenewvcl">5. Reload Varnish with the new VCL</h2>

<p>After editing your VCL:</p>

<pre><code class="language-bash">sudo systemctl reload varnish  
# or, if reload is not configured:
sudo systemctl restart varnish  
</code></pre>

<p>Confirm that the active VCL is loaded:</p>

<pre><code class="language-bash">sudo varnishadm vcl.list  
</code></pre>

<hr>

<h2 id="6testingsoftpurge">6. Testing Soft Purge</h2>

<h3 id="61checktagsandxkeyheaders">6.1. Check tags and xkey headers</h3>

<p>First, make a normal GET request to a Magento page (for example, a product page) and inspect the headers:</p>

<pre><code class="language-bash">curl -I https://example.com/some-product.html  
</code></pre>

<p>You should see something along the lines of:</p>

<ul>
<li><code>X-Magento-Tags: cat_p_169745,cat_c_23,...</code></li>
<li><code>xkey: cat_p_169745 cat_c_23 ...</code></li>
<li><code>Grace: 3h</code></li>
</ul>

<p>Header names can vary slightly depending on your VCL, but the important part is that <code>xkey</code> is populated with Magento’s tag list and <code>Grace</code> is set.</p>

<h3 id="62softpurgingaspecifictag">6.2. Soft purging a specific tag</h3>

<p>Now trigger a soft purge via PURGE + <code>X-Magento-Tags-Pattern</code>:</p>

<pre><code class="language-bash">curl -k -X PURGE \  
  -H 'X-Magento-Tags-Pattern: cat_p_169745' \
  http://varnish.internal:6081/some-product-url
</code></pre>

<p>If everything is wired correctly, you’ll get a synthetic response like:</p>

<pre><code class="language-html">&lt;!DOCTYPE html&gt;  
&lt;html&gt;  
&lt;head&gt;  
&lt;title&gt;200 Invalidated 1 objects&lt;/title&gt;  
&lt;/head&gt;  
&lt;body&gt;  
&lt;h1&gt;Error 200 Invalidated 1 objects&lt;/h1&gt;  
&lt;p&gt;Invalidated 1 objects&lt;/p&gt;  
&lt;h3&gt;Guru Meditation:&lt;/h3&gt;  
&lt;p&gt;XID: 65566&lt;/p&gt;  
&lt;hr&gt;  
&lt;p&gt;Varnish cache server&lt;/p&gt;  
&lt;/body&gt;  
&lt;/html&gt;  
</code></pre>

<p>Important details:</p>

<ul>
<li>HTTP status is <strong>200</strong> (this is not a real error, just Varnish’s synthetic page style).</li>
<li>The text says <code>Invalidated 1 objects</code> → <code>xkey.softpurge()</code> has marked 1 object as stale.</li>
</ul>

<h3 id="63observingbehaviorbeforeaftersoftpurge">6.3. Observing behavior before/after soft purge</h3>

<p>Typical flow:</p>

<ol>
<li>Request <strong>before</strong> soft purge → <code>HIT</code>, fast response.  </li>
<li>After soft purge → first client still gets a <strong>fast response</strong> (<code>HIT</code> with stale content), while Varnish refreshes the backend in the background.  </li>
<li>Subsequent requests → now get the fresh version from the updated backend, still as cache <strong>HIT</strong>.</li>
</ol>

<p>If you see <code>MISS</code> and slow responses instead, double-check:</p>

<ul>
<li>Is <code>beresp.grace</code> actually set (and not overridden later)?</li>
<li>Does some other part of your VCL do <code>return (pass);</code> for that URL?  </li>
<li>Is some custom Magento/Varnish module doing extra purges or cache-bypass logic?</li>
</ul>

<hr>

<h2 id="7notesforproductionenvironments">7. Notes for Production Environments</h2>

<p>A few practical tips before rolling this into production:</p>

<ul>
<li><p>If you’re using a <strong>Magento Varnish extension</strong> (e.g. Jetrails Varnish extension or similar):</p>

<ul><li>Always upgrade it to the latest version first</li>
<li>Regenerate the VCL from Magento</li>
<li>Then apply the soft purge modifications on top of that generated VCL</li></ul></li>
<li><p>If you run a <strong>multi-node Varnish cluster</strong>:</p>

<ul><li>Make sure purge/softpurge is executed on <strong>all nodes</strong> (via shared admin endpoint, orchestration, or <code>varnishadm</code> over SSH).</li></ul></li>
<li><p>Be reasonable with <code>grace</code> values:</p>

<ul><li><code>3h</code> is often a sweet spot for most e‑commerce sites</li>
<li>For extremely dynamic content, you might want shorter grace (e.g. 30–60 minutes)</li>
<li>You can also tune grace by path (e.g. longer grace for category pages, shorter for carts/checkout which should normally bypass Varnish anyway)</li></ul></li>
</ul>
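<p>If you want to tune grace by path, a minimal sketch could look like the following (the URL patterns are illustrative; adapt them to your store's actual routes):</p>

<pre><code class="language-vcl">sub vcl_backend_response {
    # Default stale window
    set beresp.grace = 3h;

    # Category pages change rarely: allow a longer stale window
    if (bereq.url ~ "^/category/") {
        set beresp.grace = 6h;
    }

    # Highly dynamic areas get a short window
    # (cart/checkout should normally bypass Varnish entirely)
    if (bereq.url ~ "^/(checkout|customer)/") {
        set beresp.grace = 30m;
    }
}
</code></pre>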

<hr>

<h2 id="8references">8. References</h2>

<ul>
<li><p>Varnish "Grace" documentation: <br>
<a href="https://varnish-cache.org/docs/trunk/users-guide/vcl-grace.html">https://varnish-cache.org/docs/trunk/users-guide/vcl-grace.html</a></p></li>
<li><p><code>vmod_xkey</code> on GitHub: <br>
<a href="https://github.com/varnish/varnish-modules/blob/master/src/vmod_xkey.vcc">https://github.com/varnish/varnish-modules/blob/master/src/vmod_xkey.vcc</a></p></li>
</ul>

<hr>

<p>If you’re debugging a high-traffic Magento 2 store and notice that Varnish is enabled but your users still see slow page loads after product updates, switching from <strong>hard purges</strong> to <strong>soft purges with xkey</strong> is one of the highest ROI changes you can make.</p>]]></content:encoded></item><item><title><![CDATA[Debugging "Invalid Argument" Error in Magento 2 Google ReCaptcha]]></title><description><![CDATA[<p>So today I spent some quality time debugging a frustrating ReCaptcha issue on a Magento 2 store. If you've ever seen that dreaded "Invalid argument" message inside the ReCaptcha widget, this post is for you.</p>

<p><strong>The Problem</strong> <br></p>

<p>A customer contacted me about an issue on their production Magento 2.4.</p>]]></description><link>https://nemanja.io/debugging-invalid-argument-error-in-magento-2-google-recaptcha/</link><guid isPermaLink="false">d2427e17-4c95-4b19-a7db-1824d9d9e25e</guid><category><![CDATA[magento 2 recaptcha]]></category><category><![CDATA[captcha]]></category><category><![CDATA[recaptcha]]></category><category><![CDATA[google recaptcha]]></category><category><![CDATA[google recaptcha v2]]></category><category><![CDATA[google recaptcha v3]]></category><category><![CDATA[magento 2.4.8 recaptcha]]></category><dc:creator><![CDATA[Nemanja Djuric]]></dc:creator><pubDate>Sun, 18 Jan 2026 09:55:39 GMT</pubDate><media:content url="http://nemanja.io/content/images/2026/01/50_-_magento_2_store_credit_and_refund_extension_1_1.png" medium="image"/><content:encoded><![CDATA[<img src="http://nemanja.io/content/images/2026/01/50_-_magento_2_store_credit_and_refund_extension_1_1.png" alt="Debugging "Invalid Argument" Error in Magento 2 Google ReCaptcha"><p>So today I spent some quality time debugging a frustrating ReCaptcha issue on a Magento 2 store. If you've ever seen that dreaded "Invalid argument" message inside the ReCaptcha widget, this post is for you.</p>

<p><strong>The Problem</strong> <br></p>

<p>A customer contacted me about an issue on their production Magento 2.4.8 store - the login page was showing a strange error in the ReCaptcha widget. Instead of the normal "protected by reCAPTCHA" badge, there was a red "Invalid argument" message.</p>

<p>To debug this safely without touching production, I cloned the entire store (database and files) to our staging environment using Magento 2 Gitpod on ona.com (<a href="https://github.com/nemke82/magento2gitpod">https://github.com/nemke82/magento2gitpod</a>). The issue reproduced perfectly on staging - same error, same behavior.</p>

<p>I created fresh Google ReCaptcha v3 keys specifically for the staging domain and configured them in Magento admin. But the error persisted. <br>
The weird part? I tested the same ReCaptcha keys on a fresh Magento 2.4.8 installation and they worked perfectly. So the issue was specific to this codebase, not the keys or domain configuration.</p>

<p><img src="https://nemanja.io/content/images/2026/01/2026-01-18_10-49.png" alt="Debugging &quot;Invalid Argument&quot; Error in Magento 2 Google ReCaptcha"></p>

<p>The ReCaptcha widget was there, but instead of the nice "protected by reCAPTCHA" badge, I got this ugly red error message.</p>

<p><strong>The Investigation</strong> <br></p>

<p>First stop - Magento logs. I checked <code>var/log/exception.log</code> and <code>var/log/system.log</code>. Found some interesting errors about a module called <code>Mageplaza_GoogleRecaptcha</code>:</p>

<pre><code>Class "Mageplaza\GoogleRecaptcha\Model\System\Config\Source\Frontend\Forms" not found  
Class "Mageplaza\GoogleRecaptcha\Observer\Captcha" does not exist  
</code></pre>

<p>Hmm. The module was being called but didn't exist. 🤔 <br>
My guess: the customer had the module installed and removed it at some point, but the usual leftovers stayed behind... meh, Magento...</p>

<p><strong>Root Cause #1: Orphaned Module Configuration</strong> <br></p>

<p>Turns out, the Mageplaza GoogleRecaptcha extension was previously installed on this store but later removed. The problem? The configuration data was still sitting in the database:</p>

<pre><code>SELECT * FROM core_config_data WHERE path LIKE 'googlerecaptcha%';  
</code></pre>

<pre><code>config_id  scope     scope_id  path                                      value                                         updated_at  
1379       default   0         googlerecaptcha/general/enabled           1                                             2026-01-16 12:24:41  
1380       default   0         googlerecaptcha/general/language          sr                                            2026-01-16 12:24:41  
1381       default   0         googlerecaptcha/general/invisible/api_key NULL                                          2026-01-16 12:27:21  
1382       default   0         googlerecaptcha/general/invisible/api_secret NULL                                       2026-01-16 12:27:21  
1383       default   0         googlerecaptcha/general/visible/api_key   0:3:SijGQlxN...encrypted...                   2026-01-16 12:27:21  
1384       default   0         googlerecaptcha/general/visible/api_secret 0:3:SNydl+LZ...encrypted...                  2026-01-16 12:27:21  
1385       default   0         googlerecaptcha/backend/enabled           0                                             2024-10-31 16:24:06  
1386       default   0         googlerecaptcha/frontend/enabled          1                                             2026-01-16 12:24:41  
1387       default   0         googlerecaptcha/frontend/type             visible                                       2026-01-16 12:24:41  
1388       default   0         googlerecaptcha/frontend/forms            body.customer-account-login #login-form...    2026-01-16 12:24:41  
1389       default   0         googlerecaptcha/frontend/position         0                                             2026-01-14 21:23:20  
1390       default   0         googlerecaptcha/frontend/theme            dark                                          2026-01-16 12:24:41  
2048       default   0         googlerecaptcha/frontend/size             compact                                       2026-01-16 12:24:41  
</code></pre>

<p>The config showed <code>googlerecaptcha/frontend/enabled = 1</code> - so Magento was trying to use a module that no longer existed!</p>

<p><strong>Root Cause #2: Invalid Language Code</strong></p>

<p>While digging through the config, I noticed something else odd. The ReCaptcha language setting was set to "0":</p>

<pre><code>recaptcha_frontend/type_recaptcha_v3/lang = 0  
</code></pre>

<p>Now, with the default English locale this might go unnoticed, but this customer had switched the frontend to a custom language (Serbian).</p>

<p>That's not a valid language code! It should be something like "en", "sr", or empty string for auto-detect. This invalid value was being passed to Google's API as hl=0, causing the "Invalid argument" error.</p>

<pre><code>path                                        value  
recaptcha_frontend/type_recaptcha/lang      0  
recaptcha_frontend/type_invisible/lang      0  
recaptcha_frontend/type_recaptcha_v3/lang   0  
recaptcha_backend/type_recaptcha/lang       0  
recaptcha_backend/type_recaptcha_v3/lang    0  
</code></pre>
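<p>For context, that language setting ends up as the <code>hl</code> query parameter on Google's <code>api.js</code> loader, roughly like this (illustrative URLs):</p>

<pre><code>https://www.google.com/recaptcha/api.js?hl=0    &lt;-- invalid value, triggers "Invalid argument"
https://www.google.com/recaptcha/api.js?hl=sr   &lt;-- valid language code
</code></pre>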

<p><strong>The Fix</strong> <br></p>

<p>Two simple database updates plus a cache flush fixed everything: <br>
1. Disable the orphaned Mageplaza config:  </p>

<pre><code>UPDATE core_config_data SET value = '0' WHERE path = 'googlerecaptcha/frontend/enabled';  
UPDATE core_config_data SET value = '0' WHERE path = 'googlerecaptcha/general/enabled';  
</code></pre>

<ol start="2">
<li>Fix the invalid language codes:  </li>
</ol>

<pre><code>UPDATE core_config_data SET value = '' WHERE path LIKE '%recaptcha%lang%' AND value = '0';  
</code></pre>

<ol start="3">
<li>Clear the cache (or roll over deployment entirely):  </li>
</ol>

<pre><code>php bin/magento cache:flush  
</code></pre>

<p>And just like that... <br>
<img src="https://nemanja.io/content/images/2026/01/2026-01-18_10-47.png" alt="Debugging &quot;Invalid Argument&quot; Error in Magento 2 Google ReCaptcha"></p>

<p>The beautiful blue "protected by reCAPTCHA" badge appeared! 🎉</p>

<p><strong>Lessons Learned</strong></p>

<ul>
<li><strong>Always clean up after removing extensions.</strong> Don't just delete the module files; remove the config data from the <code>core_config_data</code> table too.</li>
<li><strong>Check for module conflicts.</strong> When you have multiple ReCaptcha solutions (Magento native + third-party), they can conflict even if one is "disabled".</li>
<li><strong>Validate your config values.</strong> A simple "0" instead of an empty string or a proper value can break things in unexpected ways.</li>
<li><strong>Fresh installs are your friend.</strong> Testing on a clean Magento installation helped confirm the issue was codebase-specific, not a key/domain problem.</li>
</ul>

<p><strong>Quick Debug Checklist for ReCaptcha Issues</strong> <br>
If you're seeing "Invalid argument" in your Magento ReCaptcha:</p>

<p>[ ] Check <code>core_config_data</code> for orphaned third-party ReCaptcha configs<br>
[ ] Verify language codes aren't set to invalid values like "0"<br>
[ ] Ensure only ONE ReCaptcha solution is active <br>
[ ] Verify your keys match the ReCaptcha type (v2 vs v3) <br>
[ ] Confirm your domain is registered in the Google ReCaptcha console <br></p>
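<p>The first two checklist items can be checked straight from the database; for example (using the same table and config paths shown earlier in this post):</p>

<pre><code>-- Orphaned third-party ReCaptcha configs
SELECT path, value FROM core_config_data WHERE path LIKE 'googlerecaptcha%';

-- Language codes set to the invalid value "0"
SELECT path, value FROM core_config_data
WHERE path LIKE '%recaptcha%lang%' AND value = '0';
</code></pre>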

<p>Hope this saves someone a few hours of debugging!</p>]]></content:encoded></item><item><title><![CDATA[Meet VeloServe: Speed Redefined]]></title><description><![CDATA[<h1 id="veloserveahighperformancewebserverwritteninrust">VeloServe: A High-Performance Web Server written in Rust</h1>

<p><strong>TL;DR:</strong> I spent a day building VeloServe, a high-performance web server with embedded PHP support, using Cursor Pro and Claude Opus 4.5. It's now open source and you can try it in one command.</p>

<p>Visit: <a href="https://www.veloserve.io">https://www.veloserve.io</a> for</p>]]></description><link>https://nemanja.io/building-veloserve-rust-web-server/</link><guid isPermaLink="false">3f9a37ec-8b71-4779-ae22-1e419ab935f1</guid><category><![CDATA[nginx]]></category><category><![CDATA[apache]]></category><category><![CDATA[Rust]]></category><category><![CDATA[Web Development]]></category><category><![CDATA[DevOps]]></category><category><![CDATA[Performance]]></category><category><![CDATA[Open Source]]></category><category><![CDATA[AI]]></category><category><![CDATA[AI Coding]]></category><category><![CDATA[WordPress]]></category><category><![CDATA[SAPI]]></category><category><![CDATA[PHP]]></category><category><![CDATA[VeloServer]]></category><category><![CDATA[Velo]]></category><category><![CDATA[php-fpm]]></category><category><![CDATA[litespeed]]></category><dc:creator><![CDATA[Nemanja Djuric]]></dc:creator><pubDate>Sat, 06 Dec 2025 18:34:03 GMT</pubDate><media:content url="http://nemanja.io/content/images/2025/12/2025-12-06_19-32-4.png" medium="image"/><content:encoded><![CDATA[<h1 id="veloserveahighperformancewebserverwritteninrust">VeloServe: A High-Performance Web Server written in Rust</h1>

<img src="http://nemanja.io/content/images/2025/12/2025-12-06_19-32-4.png" alt="Meet VeloServe: Speed Redefined"><p><strong>TL;DR:</strong> I spent a day building VeloServe, a high-performance web server with embedded PHP support, using Cursor Pro and Claude Opus 4.5. It's now open source and you can try it in one command.</p>

<p>Visit: <a href="https://www.veloserve.io">https://www.veloserve.io</a> for more details.</p>

<hr>

<h2 id="thebeginningofawildidea">The Beginning of a Wild Idea</h2>

<p>I've been frustrated with the traditional web server stack for years. Nginx is great, Apache is reliable, but setting up PHP-FPM, configuring sockets, tuning workers... it's a lot. What if there was a single binary that just worked, and that could scale on any cloud provider or data center, or simply run locally for development?</p>

<p>That's when I decided to build <strong>VeloServe</strong> — a modern web server written in Rust with PHP embedded directly into it. No PHP-FPM. No separate processes. Just speed.</p>

<hr>

<h2 id="thesetupcursorproclaudeopus45onacom">The Setup: Cursor Pro + Claude Opus 4.5 + Ona.com</h2>

<p>Here's my development setup that made this possible:</p>

<h3 id="cursorprowithremotessh">Cursor Pro with Remote SSH</h3>

<p>I use Cursor as my primary IDE. It's VS Code on steroids with AI built-in. The game-changer? Remote SSH connections. I connected Cursor to an <strong>Ona.com</strong> workspace (formerly Gitpod), which gave me a fully configured development environment in the cloud.</p>

<p><strong>Why Ona.com?</strong> It spins up a complete Linux environment with Docker, all the tools I need, and most importantly — I can close my laptop and pick up exactly where I left off from any device.</p>

<h3 id="claudeopus45asmypairprogrammer">Claude Opus 4.5 as My Pair Programmer</h3>

<p>The real magic happened with Claude Opus 4.5 through Cursor's AI features. But here's the thing — I didn't just blindly accept AI suggestions.</p>

<p>Every piece of Rust code, I verified against the official Rust documentation. Every Tokio async pattern, I cross-referenced with the Tokio docs. Every Hyper HTTP handling, I checked against Hyper's examples.</p>

<p>This is what I call <strong>Vibe Coding</strong> — you work with the AI, not for it. The AI suggests, you verify, you refine, you ship.</p>

<hr>

<h2 id="whatisveloserve">What Is VeloServe?</h2>

<p>VeloServe is a web server that:</p>

<ul>
<li><strong>Runs PHP inside itself</strong> — no external PHP-FPM process</li>
<li><strong>Written in Rust</strong> — memory safe, blazing fast</li>
<li><strong>Supports two modes:</strong>
<ul><li><strong>CGI Mode</strong> — uses php-cgi, works everywhere</li>
<li><strong>SAPI Mode</strong> — PHP embedded via FFI, 10-100x faster</li></ul></li>
<li><strong>WordPress/Magento ready</strong> — intelligent caching, clean URLs</li>
<li><strong>Single binary</strong> — just download and run</li>
</ul>

<h2 id="thenumbers">The Numbers</h2>

<p>Performance comparison:</p>

<ul>
<li>🚀 <strong>VeloServe (SAPI Mode)</strong>: ~10,000 requests/sec, ~1ms latency, 5x faster than traditional setups</li>
<li>⚡ <strong>Nginx + PHP-FPM (traditional)</strong>: ~2,000 requests/sec, ~10ms latency, the industry-standard baseline</li>
<li>🐌 <strong>VeloServe (CGI Mode)</strong>: ~500 requests/sec, ~50ms latency, compatibility mode that works everywhere</li>
</ul>

<hr>

<h2 id="thedevelopmentjourney">The Development Journey</h2>


<h3 id="day1corehttpserver">Day 1: Core HTTP Server</h3>

<p>We started with the basics — a Tokio-based async HTTP server using Hyper. The initial commit was just serving static files with proper MIME types, ETag headers, and conditional requests.</p>

<p>I kept the Hyper migration guide open in another tab the entire time. Hyper 1.0 changed a lot, and Claude's training data didn't always have the latest patterns. <strong>Always verify.</strong></p>

<h3 id="day2phpintegration">Day 2: PHP Integration</h3>

<p>This is where it got interesting. We implemented PHP execution in two ways:</p>

<p><strong>CGI Mode</strong> — spawn <code>php-cgi</code> for each request, pass environment variables, pipe POST data through stdin</p>

<p><strong>SAPI Mode</strong> — use Rust FFI to link against <code>libphp.so</code> and execute PHP in-process</p>

<p>The FFI work was tricky. We had to:</p>

<ul>
<li>Create a <code>build.rs</code> that detects PHP installation via <code>php-config</code></li>
<li>Write FFI bindings for <code>php_embed_init()</code>, <code>php_execute_script()</code>, etc.</li>
<li>Handle the PHP lifecycle correctly</li>
</ul>

<p>I spent hours in the PHP Internals Book and the Rust FFI guide making sure we weren't going to cause memory leaks or segfaults.</p>

<h3 id="day3wordpressdemodeployment">Day 3: WordPress Demo &amp; Deployment</h3>

<p>The ultimate test — can it run WordPress? We set up:</p>

<ul>
<li>WordPress with SQLite (no MySQL needed for demos)</li>
<li>Automatic URL detection for cloud environments</li>
<li>One-click deployment on Ona.com</li>
</ul>

<p>When I saw the WordPress installation wizard load through VeloServe with <strong>~1ms PHP execution time</strong>, I knew we had something special.</p>

<hr>

<h2 id="tryityourself">Try It Yourself</h2>

<h3 id="onelineinstall">One-Line Install</h3>

<pre><code class="language-bash">curl -sSL https://veloserve.io/install.sh | bash  
</code></pre>

<h3 id="quicktest">Quick Test</h3>

<pre><code class="language-bash">mkdir -p /tmp/mysite  
echo '&lt;?php phpinfo();' &gt; /tmp/mysite/index.php  
veloserve start --root /tmp/mysite --listen 0.0.0.0:8080  
</code></pre>

<p>Visit <code>http://localhost:8080</code> and you'll see PHP running through VeloServe.</p>

<p>Also a lot of useful CLI commands you can find on our Documentation pages or in Readme on Github repo: <br>
<a href="https://github.com/veloserve/veloserve?tab=readme-ov-file#cli-tool">https://github.com/veloserve/veloserve?tab=readme-ov-file#cli-tool</a></p>

<p>You can start ready-made WordPress demo: <br>
<a href="https://github.com/veloserve/veloserve?tab=readme-ov-file#wordpress-demo-features">https://github.com/veloserve/veloserve?tab=readme-ov-file#wordpress-demo-features</a></p>

<h3 id="tryinthecloudnoinstall">Try in the Cloud (No Install)</h3>

<p>Don't want to install anything? Try it instantly:</p>

<p><a href="https://ona.com/#https://github.com/veloserve/veloserve"><img src="https://img.shields.io/badge/Open%20in-Ona.com-ff6b35?style=for-the-badge" alt="Meet VeloServe: Speed Redefined" title=""></a></p>

<hr>

<h2 id="whatilearned">What I Learned</h2>

<h3 id="1aiisaforcemultipliernotareplacement">1. AI is a Force Multiplier, Not a Replacement</h3>

<p>Claude helped me write code 10x faster, but I still needed to understand what the code was doing. When we hit a Windows build issue with Unix-only signals, I knew immediately how to fix it with <code>#[cfg(unix)]</code> because I understood Rust's conditional compilation.</p>

<h3 id="2alwaysverifyagainstofficialdocs">2. Always Verify Against Official Docs</h3>

<p>AI models are trained on data that can be outdated. The Rust ecosystem moves fast. Always have the docs open:</p>

<ul>
<li><a href="https://docs.rs">docs.rs</a> for crate documentation</li>
<li><a href="https://doc.rust-lang.org/book/">The Rust Book</a> for language features</li>
<li>Official project documentation for frameworks</li>
</ul>

<h3 id="3clouddevelopmentenvironmentsaregamechangers">3. Cloud Development Environments Are Game-Changers</h3>

<p>Working through Cursor's Remote SSH to Ona.com meant:</p>

<ul>
<li>Consistent environment across devices</li>
<li>No "works on my machine" problems</li>
<li>Easy to share and reproduce</li>
<li>Powerful cloud hardware for compilation</li>
</ul>

<h3 id="4shipearlyiteratefast">4. Ship Early, Iterate Fast</h3>

<p>I went from zero to a working web server with WordPress support in one day. It's not perfect — the SAPI mode needs more work, we need better error handling, and there's always more optimization to do. But it works, it's open source, and the community can help improve it.</p>

<hr>

<h2 id="resources">Resources</h2>

<ul>
<li><strong>Website:</strong> <a href="https://veloserve.io">veloserve.io</a></li>
<li><strong>GitHub:</strong> <a href="https://github.com/veloserve/veloserve">github.com/veloserve/veloserve</a></li>
<li><strong>Documentation:</strong> <a href="https://github.com/veloserve/veloserve/tree/main/docs">github.com/veloserve/veloserve/tree/main/docs</a></li>
<li><strong>Configuration Reference:</strong> <a href="https://github.com/veloserve/veloserve/tree/main/docs/configuration.md">docs/configuration.md</a></li>
<li><strong>Environment Variables:</strong> <a href="https://github.com/veloserve/veloserve/tree/main/docs/environment-variables.md">docs/environment-variables.md</a></li>
</ul>

<hr>

<h2 id="thestack">The Stack</h2>

<p>For those curious about the exact setup:</p>

<ul>
<li><strong>IDE:</strong> Cursor Pro with Remote SSH</li>
<li><strong>AI:</strong> Claude Opus 4.5 (via Cursor)</li>
<li><strong>Cloud Environment:</strong> Ona.com (Gitpod successor)</li>
<li><strong>Language:</strong> Rust 1.75+</li>
<li><strong>Key Crates:</strong> Tokio, Hyper, tokio-rustls</li>
<li><strong>Website Hosting:</strong> Vercel</li>
<li><strong>Domain:</strong> veloserve.io</li>
</ul>

<hr>

<h2 id="whatsnext">What's Next</h2>

<p>VeloServe 1.0.0 is just the beginning. On the roadmap:</p>

<ul>
<li>Complete SAPI mode implementation</li>
<li>FastCGI protocol support</li>
<li>HTTP/3 (QUIC)</li>
<li>Built-in Let's Encrypt</li>
<li>Configuration hot-reload</li>
<li>Prometheus metrics</li>
</ul>

<p>The development roadmap is here: <br>
<a href="https://github.com/veloserve/veloserve?tab=readme-ov-file#%EF%B8%8F-development-roadmap">https://github.com/veloserve/veloserve?tab=readme-ov-file#%EF%B8%8F-development-roadmap</a></p>

<p>If you're interested in contributing or just want to try it out, head to <a href="https://veloserve.io">veloserve.io</a> and give it a spin.</p>

<p>And if you build something cool with it, let me know on Twitter/X or open an issue on GitHub!</p>

<p><a href="https://github.com/veloserve/veloserve">https://github.com/veloserve/veloserve</a></p>

<p>Good luck, happy testing!</p>]]></content:encoded></item><item><title><![CDATA[How to quickly fix broken RPM packages in Fedora]]></title><description><![CDATA[<p>Hello,</p>

<p>It's been a while since my last post. I wanted to share today something really interesting that happened to me while upgrading my Fedora OS from version 40 to 42 (latest). Back few yours ago I was heavily using Desktop called Deepin (<a href="https://www.deepin.org/en/dde/">https://www.deepin.org/en/dde/</a>), and</p>]]></description><link>https://nemanja.io/how-to-quickly-fix-broken-rpm-packages-in-fedora/</link><guid isPermaLink="false">456fb212-5195-4647-b306-4ac7653828a0</guid><category><![CDATA[rpm]]></category><category><![CDATA[rpm packages]]></category><category><![CDATA[broken rpm]]></category><category><![CDATA[fix deepin editor]]></category><category><![CDATA[deepin editor]]></category><category><![CDATA[deepin-editor]]></category><dc:creator><![CDATA[Nemanja Djuric]]></dc:creator><pubDate>Fri, 02 May 2025 15:03:55 GMT</pubDate><media:content url="http://nemanja.io/content/images/2025/05/May-2--2025--05_02_12-PM.png" medium="image"/><content:encoded><![CDATA[<img src="http://nemanja.io/content/images/2025/05/May-2--2025--05_02_12-PM.png" alt="How to quickly fix broken RPM packages in Fedora"><p>Hello,</p>

<p>It's been a while since my last post. Today I want to share something really interesting that happened to me while upgrading my Fedora OS from version 40 to 42 (the latest). A few years ago I was heavily using a desktop environment called Deepin (<a href="https://www.deepin.org/en/dde/">https://www.deepin.org/en/dde/</a>), and until switching to KDE Plasma I had never noticed how much you depend on the tools each desktop ships with. For me, that tool is deepin-editor.</p>

<p><a href="https://github.com/linuxdeepin/deepin-editor">https://github.com/linuxdeepin/deepin-editor</a></p>

<p>Why do I like it? What is the difference between this tool and others? Honestly, I do not know. Maybe I got so used to it that it simply matters to me and I want to keep it (Keeper).</p>

<p>So the story begins: I tried to upgrade my Fedora 40, but sadly dnf upgrade gave me errors related to deepin-terminal and deepin-editor :(</p>

<pre><code>Running transaction check  
Transaction check succeeded.  
Running transaction test  
The downloaded packages were saved in cache until the next successful transaction.  
You can remove cached packages by executing 'dnf clean packages'.  
Error: Transaction test error:  
  file /usr/share/deepin-terminal/translations/deepin-terminal_az.qm from install of deepin-terminal-6.0.15-1.fc42.x86_64 conflicts with file from package deepin-terminal-data-6.0.14-2.fc40.noarch
  file /usr/share/deepin-terminal/translations/deepin-terminal_bo.qm from install of deepin-terminal-6.0.15-1.fc42.x86_64 conflicts with file from package deepin-terminal-data-6.0.14-2.fc40.noarch
  file /usr/share/deepin-terminal/translations/deepin-terminal_br.qm from install of deepin-terminal-6.0.15-1.fc42.x86_64 conflicts with file from package deepin-terminal-data-6.0.14-2.fc40.noarch
  file /usr/share/deepin-terminal/translations/deepin-terminal_ca.qm from install of deepin-terminal-6.0.15-1.fc42.x86_64 conflicts with file from package deepin-terminal-data-6.0.14-2.fc40.noarch
  file /usr/share/deepin-terminal/translations/deepin-terminal_cs.qm from install of deepin-terminal-6.0.15-1.fc42.x86_64 conflicts with file from package deepin-terminal-data-6.0.14-2.fc40.noarch
  file /usr/share/deepin-terminal/translations/deepin-terminal_de.qm from install of deepin-terminal-6.0.15-1.fc42.x86_64 conflicts with file from package deepin-terminal-data-6.0.14-2.fc40.noarch
  file /usr/share/deepin-terminal/translations/deepin-terminal_el.qm from install of deepin-terminal-6.0.15-1.fc42.x86_64 conflicts with file from package deepin-terminal-data-6.0.14-2.fc40.noarch
  file /usr/share/deepin-terminal/translations/deepin-terminal_es.qm from install of deepin-terminal-6.0.15-1.fc42.x86_64 conflicts with file from package deepin-terminal-data-6.0.14-2.fc40.noarch
  file /usr/share/deepin-terminal/translations/deepin-terminal_fi.qm from install of deepin-terminal-6.0.15-1.fc42.x86_64 conflicts with file from package deepin-terminal-data-6.0.14-2.fc40.noarch
  file /usr/share/deepin-terminal/translations/deepin-terminal_fr.qm from install of deepin-terminal-6.0.15-1.fc42.x86_64 conflicts with file from package deepin-terminal-data-6.0.14-2.fc40.noarch
  file /usr/share/deepin-terminal/translations/deepin-terminal_gl_ES.qm from install of deepin-terminal-6.0.15-1.fc42.x86_64 conflicts with file from package deepin-terminal-data-6.0.14-2.fc40.noarch
  file /usr/share/deepin-terminal/translations/deepin-terminal_hi_IN.qm from install of deepin-terminal-6.0.15-1.fc42.x86_64 conflicts with file from package deepin-terminal-data-6.0.14-2.fc40.noarch
  file /usr/share/deepin-terminal/translations/deepin-terminal_hr.qm from install of deepin-terminal-6.0.15-1.fc42.x86_64 conflicts with file from package deepin-terminal-data-6.0.14-2.fc40.noarch
  file /usr/share/deepin-terminal/translations/deepin-terminal_hu.qm from install of deepin-terminal-6.0.15-1.fc42.x86_64 conflicts with file from package deepin-terminal-data-6.0.14-2.fc40.noarch
  file /usr/share/deepin-terminal/translations/deepin-terminal_id.qm from install of deepin-terminal-6.0.15-1.fc42.x86_64 conflicts with file from package deepin-terminal-data-6.0.14-2.fc40.noarch
  file /usr/share/deepin-terminal/translations/deepin-terminal_it.qm from install of deepin-terminal-6.0.15-1.fc42.x86_64 conflicts with file from package deepin-terminal-data-6.0.14-2.fc40.noarch
  file /usr/share/deepin-terminal/translations/deepin-terminal_ko.qm from install of deepin-terminal-6.0.15-1.fc42.x86_64 conflicts with file from package deepin-terminal-data-6.0.14-2.fc40.noarch
  file /usr/share/deepin-terminal/translations/deepin-terminal_ms.qm from install of deepin-terminal-6.0.15-1.fc42.x86_64 conflicts with file from package deepin-terminal-data-6.0.14-2.fc40.noarch
  file /usr/share/deepin-terminal/translations/deepin-terminal_nl.qm from install of deepin-terminal-6.0.15-1.fc42.x86_64 conflicts with file from package deepin-terminal-data-6.0.14-2.fc40.noarch
  file /usr/share/deepin-terminal/translations/deepin-terminal_pl.qm from install of deepin-terminal-6.0.15-1.fc42.x86_64 conflicts with file from package deepin-terminal-data-6.0.14-2.fc40.noarch
  file /usr/share/deepin-terminal/translations/deepin-terminal_pt.qm from install of deepin-terminal-6.0.15-1.fc42.x86_64 conflicts with file from package deepin-terminal-data-6.0.14-2.fc40.noarch
  file /usr/share/deepin-terminal/translations/deepin-terminal_pt_BR.qm from install of deepin-terminal-6.0.15-1.fc42.x86_64 conflicts with file from package deepin-terminal-data-6.0.14-2.fc40.noarch
  file /usr/share/deepin-terminal/translations/deepin-terminal_ro.qm from install of deepin-terminal-6.0.15-1.fc42.x86_64 conflicts with file from package deepin-terminal-data-6.0.14-2.fc40.noarch
  file /usr/share/deepin-terminal/translations/deepin-terminal_ru.qm from install of deepin-terminal-6.0.15-1.fc42.x86_64 conflicts with file from package deepin-terminal-data-6.0.14-2.fc40.noarch
  file /usr/share/deepin-terminal/translations/deepin-terminal_sq.qm from install of deepin-terminal-6.0.15-1.fc42.x86_64 conflicts with file from package deepin-terminal-data-6.0.14-2.fc40.noarch
  file /usr/share/deepin-terminal/translations/deepin-terminal_sr.qm from install of deepin-terminal-6.0.15-1.fc42.x86_64 conflicts with file from package deepin-terminal-data-6.0.14-2.fc40.noarch
  file /usr/share/deepin-terminal/translations/deepin-terminal_tr.qm from install of deepin-terminal-6.0.15-1.fc42.x86_64 conflicts with file from package deepin-terminal-data-6.0.14-2.fc40.noarch
  file /usr/share/deepin-terminal/translations/deepin-terminal_ug.qm from install of deepin-terminal-6.0.15-1.fc42.x86_64 conflicts with file from package deepin-terminal-data-6.0.14-2.fc40.noarch
  file /usr/share/deepin-terminal/translations/deepin-terminal_uk.qm from install of deepin-terminal-6.0.15-1.fc42.x86_64 conflicts with file from package deepin-terminal-data-6.0.14-2.fc40.noarch
  file /usr/share/deepin-terminal/translations/deepin-terminal_zh_HK.qm from install of deepin-terminal-6.0.15-1.fc42.x86_64 conflicts with file from package deepin-terminal-data-6.0.14-2.fc40.noarch
  file /usr/share/deepin-terminal/translations/deepin-terminal_zh_TW.qm from install of deepin-terminal-6.0.15-1.fc42.x86_64 conflicts with file from package deepin-terminal-data-6.0.14-2.fc40.noarch
</code></pre>

<p>I got another, similar error after removing deepin-terminal-data...</p>

<p>So I had to:  </p>

<pre><code>dnf remove deepin-terminal deepin-terminal-data deepin-editor  
</code></pre>

<p>Yay! Error is gone, but my favorite editor is gone as well... :(</p>

<p>After 15-20 minutes my fresh Fedora 42 booted, and I went back to the terminal:</p>

<pre><code>dnf install deepin-editor  
</code></pre>

<p>The breakdown is:  </p>

<pre><code>Installed:      qt5-qtbase-5.15.16-2.fc42.x86_64  
Required by:    deepin-editor-6.5.4-1.fc42.x86_64 → qt5-qtbase = 5.15.15 ❌  
</code></pre>

<p>Oh boy, so qt5 moved to 5.15.16-2 in Fedora 42, but my poor old deepin-editor still requires 5.15.15.</p>

<p>I started exploring whether any "fix" or "workaround" had been reported on the Fedora pages, but nothing so far. Usage of this tool is probably so low that nobody cares, to be honest :)</p>

<p>I decided to go my own way... as always... rebuild the RPM, fix this, and have my own personally tweaked Deepin Editor.</p>

<p>1) Get the source RPM:  </p>

<pre><code>dnf download --source deepin-editor  
</code></pre>

<p>2) Install RPM build tools:  </p>

<pre><code>sudo dnf install rpmdevtools  
rpmdev-setuptree  
</code></pre>

<p>3) Extract and edit the .spec:  </p>

<pre><code>rpm -ivh deepin-editor*.src.rpm  
cd ~/rpmbuild/SPECS  
</code></pre>

<p>Now, I know the proper way would be to extract the tar archive in ~/rpmbuild/SOURCES, copy the directory to something like .orig, edit the original directory, and then create a patch using diff. For example, creating a patch:  </p>

<pre><code>cd ~/rpmbuild/SOURCES  
diff -uNr deepin-editor-6.5.4.orig deepin-editor-6.5.4 &gt; fix-qstring-null.patch  
</code></pre>

<p>The core problem behind all of this is that the source code still uses the long-deprecated construct:  </p>

<pre><code>QString::null  
</code></pre>

<p>It has to be changed to the following:  </p>

<pre><code>QString()  
</code></pre>

<p>That's it! Damn... that must be an easy fix, right? Well, let's continue. I decided to just untar the archive inside the ~/rpmbuild/SOURCES directory and make the edits directly... yep, cowboy style... and good to go:</p>

<pre><code>cd ~/rpmbuild/SOURCES  
grep -R "QString::null"  
</code></pre>

<p>You will see which files need to be edited; I will list them here:  </p>

<pre><code>src/controls/tabbar.h (two places)  
src/common/settings.h  
tests/src/common/ut_utils.cpp  
</code></pre>

<p><strong>Open each file and change every occurrence of QString::null to QString()</strong></p>
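<p>Instead of editing each file by hand, the same replacement can be applied with sed. A minimal sketch:</p>

```shell
# fix_qstring_null FILE...: replace every QString::null with QString() in place
fix_qstring_null() {
  sed -i 's/QString::null/QString()/g' "$@"
}

# run from the extracted source directory, using the file list found by grep:
# fix_qstring_null src/controls/tabbar.h src/common/settings.h tests/src/common/ut_utils.cpp
```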

<p>Navigate to the /root/rpmbuild/SOURCES/deepin-editor-6.5.4 directory and modify the <strong>CMakeLists.txt</strong> file.</p>

<p>Find the following:  </p>

<pre><code>set(CMAKE_CXX_STANDARD 11)  
</code></pre>

<p>and change it to C++17:</p>

<pre><code>set(CMAKE_CXX_STANDARD 17)  
set(CMAKE_CXX_STANDARD_REQUIRED ON)  
set(CMAKE_CXX_EXTENSIONS ON)  
</code></pre>
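<p>Since I edited the extracted tree directly instead of using a patch, the sources have to be packed back into the tarball the spec expects before rebuilding. A sketch (the archive name in the usage line is my assumption, so verify it against Source0 in deepin-editor.spec):</p>

```shell
# repack_sources SRCDIR ARCHIVE: tar the edited source tree back up so that
# rpmbuild picks up the changes; ARCHIVE must match Source0 in the .spec file
repack_sources() {
  local dir="$1" archive="$2"
  tar -C "$(dirname "$dir")" -czf "$archive" "$(basename "$dir")"
}

# assumed archive name -- check Source0 in deepin-editor.spec:
# repack_sources ~/rpmbuild/SOURCES/deepin-editor-6.5.4 ~/rpmbuild/SOURCES/deepin-editor-6.5.4.tar.gz
```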

<p>After editing the file, re-create the archive and re-run:  </p>

<pre><code>rpmbuild -ba SPECS/deepin-editor.spec  
</code></pre>

<p>You will see a message that the build completed, and your freshly customized package is now ready to install:  </p>

<pre><code>rpm -ivh /root/rpmbuild/RPMS/x86_64/deepin-editor-6.5.4-1.fc42.x86_64.rpm  
</code></pre>

<p>Job done! Time to test!</p>

<p><img src="https://nemanja.io/content/images/2025/05/deepin-editor.png" alt="How to quickly fix broken RPM packages in Fedora"></p>

<p>Great work! Now you know how to customize ready-made RPM packages. Good luck! Hope this helps.</p>

<p>It's been a while since my last post. Today I would like to show how to setup Kubernetes cluster on AWS using their Elastic Kubernetes Service. In order to do that I have selected eksctl tool (<a href="https://eksctl.io/">https://eksctl.io/</a>).</p>

<p>You can setup managed and non-managed type of cluster. Just</p>]]></description><link>https://nemanja.io/how-to-deploy-aws-eks-cluster-with-longhorn-filesystem/</link><guid isPermaLink="false">d7eff928-0c4a-41c2-9456-73fa8547aeea</guid><category><![CDATA[eksctl]]></category><category><![CDATA[aws]]></category><category><![CDATA[aws eks]]></category><category><![CDATA[eks]]></category><category><![CDATA[elastic kubernetes service]]></category><category><![CDATA[kubernetes]]></category><category><![CDATA[kubernetes service]]></category><category><![CDATA[helm]]></category><category><![CDATA[longhorn]]></category><category><![CDATA[longhorn filesystem]]></category><category><![CDATA[longhorn ui]]></category><dc:creator><![CDATA[Nemanja Djuric]]></dc:creator><pubDate>Sun, 13 Oct 2024 16:40:41 GMT</pubDate><media:content url="http://nemanja.io/content/images/2024/10/eks-longhorn.jpg" medium="image"/><content:encoded><![CDATA[<img src="http://nemanja.io/content/images/2024/10/eks-longhorn.jpg" alt="How to deploy AWS EKS Cluster with Longhorn filesystem"><p>Hello,</p>

<p>It's been a while since my last post. Today I would like to show how to set up a Kubernetes cluster on AWS using their Elastic Kubernetes Service. To do that, I have selected the eksctl tool (<a href="https://eksctl.io/">https://eksctl.io/</a>).</p>

<p>You can set up a managed or non-managed type of cluster. Just to mention: if you select non-managed node groups, you can use Ubuntu or any other operating system, while AWS EKS managed node groups are explicitly tied to AMIs based on the Amazon Linux 2023 OS. </p>

<p><img src="https://nemanja.io/content/images/2024/10/oops-sorry-do-not-know-600nw-1037992885.jpg" alt="How to deploy AWS EKS Cluster with Longhorn filesystem"></p>

<p><strong>What is Longhorn filesystem?</strong>
<br> <br>
Longhorn is an open-source, lightweight, and highly available distributed block storage solution for Kubernetes. It provides persistent storage using containers and microservices to manage storage volumes across Kubernetes clusters. Longhorn ensures data replication across multiple nodes, making it resilient to hardware failures and node outages. It features incremental snapshots, backups, and easy volume recovery, making it ideal for applications requiring reliable and scalable storage in a Kubernetes environment.</p>

<p><strong>Prerequisites:</strong>
<br> <br>
- eksctl tool installed (<a href="https://eksctl.io">https://eksctl.io</a>) <br>
- aws cli tool (<a href="https://aws.amazon.com/cli/">https://aws.amazon.com/cli/</a>) <br>
- helm tool (<a href="https://helm.sh/docs/intro/install/">https://helm.sh/docs/intro/install/</a>) <br>
- kubectl tool (<a href="https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/">https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/</a>)<br>
- longhorn cli tool (<a href="https://longhorn.io/docs/1.7.0/deploy/install/#using-the-longhorn-command-line-tool">https://longhorn.io/docs/1.7.0/deploy/install/#using-the-longhorn-command-line-tool</a>) <br></p>

<p>Let's get started. First we will spin up a fresh AWS EKS cluster. For my example I set up a demo cluster with 8 worker nodes, 4 in each Availability Zone.</p>

<p>An example cluster.yaml file for the eksctl tool is here:  </p>

<pre><code>apiVersion: eksctl.io/v1alpha5  
kind: ClusterConfig

metadata:  
  name: sqlscale
  region: us-east-1

availabilityZones: ["us-east-1a", "us-east-1b"]

addons:  
  - name: vpc-cni
    version: latest
  - name: coredns
    version: latest
  - name: kube-proxy
    version: latest

managedNodeGroups:  
  - name: sqlng-1
    instanceType: t2.medium
    instanceName: sqlscale-worker-1
    desiredCapacity: 4
    iam:
      attachPolicyARNs:
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
        - arn:aws:iam::aws:policy/ElasticLoadBalancingFullAccess
        - arn:aws:iam::aws:policy/AmazonSSMFullAccess
      withAddonPolicies:
        imageBuilder: true
        autoScaler: true
        externalDNS: true
        certManager: true
        appMesh: true
        appMeshPreview: true
        ebs: true
        fsx: true
        efs: true
        awsLoadBalancerController: true
        xRay: true
        cloudWatch: true
    volumeSize: 50
    volumeType: gp3
    ssh:
      allow: true # will use ~/.ssh/id_rsa.pub as the default ssh key
    privateNetworking: true
    labels: {role: worker}
    tags:
      nodegroup-role: worker
    preBootstrapCommands:
      # Install SSM Agent,NFS,Openiscsi and similar packages
      - "yum install -y amazon-ssm-agent"
      - "yum install nfs-utils -y"
      - "yum --setopt=tsflags=noscripts install iscsi-initiator-utils -y"
      - "yum install curl -y"
      - 'echo "InitiatorName=$(/sbin/iscsi-iname)" &gt; /etc/iscsi/initiatorname.iscsi'
      - "systemctl enable iscsid"
      - "systemctl start iscsid"
      - "systemctl enable amazon-ssm-agent"
      - "systemctl start amazon-ssm-agent"
      # allow docker registries to be deployed as cluster service
      - "sed -i '2i \"insecure-registries\": [\"172.20.0.0/16\",\"10.100.0.0/16\"],'  /etc/docker/daemon.json"
      - "systemctl restart docker"
  - name: sqlng-2
    instanceType: t2.medium
    instanceName: sqlscale-worker-2
    desiredCapacity: 4
    iam:
      attachPolicyARNs:
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
        - arn:aws:iam::aws:policy/ElasticLoadBalancingFullAccess
        - arn:aws:iam::aws:policy/AmazonSSMFullAccess
      withAddonPolicies:
        imageBuilder: true
        autoScaler: true
        externalDNS: true
        certManager: true
        appMesh: true
        appMeshPreview: true
        ebs: true
        fsx: true
        efs: true
        awsLoadBalancerController: true
        xRay: true
        cloudWatch: true
    volumeSize: 50
    volumeType: gp3
    ssh:
      allow: true # will use ~/.ssh/id_rsa.pub as the default ssh key
    privateNetworking: true
    labels: {role: worker}
    tags:
      nodegroup-role: worker
    preBootstrapCommands:
      # Install SSM Agent,NFS,Openiscsi and similar packages
      - "yum install -y amazon-ssm-agent"
      - "yum install nfs-utils -y"
      - "yum --setopt=tsflags=noscripts install iscsi-initiator-utils -y"
      - "yum install curl -y"
      - 'echo "InitiatorName=$(/sbin/iscsi-iname)" &gt; /etc/iscsi/initiatorname.iscsi'
      - "systemctl enable iscsid"
      - "systemctl start iscsid"
      - "systemctl enable amazon-ssm-agent"
      - "systemctl start amazon-ssm-agent"
      # allow docker registries to be deployed as cluster service
      - "sed -i '2i \"insecure-registries\": [\"172.20.0.0/16\",\"10.100.0.0/16\"],'  /etc/docker/daemon.json"
      - "systemctl restart docker"
</code></pre>

<p>** Feel free to edit the various settings. This example contains what is required to install and set up an AWS EKS cluster with the Longhorn filesystem successfully.</p>

<p>You can check the following page for other possible options to integrate into your cluster.yaml file: <br>
<a href="https://eksctl.io/usage/creating-and-managing-clusters/">https://eksctl.io/usage/creating-and-managing-clusters/</a></p>

<p>Export your AWS Secret and Access keys in Shell or configure it with <br><code>aws configure</code> </p>
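<p>Exporting the keys in your shell looks like this (the values below are placeholders, not real credentials):</p>

```shell
# placeholder credentials -- replace with your own keys, or run `aws configure` instead
export AWS_ACCESS_KEY_ID="AKIAXXXXXXXXXXXXXXXX"
export AWS_SECRET_ACCESS_KEY="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
export AWS_DEFAULT_REGION="us-east-1"
```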

<p>Run the following command to provision your AWS EKS cluster:  </p>

<pre><code>eksctl create cluster -f cluster.yaml  
</code></pre>

<p>It takes a while to complete, so sit back and monitor your terminal screen output.</p>
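<p>Once creation finishes, a quick way to confirm that all 8 workers joined is to count the Ready nodes from the kubectl output. A sketch (the sample output below stands in for the real kubectl call):</p>

```shell
# count how many nodes report Ready; in practice feed it live output:
#   kubectl get nodes | awk 'NR>1 && $2=="Ready"{n++} END{print n+0}'
sample='NAME           STATUS   ROLES    AGE   VERSION
ip-10-0-1-10   Ready    worker   5m    v1.30.0
ip-10-0-2-11   Ready    worker   5m    v1.30.0'
ready=$(printf '%s\n' "$sample" | awk 'NR>1 && $2=="Ready"{n++} END{print n+0}')
echo "$ready ready node(s)"
```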

<p>Once the AWS EKS cluster is ready and you have tested kubectl, we are going to use the longhornctl tool to install and set up the preflight dependencies before installing Longhorn.  </p>

<pre><code>./longhornctl --kube-config='/home/nemke/.kube/config' install preflight
</code></pre>

<p>Output: <br>
<img src="https://nemanja.io/content/images/2024/10/2024-10-13_18-05.png" alt="How to deploy AWS EKS Cluster with Longhorn filesystem"></p>

<p>You can check whether all the software is installed using the following command:  </p>

<pre><code>./longhornctl --kube-config='/home/nemke/.kube/config' check preflight
</code></pre>

<p>Output: <br>
<img src="https://nemanja.io/content/images/2024/10/2024-10-13_18-07.png" alt="How to deploy AWS EKS Cluster with Longhorn filesystem"></p>

<p><em>Words of wisdom here...</em> If you plan to use HPA (Horizontal Pod Autoscaler) with AWS EKS autoscaling groups, I advise making sure that all packages are installed via the cluster.yaml file. Why? Because the eksctl tool uses CloudFormation, so any new node will get its packages installed before it joins your fleet.</p>

<p>Next we will install Longhorn filesystem and configure it.</p>

<p>Add the Longhorn Helm repository:  </p>

<pre><code>helm repo add longhorn https://charts.longhorn.io  
</code></pre>

<p>Fetch the latest charts from the repository:  </p>

<pre><code>helm repo update  
</code></pre>

<p>Install Longhorn in the longhorn-system namespace:  </p>

<pre><code>helm install longhorn longhorn/longhorn --namespace longhorn-system --create-namespace --version 1.7.0  
</code></pre>

<p>Output: <br>
<img src="https://nemanja.io/content/images/2024/10/2024-10-13_18-14.png" alt="How to deploy AWS EKS Cluster with Longhorn filesystem"></p>

<p>To confirm that the deployment succeeded, run:  </p>

<pre><code>kubectl -n longhorn-system get pod  
</code></pre>

<p>One of the super neat features I've seen with Longhorn is its UI. To enable access to the Longhorn UI, you will need to set up an Ingress controller. Authentication to the Longhorn UI is not enabled by default. For information on creating an NGINX Ingress controller with basic authentication, refer to the following link: <a href="https://longhorn.io/docs/1.7.0/deploy/accessing-the-ui/longhorn-ingress">https://longhorn.io/docs/1.7.0/deploy/accessing-the-ui/longhorn-ingress</a></p>

<p>If you install Longhorn on a Kubernetes cluster with kubectl or Helm, you will need to create an Ingress to allow external traffic to reach the Longhorn UI.</p>

<p>Authentication is not enabled by default for kubectl and Helm installations. In these steps, you’ll learn how to create an Ingress with basic authentication using annotations for the nginx ingress controller.</p>

<p>1)    Create a basic auth file named auth. It's important that the generated file is named auth (actually, that the secret has a key data.auth); otherwise the Ingress returns a 503.  </p>

<pre><code>USER=&lt;USERNAME_HERE&gt;; PASSWORD=&lt;PASSWORD_HERE&gt;; echo "${USER}:$(openssl passwd -stdin -apr1 &lt;&lt;&lt; ${PASSWORD})" &gt;&gt; auth  
</code></pre>
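<p>You can sanity-check the generated line before turning it into a secret; it should have the shape <code>user:$apr1$SALT$HASH</code>. A sketch using throwaway example credentials:</p>

```shell
# generate an htpasswd-style line and sanity-check its shape;
# it should look like user:$apr1$SALT$HASH
line=$(printf 'demopassword\n' | openssl passwd -stdin -apr1 | sed 's/^/demouser:/')
echo "$line"
echo "$line" | grep -Eq '^demouser:\$apr1\$'
```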

<p>2) Create a secret:  </p>

<pre><code>kubectl -n longhorn-system create secret generic basic-auth --from-file=auth  
</code></pre>

<p>3) Create an Ingress manifest longhorn-ingress.yml:  </p>

<pre><code>apiVersion: networking.k8s.io/v1  
kind: Ingress  
metadata:  
  name: longhorn-ingress
  namespace: longhorn-system
  annotations:
    # type of authentication
    nginx.ingress.kubernetes.io/auth-type: basic
    # prevent the controller from redirecting (308) to HTTPS
    nginx.ingress.kubernetes.io/ssl-redirect: 'false'
    # name of the secret that contains the user/password definitions
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    # message to display with an appropriate context why the authentication is required
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required '
    # custom max body size for file uploading like backing image uploading
    nginx.ingress.kubernetes.io/proxy-body-size: 10000m
spec:  
  ingressClassName: nginx
  rules:
  - host: longhorn.example.com
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: longhorn-frontend
            port:
              number: 80
</code></pre>

<p>** As you may notice, I've used longhorn.example.com as the URL to log in to the Longhorn UI from my browser. You may point a real domain name at it with DNS, but that is not required because we can use the /etc/hosts mechanism to access it.</p>

<p>4) Create the Ingress:  </p>

<pre><code>kubectl -n longhorn-system apply -f longhorn-ingress.yml  
</code></pre>

<p>You will need to create an ELB (Elastic Load Balancer) to expose the nginx Ingress controller to the Internet. Additional costs may apply.</p>

<p>5) Let's create the ELB using the following command:  </p>

<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/aws/deploy.yaml  
</code></pre>

<p>** You may use Helm to deploy this or adjust the ELB settings, but for the purpose of this demo I've used the default values.</p>

<p>You can now navigate to the AWS Console --> EC2 --> Load Balancers area, where you will notice that the Load Balancer is being provisioned: <br>
<img src="https://nemanja.io/content/images/2024/10/2024-10-13_18-24.png" alt="How to deploy AWS EKS Cluster with Longhorn filesystem"></p>

<p>Grab the two IP addresses assigned to your new Load Balancer, and add the following entries to your /etc/hosts file:  </p>

<pre><code>52.86.18.142 longhorn.example.com  
44.208.155.179 longhorn.example.com  
</code></pre>

<p>You can run dig against the Load Balancer's DNS endpoint to get them.</p>
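<p>As a small sketch, turning the resolved IPs into /etc/hosts lines (using the two demo IPs from above; in practice the IPs come from <code>dig +short YOUR-ELB-DNS-NAME</code>):</p>

```shell
# hosts_lines_for HOST IP...: print one /etc/hosts entry per IP
hosts_lines_for() {
  local host="$1" ip
  shift
  for ip in "$@"; do
    printf '%s %s\n' "$ip" "$host"
  done
}

# append the output to /etc/hosts (requires sudo):
hosts_lines_for longhorn.example.com 52.86.18.142 44.208.155.179
```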

<p>Navigate to the following URL to get to the Longhorn UI: <br>
<a href="https://longhorn.example.com">https://longhorn.example.com</a> <br>
(ignore SSL errors of course)</p>

<p>Enter username and password that you have defined in the <code>Create a basic auth file auth</code> step.</p>

<p>Output: <br>
<img src="https://nemanja.io/content/images/2024/10/2024-10-13_18-28.png" alt="How to deploy AWS EKS Cluster with Longhorn filesystem"></p>

<p>Welcome to the Longhorn UI. No errors, and as you can see there are 8 nodes in the group (4 workers in us-east-1a and 4 in us-east-1b). Feel free to explore the <a href="https://longhorn.io/docs/1.7.0/">https://longhorn.io/docs/1.7.0/</a> docs pages, but the most basic step is to visit the Settings --> General area and adjust the default number of replicas you wish to keep for each Volume created through a PV/PVC in your Kubernetes deployment.</p>

<p>That's it for now. In the next articles, we will use this setup to provision a MariaDB server with Master/Slave replicas, and then a MariaDB "Galera" server with Master/Master replicas.</p>

<p>Hope this article helps! Good luck!</p>]]></content:encoded></item><item><title><![CDATA[From Yellow to Green: How to Achieve a Healthy Elasticsearch Cluster]]></title><description><![CDATA[<p>I often see ElasticSearch booted into single-node mode and show cluster health in a "Yellow" mode. I want to explain why that is happening today and what we can do to prevent/fix that, even in single-node mode.</p>

<p>Elasticsearch is a highly scalable, open-source search and analytics engine built on</p>]]></description><link>https://nemanja.io/from-yellow-to-green-how-to-achieve-a-healthy-elasticsearch-cluster/</link><guid isPermaLink="false">f8325de5-ec0a-4372-9d74-b4a9f4447b50</guid><category><![CDATA[rest]]></category><category><![CDATA[elasticsearch]]></category><category><![CDATA[elasticsearch status]]></category><category><![CDATA[indices]]></category><category><![CDATA[es indices]]></category><category><![CDATA[es cluster health]]></category><category><![CDATA[es yellow]]></category><category><![CDATA[es green]]></category><dc:creator><![CDATA[Nemanja Djuric]]></dc:creator><pubDate>Wed, 22 May 2024 16:52:40 GMT</pubDate><media:content url="http://nemanja.io/content/images/2024/05/es-yellow-to-green.jpg" medium="image"/><content:encoded><![CDATA[<img src="http://nemanja.io/content/images/2024/05/es-yellow-to-green.jpg" alt="From Yellow to Green: How to Achieve a Healthy Elasticsearch Cluster"><p>I often see ElasticSearch booted into single-node mode and show cluster health in a "Yellow" mode. I want to explain why that is happening today and what we can do to prevent/fix that, even in single-node mode.</p>

<p>Elasticsearch is a highly scalable, open-source search and analytics engine built on top of Apache Lucene. It is designed for horizontal scalability and near real-time search and analytics capabilities. Elasticsearch is commonly used for various use cases, including log and event data analysis, full-text search, and business analytics. Its powerful querying capabilities and distributed nature make it a preferred choice for handling large datasets and complex search operations.</p>

<p><strong>Cluster Health:</strong> <br></p>

<p>In Elasticsearch, data is distributed across multiple nodes in a cluster to ensure high availability, fault tolerance, and load balancing. Cluster health status indicates the overall well-being and operational status of an Elasticsearch cluster. The cluster health can be one of three states:</p>

<ul>
<li><strong>Green:</strong> All primary and replica shards are allocated. This is the optimal state, indicating that the cluster is fully functional with redundancy.</li>
<li><strong>Yellow:</strong> All primary shards are allocated, but some or all replica shards are not. While the cluster is operational, it lacks redundancy, which means it is vulnerable to data loss if a node fails.</li>
<li><strong>Red:</strong> One or more primary shards are unassigned, leading to potential data loss and unavailability of some data.</li>
</ul>

<p>Maintaining a green cluster health status is crucial because it ensures that your data is not only available but also redundant, providing resilience against node failures and ensuring high availability of your services. A healthy cluster also optimizes performance and reliability, which are essential for search and analytics operations.</p>

<p>Now, suppose you have a Magento 2 installation: you reindex, but when you execute the following command, the cluster state (yes, even on a single node) is always Yellow.</p>

<p>Check the current cluster health:  </p>

<pre><code>curl -X GET "localhost:9200/_cluster/health?pretty"  
</code></pre>

<p>We can analyze the indices and shards using the following command:  </p>

<pre><code>curl -X GET "localhost:9200/_cat/indices/?pretty"  
</code></pre>

<p>To bring your Elasticsearch cluster to a green state from the current yellow state, you need to address the issue of unassigned shards. In your case, this typically occurs because your cluster has only one data node, which means there is no other node to allocate replica shards to.</p>

<p><strong>Steps to Bring the Cluster to a Green State</strong></p>

<p>Idea 1:</p>

<ul>
<li>Add More Data Nodes: <br>
The best way to achieve a green state is to add more data nodes to the cluster to properly allocate replicas. Here's a quick guide on how to add a node:</li>
</ul>

<p>1) Install Elasticsearch on another server. <br>
2) Configure the new node to join the existing cluster by setting the same cluster.name in the elasticsearch.yml file. <br>
3) Set the node.name to a unique name. <br>
4) Ensure network settings (network.host, discovery.seed_hosts, etc.) are correctly configured to allow the nodes to communicate. <br>
5) Start Elasticsearch on the new server.</p>
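<p>As a sketch, the relevant elasticsearch.yml settings on the new node could look like this (the cluster, node, and host names below are placeholders for your own environment):</p>

<pre><code># /etc/elasticsearch/elasticsearch.yml on the new (second) node
cluster.name: magento                # must match the existing cluster
node.name: es-node-2                 # must be unique per node
network.host: 0.0.0.0
discovery.seed_hosts: ["es-node-1.internal", "es-node-2.internal"]
</code></pre>

<p>Once the new node joins, Elasticsearch will allocate the replica shards to it automatically and the cluster should turn green on its own.</p>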

<p>Idea 2 (staying on a single node):</p>

<ul>
<li>Adjust Replica Settings:
If adding more nodes is not feasible, you can adjust the replica settings to 0 for indices with unassigned replicas. This will bring the cluster to a green state, but you will lose redundancy. Here's how you can do it:</li>
</ul>

<pre><code># Set the number of replicas to 0 for a specific index
curl -X PUT "localhost:9200/&lt;index_name&gt;/_settings" -H 'Content-Type: application/json' -d'  
{
  "index": {
    "number_of_replicas": 0
  }
}'
</code></pre>
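<p>If many indices carry unassigned replicas, the same setting can be applied to all of them at once with the <code>_all</code> target (use with care, since it removes redundancy everywhere):</p>

<pre><code># Set the number of replicas to 0 for every index at once
curl -X PUT "localhost:9200/_all/_settings" -H 'Content-Type: application/json' -d'
{
  "index": {
    "number_of_replicas": 0
  }
}'
</code></pre>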

<p><strong>The next step is to verify Cluster Health</strong> <br></p>

<p>After making these changes, verify the cluster health:  </p>

<pre><code>curl -X GET "localhost:9200/_cluster/health?pretty"  
</code></pre>

<p>Boom! It's in the green state:  </p>

<pre><code>{"acknowledged":true}
[root@server ~]# curl -X GET "localhost:9200/_cluster/health?pretty"
{
  "cluster_name": "magento",
  "status": "green",
  "timed_out": false,
  "number_of_nodes": 1,
  "number_of_data_nodes": 1,
  "active_primary_shards": 7,
  "active_shards": 7,
  "relocating_shards": 0,
  "initializing_shards": 0,
  "unassigned_shards": 0,
  "delayed_unassigned_shards": 0,
  "number_of_pending_tasks": 0,
  "number_of_in_flight_fetch": 0,
  "task_max_waiting_in_queue_millis": 0,
  "active_shards_percent_as_number": 100.0
}
[root@server ~]# curl -X GET "localhost:9200/_cat/indices/?pretty"
green open .geoip_databases                                          jMYDTc1NRZuQDZ9tGtyGSQ 1 0   34    34 31.3mb 31.3mb  
green open xyz_production_22052024_amasty_elastic_popup_data_1_v2 VoF-bkUDQEef-HeFzS-4og 1 0  196     0 91.4kb 91.4kb  
green open xyz_production_22052024_product_1_v2                   e7xmqI3XTgWprQB4EWuD8g 1 0 4352 64006 36.1mb 36.1mb  
</code></pre>

<p>By following these steps, you should be able to bring your Elasticsearch cluster to a green state, with all shards allocated and none unassigned. Note that Magento 2 recreates its indices under new names on each reindex, so a per-index replica setting will not carry over; it is worth automating this step so it runs every time indexing does.</p>
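<p>Since index names change between reindex runs, one way to automate the replica setting is an index template, so that any future index matching a pattern is created with zero replicas from the start. A sketch, assuming Elasticsearch 7.8+ and that your Magento index names share a common prefix (adjust the pattern and template name to your own naming):</p>

<pre><code># Apply number_of_replicas: 0 to any index created in the future that matches the pattern
curl -X PUT "localhost:9200/_index_template/single_node_no_replicas" -H 'Content-Type: application/json' -d'
{
  "index_patterns": ["xyz_production_*"],
  "template": {
    "settings": {
      "number_of_replicas": 0
    }
  }
}'
</code></pre>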

<p>I hope this article helps. Good luck!</p>]]></content:encoded></item><item><title><![CDATA[Split Read and Writes in Magento 2 using AWS RDS and ProxySQL]]></title><description><![CDATA[<p>Hello,</p>

<p>I have been exploring options lately to scale MySQL and tested this solution. In my tests, I am using small db.t2.medium AWS RDS cluster with Write and Read replica. </p>

<p>For test purposes following components were used: <br>
- AWS account setup of RDS with public access with IP-only</p>]]></description><link>https://nemanja.io/split-read-and-writes-using-aws-rds-and-proxy/</link><guid isPermaLink="false">22fdfb65-ce40-4bcc-ac56-901ab577be00</guid><category><![CDATA[proxysql]]></category><category><![CDATA[aws]]></category><category><![CDATA[aws rds]]></category><category><![CDATA[rds]]></category><category><![CDATA[horizontal mysql scaling]]></category><category><![CDATA[mysql scaling]]></category><category><![CDATA[read replica]]></category><category><![CDATA[write replica]]></category><dc:creator><![CDATA[Nemanja Djuric]]></dc:creator><pubDate>Sun, 26 Mar 2023 18:09:19 GMT</pubDate><media:content url="http://nemanja.io/content/images/2023/03/master-slave-replication-1.jpg" medium="image"/><content:encoded><![CDATA[<img src="http://nemanja.io/content/images/2023/03/master-slave-replication-1.jpg" alt="Split Read and Writes in Magento 2 using AWS RDS and ProxySQL"><p>Hello,</p>

<p>I have been exploring options lately to scale MySQL and tested this solution. In my tests, I am using a small db.t2.medium AWS RDS cluster with a writer and a read replica. </p>

<p>For test purposes, the following components were used: <br>
- AWS account setup of RDS with public access with IP-only limitation. <br>
- Magento 2 gitpod instance (<a href="https://github.com/nemke82/magento2gitpod">https://github.com/nemke82/magento2gitpod</a>) started. <br>
- Modified m2-install.sh file to connect to Remote AWS RDS environment and install fresh Magento 2 install with Performance fixtures data (small). <br>
- ProxySQL installed on the Gitpod environment and configured</p>

<p>Before you start, you have to create a test database. In my example:  </p>

<pre><code>mysql -u nemanja -h nemanja-instance-1.cusgvuriflp3.us-east-1.rds.amazonaws.com -pRandomPassword123 -e 'create database nemanja;'  
</code></pre>

<p>Then we can add privileges to that database:  </p>

<pre><code>GRANT ALL PRIVILEGES ON nemanja.* TO 'nemanja'@'%';  
FLUSH PRIVILEGES;  
exit;  
</code></pre>

<p>Modify m2-install.sh (if you are testing on the Magento 2 Gitpod platform) file with the new MySQL hostname endpoint, username, and password and execute the installation.</p>

<p>Once the installation of fresh Magento 2 is ready and data is in your AWS RDS instance, we will proceed with ProxySQL configuration.</p>

<p>To set up ProxySQL to split SELECT queries between an AWS RDS master instance and its read replicas, follow these steps:</p>

<p><strong>Install ProxySQL</strong> <br>
Install ProxySQL on an EC2 instance or a server within your VPC, so it has access to your RDS instances.</p>

<p>URL: <a href="https://proxysql.com/documentation/installing-proxysql/">https://proxysql.com/documentation/installing-proxysql/</a></p>

<p><strong>Configure ProxySQL</strong><br>
Edit the ProxySQL configuration file, usually located at /etc/proxysql.cnf, and configure it according to your needs. An example configuration is shown below:</p>

<pre><code>datadir="/var/lib/proxysql"  
admin_variables=  
{
  admin_credentials="admin:admin"
  mysql_ifaces="0.0.0.0:6032"
}
mysql_variables=  
{
  threads=4
  max_connections=2048
  default_query_delay=0
  default_query_timeout=36000000
  have_compress=true
  poll_timeout=2000
  interfaces="0.0.0.0:6033"
  default_schema="information_schema"
  stacksize=1048576
  server_version="5.7.27"
  connect_timeout_server=3000
  monitor_history=600000
  monitor_connect_interval=60000
  monitor_ping_interval=10000
  ping_interval_server_msec=10000
  ping_timeout_server=200
  commands_stats=true
  sessions_sort=true
}
</code></pre>

<p><strong>Start the ProxySQL service:</strong></p>

<pre><code>sudo systemctl start proxysql  
</code></pre>

<p>The next task is to configure the RDS instances in ProxySQL.</p>

<p>Let's access the ProxySQL admin interface:  </p>

<pre><code>mysql -u admin -p -h 127.0.0.1 -P 6032 --prompt='ProxySQLAdmin&gt; '  
</code></pre>

<p>The default password is "admin" for a fresh ProxySQL installation.</p>

<p><strong>Add your RDS master and read replica instances:</strong> <br></p>

<pre><code>-- Add master instance
INSERT INTO mysql_servers(hostgroup_id, hostname, port)  
VALUES (0, 'your_master_instance_endpoint', 3306);

-- Add read replica instance
INSERT INTO mysql_servers(hostgroup_id, hostname, port)  
VALUES (1, 'your_read_replica_instance_endpoint', 3306);

LOAD MYSQL SERVERS TO RUNTIME;  
SAVE MYSQL SERVERS TO DISK;  
</code></pre>
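<p>To confirm both backends made it into the runtime configuration, you can query the admin interface (still on port 6032):</p>

<pre><code>SELECT hostgroup_id, hostname, port, status
FROM runtime_mysql_servers;
</code></pre>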

<p>Here we start with one master and one read replica endpoint; we can easily add more later to scale.</p>

<p><strong>Set up query rules</strong> <br></p>

<p>The next step is to create rules to route SELECT queries to the read replica:</p>

<pre><code>INSERT INTO mysql_query_rules (rule_id, active, match_pattern, destination_hostgroup, apply)  
VALUES (1, 1, '^SELECT.*FOR UPDATE$', 0, 1),  
       (2, 1, '^SELECT', 1, 1);             

LOAD MYSQL QUERY RULES TO RUNTIME;  
SAVE MYSQL QUERY RULES TO DISK;  
</code></pre>

<p>-- Rule 1 sends 'SELECT ... FOR UPDATE' to the master <br>
-- Rule 2 sends all other SELECTs to the read replica <br>
You can extend these rules with any other query patterns that would benefit from being routed to the read replica.</p>
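<p>To verify that the rules are actually matching traffic, ProxySQL keeps a per-rule hit counter; checking it from the admin interface is a quick sanity test after you send a few queries through port 6033:</p>

<pre><code>SELECT rule_id, hits FROM stats_mysql_query_rules;
</code></pre>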

<p><strong>Configure user credentials</strong><br></p>

<p>Add the database user credentials to ProxySQL:  </p>

<pre><code>INSERT INTO mysql_users(username, password, default_hostgroup)  
VALUES ('your_username', 'your_password', 0);

LOAD MYSQL USERS TO RUNTIME;  
SAVE MYSQL USERS TO DISK;  
</code></pre>

<p>We will use the same username and password we defined when creating the AWS RDS cluster. I advise using anything except the "admin" username, since that is the default admin username for ProxySQL; it can be changed later, but that is not crucial in this proof-of-concept demo.</p>
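<p>A quick way to confirm routing, assuming your writer and replica report different hostnames, is to run a read through the proxy port and see which backend answers (the username/password are the ones added above):</p>

<pre><code># A plain SELECT should land on the read replica (hostgroup 1);
# @@read_only is 1 on a replica, 0 on the writer
mysql -u your_username -p -h 127.0.0.1 -P 6033 -e "SELECT @@hostname, @@read_only;"
</code></pre>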

<p><strong>Monitor and adjust</strong><br>
Monitor ProxySQL's performance and adjust settings as needed. You can add more read replicas and change the load balancing settings for better performance.</p>

<p>Start by adjusting the env.php file. In the 'host' field, enter:  </p>

<pre><code>127.0.0.1:6033  
</code></pre>

<p>We could experiment here by using a Unix socket instead of the loopback address, which should reduce connection overhead in this area.</p>

<p>Next, adjust the 'username' and 'password' fields with the one you created above.</p>

<p>Now, your application should connect to ProxySQL on port 6033 instead of directly connecting to the RDS instances. ProxySQL will route SELECT queries to the read replica and other write queries to the master instance.</p>

<p><strong>-- Horizontally scaling Read replicas --</strong> <br>
To add more read replicas and adjust the load balancing settings for better performance, follow these steps:</p>

<p><strong>Add more read replicas in AWS RDS</strong> <br>
First, create additional read replicas of your master RDS instance from the AWS Management Console or the AWS CLI.</p>

<p><strong>Add read replicas to ProxySQL</strong><br>
Connect to the ProxySQL admin interface:  </p>

<pre><code>mysql -u admin -p -h 127.0.0.1 -P 6032 --prompt='ProxySQLAdmin&gt; '  
</code></pre>

<p>Add the new read replica instances to ProxySQL, assigning them to the same hostgroup (1 in this example):  </p>

<pre><code>INSERT INTO mysql_servers(hostgroup_id, hostname, port)  
VALUES (1, 'your_additional_read_replica_instance_endpoint_1', 3306),  
       (1, 'your_additional_read_replica_instance_endpoint_2', 3306),
       ...;

LOAD MYSQL SERVERS TO RUNTIME;  
SAVE MYSQL SERVERS TO DISK;  
</code></pre>


<p><strong>Configure load balancing settings</strong><br>
By default, ProxySQL uses the round-robin algorithm to balance the load among the read replicas. You can adjust the weight of each server to control the traffic distribution. A higher weight means more traffic will be sent to that server.</p>

<p>To modify the weight of a read replica, update the weight column in the mysql_servers table:</p>

<pre><code>UPDATE mysql_servers SET weight = new_weight  
WHERE hostgroup_id = 1 AND hostname = 'your_read_replica_instance_endpoint';

LOAD MYSQL SERVERS TO RUNTIME;  
SAVE MYSQL SERVERS TO DISK;  
</code></pre>

<p>For example, with two read replicas, we can set the weight of the first read_replica_instance_endpoint to 50 and the second one to 50 as well; overall, that makes them split read traffic evenly. Monitor the performance of your read replicas and adjust their weights accordingly. You can use the ProxySQL stats schema to gather information about traffic distribution and query performance.</p>
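<p>As a concrete sketch of an even 50/50 split across two replicas (the endpoint names are placeholders):</p>

<pre><code>UPDATE mysql_servers SET weight = 50
WHERE hostgroup_id = 1 AND hostname = 'your_read_replica_instance_endpoint_1';

UPDATE mysql_servers SET weight = 50
WHERE hostgroup_id = 1 AND hostname = 'your_read_replica_instance_endpoint_2';

LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
</code></pre>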

<p><img src="https://nemanja.io/content/images/2023/03/2023-03-26_20-01.png" alt="Split Read and Writes in Magento 2 using AWS RDS and ProxySQL"></p>

<p>For example, to view the traffic distribution among the read replicas, you can run:  </p>

<pre><code>select * from stats_mysql_connection_pool;  
</code></pre>

<p>You can see the number of queries sent and other useful data: <br>
<img src="https://nemanja.io/content/images/2023/03/2023-03-26_20-03.png" alt="Split Read and Writes in Magento 2 using AWS RDS and ProxySQL"></p>

<p>Adjust the read replica weights based on the performance metrics and your application's requirements. Continue to monitor and fine-tune the load balancing settings to achieve optimal performance.</p>

<p>Remember that adding more read replicas may increase costs but can help distribute read traffic and improve performance for read-heavy workloads. Ensure you monitor your instances' resource usage to balance performance and price. All the above can be partially or fully automated with a systemd or cron task checking the state of the AWS RDS server and then scaling up replicas based on selected patterns.</p>

<p>We can also enable ProxySQL UI and watch stats in the browser. <br>
Ref article: <a href="https://proxysql.com/documentation/http-web-server/">https://proxysql.com/documentation/http-web-server/</a></p>

<p>Connect to the ProxySQL admin interface:  </p>

<pre><code>mysql -u admin -p -h 127.0.0.1 -P 6032 --prompt='ProxySQLAdmin&gt; '  
</code></pre>

<p>Then:  </p>

<pre><code>SET admin-web_enabled='true';  
LOAD ADMIN VARIABLES TO RUNTIME;  
</code></pre>

<p>I usually use SSH tunnel to connect and get to the UI, for example:  </p>

<pre><code>ssh -p &lt;SSHport&gt; -L 6080:127.0.0.1:6080 &lt;SSHuser&gt;@&lt;SSHhost&gt;  
</code></pre>

<p>Then you can open up your favorite browser and visit <a href="https://127.0.0.1:6080">https://127.0.0.1:6080</a> <br>
username and password by default are admin:admin</p>

<p>Where are the problems here? <br>
A typical issue with the Magento indexer is the following:  </p>

<pre><code>Product Price index process error during indexation process:  
SQLSTATE[Y0000]: &lt;&lt;Unknown error&gt;&gt;: 9006 ProxySQL Error: connection is locked to hostgroup 0 but trying to reach hostgroup 1, query was: SELECT `i`.`entity_id`, `o`.`option_id` FROM `catalog_product_index_price_temp` AS `i`  
 INNER JOIN `catalog_product_entity` AS `e` ON e.entity_id = i.entity_id
 INNER JOIN `catalog_product_option` AS `o` ON o.product_id = e.entity_id
Catalog Search index process error during indexation process:  
SQLSTATE[Y0000]: &lt;&lt;Unknown error&gt;&gt;: 9006 ProxySQL Error: connection is locked to hostgroup 0 but trying to reach hostgroup 1, query was: SELECT `indexer_state`.* FROM `indexer_state` WHERE (`indexer_state`.`indexer_id`='catalogsearch_fulltext')  
</code></pre>

<p><strong>To fix:</strong> <br>
As explained in the documentation: <br> <br>
<a href="https://proxysql.com/documentation/global-variables/mysql-variables/#mysql-set_query_lock_on_hostgroup">https://proxysql.com/documentation/global-variables/mysql-variables/#mysql-set_query_lock_on_hostgroup</a></p>

<p>So, to resolve this issue on a setup with a single primary server (as in single-primary Group Replication), try the following.</p>

<p>Log into the ProxySQL admin interface.  </p>

<pre><code>set mysql-set_query_lock_on_hostgroup=0;  
load mysql variables to runtime;  
save mysql variables to disk;  
</code></pre>

<p>This is not mandatory, but the ProxySQL service can be restarted to check whether the variables were saved to disk. Everything here was applied with a "hot reload"; an advantage of the ProxySQL service is that we can instead put these settings in the proxysql.cnf file as defaults.</p>
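<p>As a sketch, making the same setting a default in proxysql.cnf would look like this (inside the existing mysql_variables block; in the config file the "mysql-" prefix is dropped):</p>

<pre><code>mysql_variables=
{
  # ... existing settings from the earlier example stay as they are ...
  set_query_lock_on_hostgroup=0
}
</code></pre>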

<p>Stress testing this is just phase one; I will post updates as time goes on. Using the <strong>mysqlslap</strong> tool <br>(<a href="https://dev.mysql.com/doc/refman/8.0/en/mysqlslap.html">https://dev.mysql.com/doc/refman/8.0/en/mysqlslap.html</a>), I tried to stress/load test this with 100 concurrent users sending a generic SELECT query to see how READs are spread across the board. The query selects all quote items:  </p>

<pre><code>gitpod /workspace/magento2gitpod $ mysqlslap -unemanja -P6033 -p -h127.0.0.1  --concurrency=100 --iterations=20 --create-schema=nemanja --query="SELECT * from quote_item" --verbose  
Enter password:  
</code></pre>

<p><strong>Benchmark</strong> <br>
        The average number of seconds to run all queries: 0.369 seconds<br>
        Minimum number of seconds to run all queries: 0.218 seconds <br>
        Maximum number of seconds to run all queries: 1.144 seconds <br>
        Number of clients running queries: 100 <br>
        The average number of queries per client: 1</p>

<p>The next benchmark with the same query was a lot faster, since the query entered the query cache that ProxySQL provides, even though AWS RDS 8.0.26 is used (MySQL 8.0 itself no longer has a query cache).</p>

<p><strong>Benchmark</strong> <br>
        The average number of seconds to run all queries: 0.292 seconds <br>
        Minimum number of seconds to run all queries: 0.184 seconds <br>
        Maximum number of seconds to run all queries: 0.494 seconds <br>
        Number of clients running queries: 100 <br>
        The average number of queries per client: 1</p>

<p>You can load separate terminals or screens and watch how SELECT queries from the same load/stress test are spread across the two read replicas that I've created in this proof-of-concept demo:  </p>

<pre><code>watch -n 1 "mysql -u admin -padmin -h 127.0.0.1 -P 6032 -e 'select * from stats_mysql_connection_pool;'"  
</code></pre>

<p>I hope this article was helpful. Good luck!</p>]]></content:encoded></item><item><title><![CDATA[Invalid host header PWA Studio]]></title><description><![CDATA[<p>Hello,</p>

<p>Recently, I discovered the issue when trying to start the PWA Studio development environment.</p>

<p><strong>Invalid host header</strong></p>

<p>command used: yarn watch</p>

<p><img src="https://nemanja.io/content/images/2022/10/2022-10-25_13-17.png" alt="alt"></p>

<p>To fix this, you can add the following to the node_modules/@magento/pwa-buildpack/lib/WebpackTools/PWADevServer.js</p>

<pre><code>disableHostCheck: true,  
</code></pre>

<p><img src="https://nemanja.io/content/images/2022/10/2022-10-25_13-19.png" alt="alt"></p>

<p>Save the file, start yarn watch again and</p>]]></description><link>https://nemanja.io/invalid-host-header-pwa-studio/</link><guid isPermaLink="false">64b06404-c96d-428f-b983-453f69f084c5</guid><category><![CDATA[pwa studio]]></category><category><![CDATA[pwa]]></category><category><![CDATA[invalid host header]]></category><category><![CDATA[invalid header]]></category><category><![CDATA[pwadevserver]]></category><category><![CDATA[webpack dev]]></category><category><![CDATA[webpack]]></category><dc:creator><![CDATA[Nemanja Djuric]]></dc:creator><pubDate>Tue, 25 Oct 2022 11:23:58 GMT</pubDate><media:content url="http://nemanja.io/content/images/2022/10/pwa.jpeg" medium="image"/><content:encoded><![CDATA[<img src="http://nemanja.io/content/images/2022/10/pwa.jpeg" alt="Invalid host header PWA Studio"><p>Hello,</p>

<p>Recently, I discovered the issue when trying to start the PWA Studio development environment.</p>

<p><strong>Invalid host header</strong></p>

<p>command used: yarn watch</p>

<p><img src="https://nemanja.io/content/images/2022/10/2022-10-25_13-17.png" alt="Invalid host header PWA Studio"></p>

<p>To fix this, you can add the following to the node_modules/@magento/pwa-buildpack/lib/WebpackTools/PWADevServer.js</p>

<pre><code>disableHostCheck: true,  
</code></pre>

<p><img src="https://nemanja.io/content/images/2022/10/2022-10-25_13-19.png" alt="Invalid host header PWA Studio"></p>

<p>Save the file, start yarn watch again and enjoy development that can refresh content without compiling.</p>

<p>I hope this helps.</p>]]></content:encoded></item><item><title><![CDATA[Magento 2 Debug Varnish cache in production mode]]></title><description><![CDATA[<p>Hello,</p>

<p>By default, Magento 2 offers to debug Varnish if pages are in MISS or HIT mode when Developer mode is enabled. You can debug the store in two ways in the Production mode.</p>

<p>1) <strong>varnishlog</strong> <br>
varnishlog is a CLI tool written to read the output from the Varnish service</p>]]></description><link>https://nemanja.io/magento-2-debug-varnish-cache-in-production-mode/</link><guid isPermaLink="false">76e7f02f-38c3-445e-bfc2-e741bcb60d99</guid><category><![CDATA[varnish]]></category><category><![CDATA[varnish hit]]></category><category><![CDATA[varnish miss]]></category><category><![CDATA[magento 2 varnish]]></category><category><![CDATA[debug varnish]]></category><category><![CDATA[debug full page cache]]></category><category><![CDATA[full page cache magento 2]]></category><category><![CDATA[magento 2 fpc]]></category><category><![CDATA[debug varnish production]]></category><category><![CDATA[magento 2 cache production]]></category><dc:creator><![CDATA[Nemanja Djuric]]></dc:creator><pubDate>Fri, 16 Sep 2022 19:51:41 GMT</pubDate><media:content url="http://nemanja.io/content/images/2022/09/1511065546varnish-hit-1024x416.gif" medium="image"/><content:encoded><![CDATA[<img src="http://nemanja.io/content/images/2022/09/1511065546varnish-hit-1024x416.gif" alt="Magento 2 Debug Varnish cache in production mode"><p>Hello,</p>

<p>By default, Magento 2 only lets you debug whether Varnish serves pages as a MISS or a HIT when Developer mode is enabled. In Production mode, you can debug the store in the following two ways.</p>

<p>1) <strong>varnishlog</strong> <br>
varnishlog is a CLI tool written to read the output from the Varnish service in a structured format. You can print the output to the terminal, parse it into other formats, stream it outside of the container or service, or similar. For more details about the varnishlog tool, see the <a href="https://varnish-cache.org/docs/trunk/reference/varnishlog.html">https://varnish-cache.org/docs/trunk/reference/varnishlog.html</a> page.</p>

<p>To limit output only to your IP address, we can use the following CLI command:  </p>

<pre><code>varnishlog -q "ReqHeader eq 'X-Real-IP: 178.223.40.243'"  
</code></pre>

<p>Note: Update this to your real IP address. You can use X-Real-IP or X-Forwarded-For depending on your upstream and web server settings.</p>

<p>The standard output has a lot of information, but mostly it starts with:  </p>

<pre><code>&lt;&lt; Request  &gt;&gt;  
</code></pre>

<p>Then it has a timestamp, ReqStart, ReqMethod (GET, POST...), ReqURL, and ReqProtocol, but the longest parts are the ReqHeader and RespHeader sections. varnishlog is not the tool you want for collecting logs to store; however, it's the perfect choice for obtaining debugging data, thanks to the enormous amount of data it produces.</p>

<p>The one I find important to debug the Varnish cache is:  </p>

<pre><code>VCL_call  
</code></pre>

<p>It can end up as DELIVER (not cached) or HIT (cached). You can read the Varnish documentation page <a href="https://book.varnish-software.com/4.0/chapters/Examining_Varnish_Server_s_Output.html">Varnish Output</a> for the other options that may appear. A VCL_call Deliver on an uncached page is typically followed by RespHeader Age: 0.</p>

<p><img src="https://nemanja.io/content/images/2022/09/2022-09-16_17-56.png" alt="Magento 2 Debug Varnish cache in production mode">
<img src="https://nemanja.io/content/images/2022/09/2022-09-16_17-55.png" alt="Magento 2 Debug Varnish cache in production mode"></p>
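<p>You can also spot-check the relevant response headers from any shell without varnishlog; the domain below is a placeholder for your own store:</p>

<pre><code># Inspect caching-related headers on a page; a growing Age value
# across repeated requests usually indicates a cache HIT
curl -sI https://your-store.example.com/ | grep -iE '^(age|x-cache|x-magento)'
</code></pre>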

<p>2) If you find this hard and not efficient enough (yeah, I know the browser's inspect-headers view sounds better), we can do the following to make debugging easier.</p>

<ul>
<li>In the <em>sub vcl_deliver {</em> section, comment out a few lines.</li>
</ul>

<pre><code> //if (resp.http.X-Magento-Debug) {
        if (resp.http.x-varnish ~ " ") {
            set resp.http.X-Magento-Cache-Debug = "HIT";
        } else {
            set resp.http.X-Magento-Cache-Debug = "MISS";
        }
    //} else {
       // # unset resp.http.Age;
    //}
</code></pre>

<p>Note: by default, these lines appear without the // comments; adding them makes the debug header get set on every response.</p>

<p>Save the VCL file, test, and reload your Varnish cache. You will see there is a new RespHeader added on page load.</p>

<p><img src="https://nemanja.io/content/images/2022/09/2022-09-16_21-46.png" alt="Magento 2 Debug Varnish cache in production mode"></p>

<p>What is <strong>x-magento-cache-debug</strong> you can read here: <br>
<a href="https://experienceleague.adobe.com/docs/commerce-operations/configuration-guide/cache/varnish/config-varnish-final.html">https://experienceleague.adobe.com/docs/commerce-operations/configuration-guide/cache/varnish/config-varnish-final.html</a></p>

<p>That's it! With this simple trick, you can leave your store in Production mode and still debug whether Varnish serves pages as a HIT or a MISS.</p>

<p>Good luck!</p>]]></content:encoded></item><item><title><![CDATA[Tools to analyze Redis databases]]></title><description><![CDATA[<p>Hello,</p>

<p>Today I would like to explain a simple trick of how you can search and analyze any data stored in Redis and then export it in JSON format to filter better. </p>

<p>You can get the dump if you export data on the production node/server.rdb file from Redis</p>]]></description><link>https://nemanja.io/tools-to-analyze-redis-databases/</link><guid isPermaLink="false">79d06b11-651e-4319-9dee-9542bb7247da</guid><category><![CDATA[redis]]></category><category><![CDATA[redis database]]></category><category><![CDATA[redis keys]]></category><category><![CDATA[redis value]]></category><category><![CDATA[redis commander]]></category><category><![CDATA[export redis]]></category><dc:creator><![CDATA[Nemanja Djuric]]></dc:creator><pubDate>Thu, 08 Sep 2022 15:45:17 GMT</pubDate><media:content url="http://nemanja.io/content/images/2022/09/1_77Vo1RFQ-5DcLKdeHbb2-A.png" medium="image"/><content:encoded><![CDATA[<img src="http://nemanja.io/content/images/2022/09/1_77Vo1RFQ-5DcLKdeHbb2-A.png" alt="Tools to analyze Redis databases"><p>Hello,</p>

<p>Today I would like to explain a simple trick of how you can search and analyze any data stored in Redis and then export it in JSON format to filter better. </p>

<p>You can get the dump (.rdb file) by exporting the data on the production node/server from the Redis root folder (CONFIG GET DIR), or, if persistence mode is not enabled (something ephemeral), you can extract the data using the <a href="https://github.com/r043v/rdd">https://github.com/r043v/rdd</a> tool.</p>

<p>An RDB file is a point-in-time dump. You can call SAVE (or BGSAVE) to force one. It will be stored under the dbfilename setting you have, or as dump.rdb in the current working directory if that setting is missing.</p>

<p>More Info: <a href="http://redis.io/topics/persistence">http://redis.io/topics/persistence</a></p>
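<p>As a quick example, you can locate the dump directory and force a snapshot from redis-cli before copying the .rdb file off the server:</p>

<pre><code># Where Redis writes its dump file, and under which name
redis-cli CONFIG GET dir
redis-cli CONFIG GET dbfilename

# Force a snapshot in the background, then copy the resulting .rdb file
redis-cli BGSAVE
</code></pre>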

<p>Now that rdb dump is ready on your local computer or any other device/server, you can start by installing Redis-Commander.</p>

<p>What is Redis Commander? <br>
Redis-Commander is a node.js web application used to view, edit, and manage a Redis Database.</p>

<p>More Info: <a href="http://joeferner.github.io/redis-commander/">http://joeferner.github.io/redis-commander/</a></p>

<p>Super simple to get going with NPM:  </p>

<pre><code>npm install -g redis-commander  
</code></pre>

<p>Start it with:  </p>

<pre><code>redis-commander  
</code></pre>

<p>Then point your browser to the local computer's address in the console. By default, access with browser at <a href="http://127.0.0.1:8081">http://127.0.0.1:8081</a> address.</p>

<p>If you are using any Remote IDE environment like Gitpod (<a href="https://gitpod.io">https://gitpod.io</a>), you can get the address exposed when Redis-commander service starts.</p>
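<p>Redis-Commander can also be pointed at a specific instance at startup; the flags below are commonly used ones (verify with <code>redis-commander --help</code>; the host, password, and port values are placeholders):</p>

<pre><code># Connect to a specific Redis instance and serve the UI on port 8081
redis-commander --redis-host 127.0.0.1 --redis-port 6379 --redis-password secret --port 8081
</code></pre>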

<p>Make sure you have Redis installed. I will not cover how to install the Redis server and CLI tools here, but make sure it is started on your local computer or remote device/server.</p>

<p>To check that Redis Commander works, load the UI; you should see it connected to your Redis server. <br>
<img src="https://nemanja.io/content/images/2022/09/2022-09-08_17-32.png" alt="Tools to analyze Redis databases"></p>

<p>If you wish to start the server with the Redis dump you exported from your server, you can: stop this instance and start the server again with the following command:  </p>

<pre><code>/usr/bin/redis-server --port 6379 --dbfilename backup_of_master.rdb
</code></pre>

<p>** my redis-server is at /usr/bin path, but you can type <code>whereis redis-server</code> to see where it is installed.</p>

<p>You can start analyzing the data. There is also a built-in helper that can assist you with certain commands. One of the best functionalities here is that we can now export data in JSON format and analyze the values much better using Sublime or any other popular editor, or even online using the <a href="https://jsoneditoronline.org/classic/index.html#left=local.sugeve">https://jsoneditoronline.org/classic/index.html#left=local.sugeve</a> website. Neat!</p>

<p>A second popular application for analyzing Redis data is <a href="https://resp.app/">https://resp.app/</a></p>

<p>Good luck, happy troubleshooting!</p>]]></content:encoded></item><item><title><![CDATA[Manage Remote Kubernetes clusters with VS Code IDE editor]]></title><description><![CDATA[<p>Hello,</p>

<p>Today I want to show how you can use your local VS Code IDE editor to manage Remote Kubernetes Clusters and connect to running containers, browse local files/folders, and works with MySQL, Redis, ElasticSearch, Debug, and a lot more!</p>

<p>I will skip the part where I wrote one</p>]]></description><link>https://nemanja.io/manage-remote-kubernetes-clusters-with-vs-code-ide-editor/</link><guid isPermaLink="false">17dc330e-3d47-48dd-9e77-a823b0ede154</guid><category><![CDATA[vscode]]></category><category><![CDATA[vscode ide]]></category><category><![CDATA[kubernetes]]></category><category><![CDATA[manage]]></category><category><![CDATA[remote]]></category><category><![CDATA[vs code kubernetes]]></category><category><![CDATA[kubernetes tool]]></category><dc:creator><![CDATA[Nemanja Djuric]]></dc:creator><pubDate>Sat, 30 Jul 2022 10:10:10 GMT</pubDate><media:content url="http://nemanja.io/content/images/2022/07/capa-kubernetes-400x280.png" medium="image"/><content:encoded><![CDATA[<img src="http://nemanja.io/content/images/2022/07/capa-kubernetes-400x280.png" alt="Manage Remote Kubernetes clusters with VS Code IDE editor"><p>Hello,</p>

<p>Today I want to show how you can use your local VS Code IDE editor to manage remote Kubernetes clusters: connect to running containers, browse files/folders, and work with MySQL, Redis, ElasticSearch, debugging, and a lot more!</p>

<p>I will skip the part where I describe one of the possible methods for connecting your kubectl tool to a remote Kubernetes cluster running in the cloud or on bare-metal servers; that is already covered in the <a href="https://nemanja.io/manage-remote-kubernetes-clusters-with-kubectl-and-lens/">https://nemanja.io/manage-remote-kubernetes-clusters-with-kubectl-and-lens/</a> article, so please open it now if you still need to set that part up on your local computer.</p>

<p>To install the extension, visit <a href="https://marketplace.visualstudio.com/items?itemName=ms-kubernetes-tools.vscode-kubernetes-tools">https://marketplace.visualstudio.com/items?itemName=ms-kubernetes-tools.vscode-kubernetes-tools</a>, or launch VS Code Quick Open (Ctrl+P), paste the following command, and press Enter.</p>

<pre><code>ext install ms-kubernetes-tools.vscode-kubernetes-tools  
</code></pre>

<p>Once the extension is installed, click its icon in the left sidebar and start exploring.</p>

<p><img src="https://nemanja.io/content/images/2022/07/2022-07-30_11-36.png" alt="Manage Remote Kubernetes clusters with VS Code IDE editor"></p>

<p>You will notice the cluster kubernetes-admin@kubernetes. That is the one you set up using my tutorial with the SSH tunnel mechanism. This extension has very extensive documentation on what you can actually do with it from your VS Code editor; feel free to review <a href="https://github.com/vscode-kubernetes-tools/vscode-kubernetes-tools">https://github.com/vscode-kubernetes-tools/vscode-kubernetes-tools</a>. I will cover just the basics, for example how to connect/attach to a container within a namespace. One of the best features is "Attach Visual Studio Code", which remotely connects you to a specific container in a separate VS Code window; you can work there the same way you do locally, and even install extensions in that container which will be available again next time.</p>

<p>Press CTRL+SHIFT+P keys, then type <strong>kubernetes</strong> <br>
<img src="https://nemanja.io/content/images/2022/07/2022-07-30_11-45.png" alt="Manage Remote Kubernetes clusters with VS Code IDE editor"></p>

<p>Select the <strong>Kubernetes: Use Namespace</strong> option and start typing your namespace name, which I assume you already know. You can also explore all namespaces (slower if you have a lot of them) and then search within that view simply by typing keywords.</p>
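<p>Under the hood, switching the active namespace is equivalent to updating the current kubeconfig context from a terminal; a quick sketch (the namespace name below is a placeholder):</p>

```shell
# Make the chosen namespace the default for the current kubeconfig context
kubectl config set-context --current --namespace=my-namespace
```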

<p>Once a specific namespace is selected, we can explore everything that Kubernetes offers, like Deployments, Workloads, Secrets, and ConfigMaps. I like this because you can edit something within a YAML file, and with a single click it will validate and save the change, after which Kubernetes rolls out the new version. You can also schedule this change for any time that suits you, close the editor, and enjoy your day while the task runs in Kubernetes.</p>

<p>Under Deployments --> Workload --> Pods you can simply open a shell into any available container within a pod, and you will be asked which one if there are multiple within a single Kubernetes pod.</p>

<p><img src="https://nemanja.io/content/images/2022/07/2022-07-30_11-54.png" alt="Manage Remote Kubernetes clusters with VS Code IDE editor"></p>

<p>From the same area, right-click and you can do other things like grab logs, describe the pod/container, or actively watch it in a separate window while troubleshooting starts/stops.</p>
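<p>These right-click actions map to plain kubectl commands; the terminal equivalents would look like this (pod, container, and namespace names are placeholders):</p>

```shell
# Open a shell inside a specific container of a pod
kubectl exec -it my-pod -c my-container -n my-namespace -- /bin/sh

# Stream that container's logs
kubectl logs my-pod -c my-container -n my-namespace --follow

# Describe the pod while troubleshooting
kubectl describe pod my-pod -n my-namespace
```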

<p>The next super feature of this extension is that you can actually "Attach" a VS Code server directly to any Kubernetes pod/container and connect to it remotely, working from your computer the same way as you do locally. I find this super useful for many reasons; I will repeat a few here:</p>

<ul>
<li>Search is amazing, fast, and accurate</li>
<li>You can edit any file and process it with many useful VS Code extensions like a parser, Tabnine, or similar</li>
<li>Smart log extension <a href="https://marketplace.visualstudio.com/items?itemName=mbehr1.smart-log">https://marketplace.visualstudio.com/items?itemName=mbehr1.smart-log</a></li>
<li>MySQL/ElasticSearch/Redis browser/editor <a href="https://marketplace.visualstudio.com/items?itemName=cweijan.vscode-mysql-client2">https://marketplace.visualstudio.com/items?itemName=cweijan.vscode-mysql-client2</a></li>
</ul>

<p><img src="https://nemanja.io/content/images/2022/07/2022-07-30_12-04.png" alt="Manage Remote Kubernetes clusters with VS Code IDE editor"></p>

<p>You can install anything useful that will help you develop, troubleshoot/debug, or test, with the added benefit that you work from a single place while operating the Kubernetes cluster.</p>

<p>I hope this article was helpful. Happy DevOpsing :)</p>]]></content:encoded></item><item><title><![CDATA[Use CSSNano and Postcss to Minify CSS in Magento 2]]></title><description><![CDATA[<p>Hello,</p>

<p>Today I want to show how to use another tremendous external tool to minify a theme's CSS files, bypassing Magento 2's built-in (core) functionality.</p>

<p>Minification is taking some code and using various methods to reduce its size on disk. Unlike techniques such as gzip, which preserve the original semantics of</p>]]></description><link>https://nemanja.io/use-cssnano-and-postcss-to-minify-css-in-magento-2/</link><guid isPermaLink="false">9fb9f7ab-843d-44d6-9950-a10091cd40af</guid><category><![CDATA[css]]></category><category><![CDATA[minify]]></category><category><![CDATA[minify css]]></category><category><![CDATA[magento 2 css minify]]></category><category><![CDATA[magento css minify]]></category><category><![CDATA[cssnano postcss]]></category><category><![CDATA[cssnano]]></category><category><![CDATA[css-size]]></category><category><![CDATA[css size]]></category><dc:creator><![CDATA[Nemanja Djuric]]></dc:creator><pubDate>Sun, 19 Jun 2022 14:27:20 GMT</pubDate><media:content url="http://nemanja.io/content/images/2022/06/11719785413_bea9ccd83a.jpg" medium="image"/><content:encoded><![CDATA[<img src="http://nemanja.io/content/images/2022/06/11719785413_bea9ccd83a.jpg" alt="Use CSSNano and Postcss to Minify CSS in Magento 2"><p>Hello,</p>

<p>Today I want to show how to use another tremendous external tool to minify a theme's CSS files, bypassing Magento 2's built-in (core) functionality.</p>

<p>Minification is taking some code and using various methods to reduce its size on disk. Unlike techniques such as gzip, which preserve the original semantics of the CSS file and are therefore lossless, minification is an inherently lossy process, where values can be replaced with smaller equivalents or selectors merged, for example.</p>

<p>The final result of a minification step is that the resulting code will behave the same as the original file, but some parts will be altered to reduce the size as much as possible.</p>

<p>Combining gzip compression with minification leads to the best reduction in file size, but don't just take the vendor's word for it. </p>

<p>The vendor suggests trying out css-size, a module created especially to measure CSS size before &amp; after minification: <br>
<a href="https://npmjs.org/package/css-size">https://npmjs.org/package/css-size</a></p>

<p>cssnano is one such minifier, written in Node.js. It's a PostCSS plugin that you can add to your build process to ensure that the resulting stylesheet is as small as possible for a production environment.</p>

<p>To get started, as per the vendor's <a href="https://cssnano.co/docs/getting-started/">https://cssnano.co/docs/getting-started/</a> page, we will install the cssnano and postcss CLI tools.</p>

<pre><code>npm install --save-dev postcss cssnano cssnano-cli postcss-cli  
</code></pre>

<p>You can also use -g to install the tools globally (note that -g replaces --save-dev; the two don't combine):  </p>

<pre><code>npm install -g postcss cssnano cssnano-cli postcss-cli  
</code></pre>

<p>Now that the tools are installed, you can use the <strong>whereis</strong> command to get the path of the postcss binary:</p>

<pre><code>whereis postcss  
</code></pre>

<p>Once you have done this, you will need to configure cssnano by creating a <strong>postcss.config.js</strong> file in the root of your project. This should contain cssnano as well as any other plugins that you might want for your project; as an example:</p>

<pre><code>module.exports = {  
    plugins: [
        require('cssnano')({
            preset: 'default',
        }),
    ],
};
</code></pre>

<p>Read more about presets at <a href="https://cssnano.co/docs/presets">https://cssnano.co/docs/presets</a>. There are tons of other options available, but for Magento 2 specifically I have tested with the "default" preset.</p>

<p>Next, we will store the <strong>theme folder</strong> path in a bash variable:  </p>

<pre><code>THEME_FOLDER=('/workspace/magento2gitpod/pub/static/frontend/THEME/porto/en_US')  
</code></pre>

<p>Note: make sure you enter the correct path to your deployed theme folder in the pub/ directory</p>
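<p>If you are unsure of the exact path, each deployed theme/locale pair sits exactly three directory levels below pub/static/frontend, so you can list the candidates with find. A sketch against a scratch layout (standing in for a real Magento root):</p>

```shell
# Scratch layout mimicking pub/static/frontend/<Vendor>/<theme>/<locale>
TMP=$(mktemp -d)
mkdir -p "$TMP/pub/static/frontend/THEME/porto/en_US" \
         "$TMP/pub/static/frontend/Magento/luma/en_US"

# Each deployed theme/locale directory is exactly three levels down
find "$TMP/pub/static/frontend" -mindepth 3 -maxdepth 3 -type d

rm -rf "$TMP"
```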

<p>Time to run the command and replace the current CSS files with minified versions of them, keeping the same file names.</p>

<pre><code>find ${THEME_FOLDER[@]} \( -name '*.css' -not -name '*.min.css' -not -name '*.json' -not -name '*.js' \) -exec /usr/local/nvm/versions/node/v16.14.0/bin/postcss --replace \{} \;  
</code></pre>

<p>Note: I've used the nvm (Node Version Manager) tool and a specific version, v16.14.0. The tool vendors claim that you can safely use anything after version 10.</p>

<p>Time to measure before/after :)</p>

<pre><code>npm install -g css-size  
</code></pre>

<p>Pick one random CSS file before running it, for example: <br>
<img src="https://nemanja.io/content/images/2022/06/2022-06-19_12-19.png" alt="Use CSSNano and Postcss to Minify CSS in Magento 2"></p>

<p>Auspicious results :) Some files show more significant differences than others. With the advanced preset, you can also remove comments, which is good in a production environment to save more space and reduce CSS sizes.</p>

<p>Here you can find detailed instructions on what CSSnano Advanced preset offers: <br>
<a href="https://cssnano.co/docs/advanced-transforms/">https://cssnano.co/docs/advanced-transforms/</a> <br>
<a href="https://cssnano.co/docs/what-are-optimisations/">https://cssnano.co/docs/what-are-optimisations/</a></p>

<p>Since the tool is a pure CLI, you can integrate it into any CI/CD deployment process and always run it after the setup:static-content:deploy command of the native Magento tool.</p>

<p>Happy optimizing!</p>]]></content:encoded></item><item><title><![CDATA[Manage remote Kubernetes clusters with kubectl and Lens]]></title><description><![CDATA[<p>Hello,</p>

<p>This article will explain how to access the Kubernetes cluster remotely using the SSH Tunnel mechanism. </p>

<p>The most popular tool to manage Kubernetes clusters from CLI is called kubectl. The Kubernetes command-line tool, kubectl, allows you to run commands against Kubernetes clusters. You can use kubectl to deploy applications,</p>]]></description><link>https://nemanja.io/manage-remote-kubernetes-clusters-with-kubectl-and-lens/</link><guid isPermaLink="false">b76be9d7-2d3c-43b3-92bb-a3042f623dd1</guid><category><![CDATA[manage]]></category><category><![CDATA[remote]]></category><category><![CDATA[kubernetes]]></category><category><![CDATA[manage kubernetes cluster]]></category><category><![CDATA[kubectl]]></category><category><![CDATA[lens]]></category><category><![CDATA[kubernetes lens]]></category><category><![CDATA[kubectl ssh tunnel]]></category><dc:creator><![CDATA[Nemanja Djuric]]></dc:creator><pubDate>Fri, 13 May 2022 09:14:29 GMT</pubDate><media:content url="http://nemanja.io/content/images/2022/05/00c71841fc9d2334f53b68a74877865c.png" medium="image"/><content:encoded><![CDATA[<img src="http://nemanja.io/content/images/2022/05/00c71841fc9d2334f53b68a74877865c.png" alt="Manage remote Kubernetes clusters with kubectl and Lens"><p>Hello,</p>

<p>This article will explain how to access the Kubernetes cluster remotely using the SSH Tunnel mechanism. </p>

<p>The most popular tool to manage Kubernetes clusters from CLI is called kubectl. The Kubernetes command-line tool, kubectl, allows you to run commands against Kubernetes clusters. You can use kubectl to deploy applications, inspect and manage cluster resources, and view logs.</p>

<p>The Kubernetes.io page <a href="https://kubernetes.io/docs/tasks/tools/">https://kubernetes.io/docs/tasks/tools/</a> explains how you can install it on your operating system of choice, but today I will focus on what I use as a desktop: Fedora Linux.</p>

<p>Before you begin, please note that you must use a kubectl version that is within one minor version difference of your cluster. For example, a v1.24 client can communicate with v1.23, v1.24, and v1.25 control planes. Using the latest compatible version of kubectl helps avoid unforeseen issues.</p>

<p>As the vendor page <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/">https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/</a> says, you can simply use curl to download a specific version.</p>

<p>In my example, I had to install version 1.15.1 to match the running Kubernetes cluster.</p>

<pre><code>curl -LO https://dl.k8s.io/release/v1.15.1/bin/linux/amd64/kubectl  
</code></pre>
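<p>One step worth adding here: the binary downloaded with curl is not executable yet, so mark it executable before the first run (as the upstream install docs also note):</p>

```shell
# Make the downloaded kubectl binary executable
chmod +x ./kubectl
```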

<p>Next, we can copy kubectl to /usr/sbin or /usr/bin but we can also edit ~/.bashrc and add an alias as a workaround:  </p>

<pre><code>alias kubectl='/home/nemke/kubectl'  
</code></pre>

<p>We can test the tool; it should list the basic helper commands. Next, we need to create the .kube directory and copy over the config file from the Kubernetes master or a worker node.  </p>

<pre><code>mkdir -p ~/.kube  
</code></pre>

<p>You can use rsync, scp, or similar, or simply copy/paste the config file, and leave it at the ~/.kube/config path.</p>

<p>Edit the config file and adjust the server: section as follows:  </p>

<pre><code>server: https://127.0.0.1:6443  
</code></pre>

<p>Under the - cluster: line we need to add:  </p>

<pre><code>insecure-skip-tls-verify: true  
</code></pre>

<p>And comment out certificate-authority-data: if your cluster has it.</p>
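<p>Putting those three edits together, the clusters: section of ~/.kube/config should end up looking roughly like this (the cluster name stays whatever yours already is):</p>

```yaml
clusters:
- cluster:
    # certificate-authority-data: ... (commented out)
    insecure-skip-tls-verify: true
    server: https://127.0.0.1:6443
  name: kubernetes
```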

<p>Here is how the config file should look: <br>
<img src="https://nemanja.io/content/images/2022/05/DeepinScreenshot_select-area_20220513110022.png" alt="Manage remote Kubernetes clusters with kubectl and Lens"></p>

<p>Log in to the Kubernetes master or a worker node and grab the cluster IP address from the /etc/hosts file. You could use the kubectl tool to get that info as well, but let's do it this way.</p>

<p>Now it's time to set up the SSH tunnel; this is a working example:  </p>

<pre><code>ssh -f username@IP-OF-SERVER -L 6443:CLUSTER-IP:6443 -N  
</code></pre>

<p>When executed, the command backgrounds itself and returns immediately (that is what the -f and -N flags do). That's it! Time to test...</p>
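<p>Before pointing kubectl at it, you can quickly check that the tunnel's local end is accepting connections; a small sketch using bash's built-in /dev/tcp (prints "open" once the forward from the previous step is running):</p>

```shell
# Probe the local end of the SSH tunnel on port 6443
if timeout 2 bash -c 'exec 3<>/dev/tcp/127.0.0.1/6443' 2>/dev/null; then
  echo open
else
  echo closed
fi
```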

<p><img src="https://nemanja.io/content/images/2022/05/DeepinScreenshot_select-area_20220513110521.png" alt="Manage remote Kubernetes clusters with kubectl and Lens"></p>

<p>You are now managing Kubernetes cluster from your local environment. Good luck!</p>

<p>Let's now put everything under a UI. I've selected Lens because I think this Kubernetes IDE is brilliant when you need to troubleshoot and analyze cluster events, or simply pull up multiple container logs and monitor them. Lens, the Kubernetes IDE, is the fruit of a Mirantis-sponsored open source project. Available for Linux, Mac, and Windows, Lens gives you a powerful interface and toolkit for managing, visualizing, and interacting with multiple Kubernetes clusters, while always remaining in the proper context.</p>

<p>Visit <a href="https://k8slens.dev/">https://k8slens.dev/</a>, download your desired application, and continue.</p>

<p>When you log in and browse Clusters, you will see yours listed right there: <br>
<img src="https://nemanja.io/content/images/2022/05/DeepinScreenshot_select-area_20220513110523.png" alt="Manage remote Kubernetes clusters with kubectl and Lens"></p>

<p>Click on it and enjoy analyzing the cluster and logging in to the nodes. You still have the help of a terminal where kubectl runs, so you can execute commands: <br>
<img src="https://nemanja.io/content/images/2022/05/DeepinScreenshot_select-area_20220513111043.png" alt="Manage remote Kubernetes clusters with kubectl and Lens"></p>

<p>Final thoughts/conclusions... <br>
<img src="https://nemanja.io/content/images/2022/05/its-a-kubernetes-cluster-i-know-this-1.jpg" alt="Manage remote Kubernetes clusters with kubectl and Lens"></p>

<p>I hope this article was helpful. Using this method you can add multiple Kubernetes clusters to your desktop environment and manage them successfully. Good luck!</p>

<p>It's been a while since my last post. Today I want to share what L2 caching in Magento is and two ways you can utilize it. </p>

<p>So what is L2 caching? <br>
In really simple words to reduce network congestion and bandwidth usage between microservices, this</p>]]></description><link>https://nemanja.io/l2-caching-in-magento/</link><guid isPermaLink="false">29384b00-1e8d-49b0-89cf-9f064eff6df6</guid><category><![CDATA[l2 cache]]></category><category><![CDATA[l2]]></category><category><![CDATA[magento 2.4 cache]]></category><category><![CDATA[optimize cache]]></category><category><![CDATA[redis cache]]></category><category><![CDATA[magento 2 redis]]></category><dc:creator><![CDATA[Nemanja Djuric]]></dc:creator><pubDate>Sun, 27 Mar 2022 09:15:31 GMT</pubDate><media:content url="http://nemanja.io/content/images/2022/03/cache.jpg" medium="image"/><content:encoded><![CDATA[<img src="http://nemanja.io/content/images/2022/03/cache.jpg" alt="L2 caching in Magento"><p>Hello my friends,</p>

<p>It's been a while since my last post. Today I want to share what L2 caching in Magento is and two ways you can utilize it. </p>

<p>So what is L2 caching? <br>
In really simple terms, it reduces network congestion and bandwidth usage between microservices; the mechanism is implemented using the \Magento\Framework\Cache\Backend\RemoteSynchronizedCache class. </p>

<p>Magento stores the hashed data version in Redis, with the suffix ‘:hash’ appended to the regular key. In case of an outdated local cache, the data is transferred to the local machine with a cache adapter.</p>

<p>The Magento devdocs give a really nice example of how to modify or replace the existing cache section in the app/etc/env.php file (<a href="https://devdocs.magento.com/guides/v2.4/config-guide/cache/two-level-cache.html">https://devdocs.magento.com/guides/v2.4/config-guide/cache/two-level-cache.html</a>). That setup stores the local backend in RAM, but what I want to share today is that you can split the load between two Redis environments instead: say, one hosted locally and one on another node over a good network.</p>

<pre><code>'cache' =&gt; [  
        'frontend' =&gt; [
            'default' =&gt; [
                'backend' =&gt; '\\Magento\\Framework\\Cache\\Backend\\RemoteSynchronizedCache',
                'backend_options' =&gt; [
                    'remote_backend' =&gt; '\\Magento\\Framework\\Cache\\Backend\\Redis',
                    'remote_backend_options' =&gt; [
                        'persistent' =&gt; 0,
                        'server' =&gt; 'redis-somewhere-in-the-cloud',
                        'database' =&gt; '1',
                        'port' =&gt; '6379',
                        'password' =&gt; '',
                        'compress_data' =&gt; '1'
                    ],
                    'local_backend' =&gt; '\\Magento\\Framework\\Cache\\Backend\\Redis',
                    'local_backend_options' =&gt; [
                        'persistent' =&gt; 0,
                        'server' =&gt; '127.0.0.1',
                        'database' =&gt; '3',
                        'port' =&gt; '6379',
                        'password' =&gt; '',
                        'compress_data' =&gt; '1'
                    ],
                    'use_stale_cache' =&gt; false
                ],
                'frontend_options' =&gt; [
                    'write_control' =&gt; false
                ],
                'id_prefix' =&gt; 'd01_'
            ],
            'page_cache' =&gt; [
                'id_prefix' =&gt; 'd01_',
                'backend' =&gt; 'Magento\\Framework\\Cache\\Backend\\Redis',
                'backend_options' =&gt; [
                    'server' =&gt; '127.0.0.1',
                    'database' =&gt; '2',
                    'port' =&gt; '6379',
                    'password' =&gt; '',
                    'compress_data' =&gt; '0',
                    'compression_lib' =&gt; ''
                ]
            ]
        ],
        'allow_parallel_generation' =&gt; false
    ],
</code></pre>

<p>In my example using <a href="https://github.com/nemke82/magento2gitpod">https://github.com/nemke82/magento2gitpod</a> I've run tests, and indeed the data is spread between the two services.</p>

<p>Output of <code>redis-cli info</code> (Keyspace section):</p>

<pre><code>gitpod /workspace/magento2gitpod $ redis-cli info
# Keyspace
db0:keys=1,expires=1,avg_ttl=11052265
db1:keys=148,expires=42,avg_ttl=786241446   &lt;-- 1st Redis
db3:keys=62,expires=21,avg_ttl=821121720    &lt;-- Failover Redis environment
</code></pre>

<p>Magento recommends using Redis for remote caching (\Magento\Framework\Cache\Backend\Redis) and Cm_Cache_Backend_File for the local caching of data in shared memory, using 'local_backend_options' => ['cache_dir' => '/dev/shm/'], but as you can see we can use a remote or locally hosted Redis instead and gain other benefits from it.</p>
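<p>For reference, that devdocs-recommended variant swaps only the local backend; the relevant fragment of the same backend_options array would look like this:</p>

```php
'local_backend' => 'Cm_Cache_Backend_File',
'local_backend_options' => [
    'cache_dir' => '/dev/shm/',
],
```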

<p>One of them is: <br>
<a href="https://devdocs.magento.com/guides/v2.4/config-guide/redis/redis-pg-cache.html#redis-preload-feature">https://devdocs.magento.com/guides/v2.4/config-guide/redis/redis-pg-cache.html#redis-preload-feature</a></p>

<p>and</p>

<p><a href="https://devdocs.magento.com/guides/v2.4/config-guide/cache/two-level-cache.html#stale-cache-options">https://devdocs.magento.com/guides/v2.4/config-guide/cache/two-level-cache.html#stale-cache-options</a></p>

<p>Hope this article helps. Good luck optimizing!</p>]]></content:encoded></item><item><title><![CDATA[Use Terser to minify all JavaScript assets in Magento 2]]></title><description><![CDATA[<p>Hello,</p>

<p>This is probably my last blog post in 2021. This year was really dynamic and interesting. While I was checking the performance and SEO score for one of the hosted clients, I noticed that Google has new, interesting recommendations for minifying all JavaScript assets.</p>

<p><img src="https://nemanja.io/content/images/2021/12/DeepinScreenshot_select-area_20211227222024.png" alt="alt"></p>

<p>Terser is the JavaScript</p>]]></description><link>https://nemanja.io/use-terser-to-minify-all-javascript-assets/</link><guid isPermaLink="false">9a306e97-c855-432c-9fda-26bd9c47a9fb</guid><category><![CDATA[terser]]></category><category><![CDATA[minify javascript]]></category><category><![CDATA[magento 2 javascript]]></category><category><![CDATA[magento 2 javascript minify]]></category><dc:creator><![CDATA[Nemanja Djuric]]></dc:creator><pubDate>Mon, 27 Dec 2021 22:02:58 GMT</pubDate><media:content url="http://nemanja.io/content/images/2021/12/68747470733a2f2f7465727365722e6f72672f696d672f7465727365722d62616e6e65722d6c6f676f2e706e67.png" medium="image"/><content:encoded><![CDATA[<img src="http://nemanja.io/content/images/2021/12/68747470733a2f2f7465727365722e6f72672f696d672f7465727365722d62616e6e65722d6c6f676f2e706e67.png" alt="Use Terser to minify all JavaScript assets in Magento 2"><p>Hello,</p>

<p>This is probably my last blog post in 2021. This year was really dynamic and interesting. While I was checking the performance and SEO score for one of the hosted clients, I noticed that Google has new, interesting recommendations for minifying all JavaScript assets.</p>

<p><img src="https://nemanja.io/content/images/2021/12/DeepinScreenshot_select-area_20211227222024.png" alt="Use Terser to minify all JavaScript assets in Magento 2"></p>

<p>Terser is the JavaScript minifier. It processes JavaScript files as well as the compiled output from other languages like CoffeeScript and TypeScript and transpilers like Babel.</p>

<p><a href="https://github.com/terser/terser">https://github.com/terser/terser</a></p>

<p>First, make sure you have installed the latest version of node.js (You may need to restart your computer after this step).</p>

<p>From NPM for use as a command-line app:  </p>

<pre><code>npm install terser -g  
</code></pre>

<p>Output:  </p>

<pre><code>$ node -v
v17.3.0

$ npm install terser -g

added 6 packages, and audited 7 packages in 687ms

found 0 vulnerabilities

$ whereis terser
terser: /usr/local/nvm/versions/node/v17.3.0/bin/terser  
</code></pre>

<p>Usage is really simple: <br>
<a href="https://github.com/terser/terser#command-line-usage">https://github.com/terser/terser#command-line-usage</a></p>

<p>Terser can take multiple input files. It's recommended that you pass the input files first, then pass the options. Terser will parse input files in sequence and apply any compression options. The files are parsed in the same global scope, that is, a reference from a file to some variable/function declared in another file will be matched properly.</p>

<p>If no input file is specified, Terser will read from STDIN.</p>

<p>If you wish to pass your options before the input files, separate the two with a double dash to prevent input files from being used as option arguments:  </p>

<pre><code>terser --compress --mangle -- input.js  
</code></pre>

<p>In Magento 2 specifically, we can run the following after the setup:static-content:deploy (static asset deployment) step has been processed, but before that, we need to know the path of our theme directory.</p>

<p>For example, let's now try to minify the default Luma theme at a specific path for the en_US language:  </p>

<pre><code>THEME_FOLDER=('/workspace/magento2gitpod/pub/static/frontend/Magento/luma/en_US')  
</code></pre>

<p><img src="https://nemanja.io/content/images/2021/12/terser1.png" alt="Use Terser to minify all JavaScript assets in Magento 2"></p>

<p>Now, let's minify the files with the following line:  </p>

<pre><code>find ${THEME_FOLDER[@]} \( -name '*.js' -not -name '*.min.js' -not -name 'requirejs-bundle-config.js' \) -exec /usr/local/nvm/versions/node/v17.3.0/bin/terser \{} -c -m reserved=['$','jQuery','define','require','exports'] -o \{} \;  
</code></pre>
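<p>Because this command rewrites files in place, it is worth previewing which files the find expression will match before letting terser loose. A sketch against a scratch directory, printing matches instead of minifying (only main.js should be listed):</p>

```shell
# Scratch files imitating a deployed theme folder
TMP=$(mktemp -d)
touch "$TMP/main.js" "$TMP/jquery.min.js" "$TMP/requirejs-bundle-config.js"

# Same predicates as the real command, but with -print instead of -exec terser
find "$TMP" \( -name '*.js' -not -name '*.min.js' -not -name 'requirejs-bundle-config.js' \) -print

rm -rf "$TMP"
```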

<p>Note: we skipped some patterns like *.min.js and requirejs-bundle-config.js, but if you are using MagePack or any other popular bundling tool, also skip the core bundle files they generate. For example, for the Baler tool:</p>

<p>-not -name 'core-bundle.js' </p>

<p>You can try minifying them as well, but my advice is to test this before adding it to your production environment.</p>

<p><img src="https://nemanja.io/content/images/2021/12/terser2.png" alt="Use Terser to minify all JavaScript assets in Magento 2"></p>

<p>The last step is to clear the Magento 2 cache (and Redis, if used) and test.</p>

<p>Run the audit test again, for example on the <a href="https://web.dev/measure/">https://web.dev/measure/</a> website, and compare the results before/after:</p>

<p><img src="https://nemanja.io/content/images/2021/12/terser3.png" alt="Use Terser to minify all JavaScript assets in Magento 2"></p>

<p>Disable the built-in JavaScript minification mechanism, integrate this tool for your theme into your deployment process, and enjoy the performance boost.</p>

<p>\o/ Hope this was helpful.</p>]]></content:encoded></item></channel></rss>