<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Appwrite on Google Cloud]]></title><description><![CDATA[LFX Mentee'23 @ CNCF Harbor | Postman Student Leader | NodeJS-Golang Backend + DevOps | Web3 Enthusiast | Learning Rust]]></description><link>https://blog.wilfredalmeida.com</link><generator>RSS for Node</generator><lastBuildDate>Fri, 10 Apr 2026 12:15:02 GMT</lastBuildDate><atom:link href="https://blog.wilfredalmeida.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Q4 2023: Learnings Summarized]]></title><description><![CDATA[I joined Underdog Protocol as a developer in October 2023, after attending the Solana HackerHouses in Bangalore and Mumbai. Before that, I had worked on a contract basis and as a mentee under the LFX Mentorship throughout the year.
This blog is about...]]></description><link>https://blog.wilfredalmeida.com/q4-2023</link><guid isPermaLink="true">https://blog.wilfredalmeida.com/q4-2023</guid><category><![CDATA[learning]]></category><category><![CDATA[Self Improvement ]]></category><category><![CDATA[Solana]]></category><category><![CDATA[underdog protocol]]></category><category><![CDATA[jobs]]></category><dc:creator><![CDATA[Wilfred Almeida]]></dc:creator><pubDate>Sun, 31 Dec 2023 06:07:54 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1704002377751/eddef652-2ee4-4222-8453-660673cc9bb3.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I joined <a target="_blank" href="https://underdogprotocol.com/">Underdog Protocol</a> as a developer in October 2023, after attending the Solana HackerHouses in Bangalore and Mumbai. Before that, I had worked on a contract basis and as a mentee under the LFX Mentorship throughout the year.</p>
<p>This blog is about how my thought process changed in the 3 months of starting my job. It's an unstructured outpour of my thoughts.</p>
<h2 id="heading-build-for-enterprises">Build for Enterprises</h2>
<p>I've built projects aimed at general users and developers. My approach to any problem started with "How can this benefit/work for me?" as an individual. Take billing, for example: I'd prefer project-based billing, reasoning "I'll pay for the specific projects I want and that's it". When Supabase switched to org-based billing this year, I was annoyed at it for the same reason. Underdog had org-based billing when I joined.</p>
<p>There was a bigger picture to this that I had yet to see. Org-based billing is preferred by enterprise and institutional users with more than one developer. It gives them a single abstraction over their usage and billing, and I assume there are more benefits as well. The service provider can also structure their offerings to allow any number of projects under an org's plan, team usage, and much more.</p>
<p>Rather than catering to individual devs, build for enterprises. Think about what you'd prefer if you were evaluating a service at an enterprise. If you fall short, talk to someone who has already built for enterprises, is building for them, or works at one.</p>
<h2 id="heading-learn-testing">Learn Testing</h2>
<p>I cannot stress enough how important testing is. Not giving it enough emphasis during my college days is a regret. At Underdog, every function, service, and module has a test. The code is thoroughly unit-tested, reviewed, and then deployed. When I started working on the codebase, it was challenging to navigate it and find the things I needed to work on. I rammed my head straight through the code trying to understand it, expecting comments/docs to explain <em>what the code does</em>. I didn't bother looking at the tests.</p>
<p>But hey, there are unit tests to test <em>what the code does.</em> So I could've just looked at the tests to figure out what the code did. Instead, I started by looking at what gets triggered when an API call hits our services and worked my way downward to find the core logic and work on it.</p>
<p>I managed to implement the first functionality assigned to me this way and made a PR. I was happy and confident my code worked and that it would get merged easily. The first question asked during my PR review was "Where's the test?". I had to write a test for the functionality I had added.</p>
<p>I had never written tests for my projects in the past, guilty as charged, and it had come back to bite me. I opened the tests, understood them, worked my way through, and wrote tests for my work.</p>
<p>Had I focused on tests earlier, it'd have been easier for me to work and for my colleagues to review. If you're reading this, as painful as it might be, write those damn tests.</p>
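<p>If you're starting out, a test can be tiny and still document behavior. Here's a hypothetical Rust example (the function and names are mine, not Underdog's code) showing how a unit test states <em>what the code does</em> without any comments:</p>

```rust
// Hypothetical example: a small function plus the unit test that
// documents its contract. Reading the test tells you the behavior
// at a glance.
fn slugify(title: &str) -> String {
    title
        .trim()
        .to_lowercase()
        .chars()
        .map(|c| if c.is_ascii_alphanumeric() { c } else { '-' })
        .collect()
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn lowercases_and_replaces_separators() {
        assert_eq!(slugify("Hello World"), "hello-world");
        assert_eq!(slugify("  Rust 101  "), "rust-101");
    }
}
```

<p>Run with <code>cargo test</code>; a reviewer reading only the test knows exactly what <code>slugify</code> promises.</p>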
<h2 id="heading-dont-reinvent-ux-study-others">Don't Reinvent UX, Study Others</h2>
<p>While working on <a target="_blank" href="https://passport.underdogprotocol.com/">Passport</a>, which is a service to facilitate wallet-less NFTs, we were working on adding email UI/UX to it. I had the liberty to design the UI, so as a dev, I did what was easy and fast at the beginning. As things progressed, I realized that the intended users for this service are primarily non-devs, with a mental model of email services.</p>
<p>The UX I had in place did the job; however, it didn't quite fit the mental model of email UX that users have in mind. I took some feedback from non-dev friends to confirm this. Now I had to re-engineer the UX. This time, I opened popular email services, studied their UX, and implemented one that maintains the mental model users have when they hear email.</p>
<h2 id="heading-understand-the-problem-at-hand">Understand the Problem at Hand</h2>
<p>Understand the problem at hand before jumping into solutions. I'm guilty of this; it's a habit of mine to partially understand a problem, jump into writing code, take a step back when a roadblock is hit, and only then understand the actual problem and solve it.</p>
<p>While understanding the problem, I simultaneously start thinking about possible solutions and get excited to start implementing. Lately, however, this dopamine rush has been causing trouble.</p>
<p>A simple strategy I've devised to tackle this is to:<br />- write the problem<br />- try to explain it to a 5-year-old</p>
<p>This has helped me calm my nerves and focus better.</p>
<h2 id="heading-dont-overengineer">Don't Overengineer</h2>
<p>In the past, for my projects, I've designed to handle a scale of millions but failed to get even 50 users for my app. This repetitive pattern, when applied in a complex codebase, causes overengineering with multiple levels of abstraction.</p>
<p>I'm not saying building for scale is bad; I love thinking about how systems and code are built &amp; maintained at a big scale. But complex doesn't mean scalable or maintainable.</p>
<p>The solution I found to this was:<br />- Learning about clean code principles<br />- Discussing my thoughts and solutions with peers and getting feedback</p>
<h2 id="heading-dont-overdocument-rewrite">Don't Overdocument, Rewrite</h2>
<p>This <a target="_blank" href="https://github.com/WilfredAlmeida/DataStructures-Code/blob/master/Data%20Structures/Binary%20Tree/AVL%20Tree.c">AVL Tree</a> code I wrote in 2020 has comment explanations for everything. I religiously comment and document all of my code to date and expect a similar level of documentation from others. It frustrates me when code doesn't have comments for everything to spoon-feed me. This has also been a barrier for me to work on open-source codebases.</p>
<p>Why is the code overdocumented? Because it's not readable.<br />Why is it not readable? Because I'm not aware of the practices that make code readable.<br />What's the solution? Learn how to write clean code.</p>
<p>So now I'm reading a book on how to write clean code. The book hit me like a truck. I make all of the rookie mistakes mentioned in it. Comments are great wherever they're helpful, but they're not compensation for bad code.</p>
<p>I now try to write code that follows a good set of clean code practices and refactor until I'm satisfied with the code or frustrated enough to not give a damn about it.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>The year has been full of learning. These are the most recent lessons and, I feel, the most impactful. I'm grateful to the team at Underdog Protocol for my immense growth in the past 3 months.</p>
<p>This blog aims to be a checkpoint as I learn, upskill, and progress ahead. When will the next version be out, you wonder? When I have enough points to fill a blog.</p>
<p>Hit me up on <a target="_blank" href="https://twitter.com/WilfredAlmeida_">Twitter/X</a> to talk about anything tech.</p>
]]></content:encoded></item><item><title><![CDATA[MSOL Solana SNS]]></title><description><![CDATA[MSOL is a Short Name Service (SNS) for Solana that gives you an easy-to-remember alias of your choice for your Solana Public Key.
MSOL also gives you an NFT for the SNS you create as proof that you own it.
An SNS can be up to 20 characters long and c...]]></description><link>https://blog.wilfredalmeida.com/msol-solana-sns</link><guid isPermaLink="true">https://blog.wilfredalmeida.com/msol-solana-sns</guid><category><![CDATA[Solana]]></category><category><![CDATA[WeMakeDevs]]></category><category><![CDATA[Vercel]]></category><dc:creator><![CDATA[Wilfred Almeida]]></dc:creator><pubDate>Mon, 14 Aug 2023 08:31:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1692001151543/1abde358-5eaa-4293-a1b7-336115deb7ac.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><a target="_blank" href="https://msol-sns.vercel.app/">MSOL</a> is a Short Name Service (SNS) for Solana that gives you an easy-to-remember alias of your choice for your Solana Public Key.</p>
<p>MSOL also gives you an NFT for the SNS you create as proof that you own it.</p>
<p>An SNS can be up to 20 characters long and can have letters, numbers, and hyphens, for example, <code>foo-bar</code>.</p>
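<p>That rule is easy to pin down in code. A minimal sketch in Rust (the function name is illustrative, not MSOL's actual implementation):</p>

```rust
// Sketch of the SNS rule described above: 1-20 characters, with only
// letters, digits, and hyphens allowed. (`is_valid_sns` is an
// illustrative name, not MSOL's actual code.)
fn is_valid_sns(sns: &str) -> bool {
    !sns.is_empty()
        && sns.len() <= 20
        && sns.chars().all(|c| c.is_ascii_alphanumeric() || c == '-')
}
```

<p>So <code>foo-bar</code> passes, while an underscore, an empty string, or anything over 20 characters is rejected.</p>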
<p>MSOL uses the <a target="_blank" href="https://vercel.com/docs/storage/vercel-kv">Vercel KV</a> Redis database to store SNS records, making it fast, with average SNS lookup times of ~250ms.</p>
<p>MSOL provides Web &amp; API lookup for your SNS for easy sharing &amp; development.</p>
<h2 id="heading-demo">Demo</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1692000774917/833f26a3-1e1b-4c1f-a6c5-aa5defd0db6b.gif" alt class="image--center mx-auto" /></p>
<h2 id="heading-public-key-lookup">Public Key Lookup</h2>
<p>There are 2 ways to look up your key:</p>
<ol>
<li><p>The web portal: In your browser, visiting <code>&lt;base-url&gt;/&lt;sns&gt;</code> will show you on the website whether the SNS exists or not.</p>
</li>
<li><p>Via API: To look up an SNS via an API call, hit <code>&lt;base-url&gt;/api/sns?sns=&lt;your-sns&gt;</code>. For example:</p>
</li>
</ol>
<pre><code class="lang-bash">curl --location 'http://localhost:5173/api/sns?sns=foo-bar'
</code></pre>
<p>Sample Response</p>
<ol>
<li>SNS Found</li>
</ol>
<pre><code class="lang-json">{
    <span class="hljs-attr">"status"</span>: <span class="hljs-string">"MSOL_SNS_FOUND"</span>,
    <span class="hljs-attr">"data"</span>: [
        {
            <span class="hljs-attr">"publicKey"</span>: <span class="hljs-string">"&lt;public-key&gt;"</span>,
            <span class="hljs-attr">"sns"</span>: <span class="hljs-string">"foo-bar"</span>
        }
    ],
    <span class="hljs-attr">"error"</span>: <span class="hljs-literal">null</span>
}
</code></pre>
<ol start="2">
<li>SNS Not Found</li>
</ol>
<pre><code class="lang-json">{
    <span class="hljs-attr">"status"</span>: <span class="hljs-string">"MSOL_SNS_NOT_FOUND"</span>,
    <span class="hljs-attr">"data"</span>: <span class="hljs-literal">null</span>,
    <span class="hljs-attr">"error"</span>: <span class="hljs-literal">null</span>
}
</code></pre>
<h2 id="heading-supporting-msol">Supporting MSOL</h2>
<p>MSOL aims to take away the hassle of remembering your Solana Public Key.</p>
<p>MSOL aims to always remain free for its users.</p>
<p>Due to a lack of financial resources, MSOL currently supports only Solana Devnet.</p>
<p>Please get in touch if you want to support MSOL.</p>
<p>Costs to be covered for now are:</p>
<ul>
<li><p>Vercel for hosting &amp; KV DB</p>
</li>
<li><p>Underdog for NFT</p>
</li>
<li><p>Images for NFTs</p>
</li>
<li><p>Domain</p>
</li>
</ul>
<h2 id="heading-end-note">End Note</h2>
<p>Check out MSOL and create your SNS and <a target="_blank" href="https://twitter.com/MsolSns">tweet</a> about it.</p>
<p>MSOL is created by <a target="_blank" href="https://twitter.com/WilfredAlmeida_">Wilfred Almeida</a>, get in touch.</p>
]]></content:encoded></item><item><title><![CDATA[DDNS for Business Production Workloads: Should You?]]></title><description><![CDATA[I've been working with AtomicAsher LLP, a bootstrapped company to get their technical flows decided and set up in order.
The internet connection we had was a non-business connection under the founder Anirudh's account. We decided to upgrade it to the...]]></description><link>https://blog.wilfredalmeida.com/ddns-for-business</link><guid isPermaLink="true">https://blog.wilfredalmeida.com/ddns-for-business</guid><category><![CDATA[dns]]></category><category><![CDATA[production]]></category><category><![CDATA[server]]></category><category><![CDATA[hosting]]></category><category><![CDATA[nginx]]></category><dc:creator><![CDATA[Wilfred Almeida]]></dc:creator><pubDate>Thu, 06 Jul 2023 05:14:31 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1688620161280/64923c79-5a60-4abc-8ea1-c1f6b5f4ecdc.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I've been working with <a target="_blank" href="https://atomicasher.com/">AtomicAsher LLP</a>, a bootstrapped company to get their technical flows decided and set up in order.</p>
<p>The internet connection we had was a non-business connection under the founder <a target="_blank" href="https://www.linkedin.com/in/anirudhasher">Anirudh's</a> account. We decided to upgrade it to the Airtel Business Broadband plan at INR 799/m, which satisfies our usage. After we completed the formalities and Airtel's personnel visited the office, we received an email about the service mentioning that we'd get a free static IP for our connection.</p>
<p>Most of the company services are in the Minimum Viable Product (MVP) stage as of this writing, where spending money on cloud deployment solutions didn't make much sense. Since Airtel's email promised us a free static IP, we decided to set up an in-house server for our deployments, Linux based development environment, and centralized file hosting. We purchased the hardware and the connection was upgraded.</p>
<hr />
<h3 id="heading-airtels-misleading-email">Airtel's Misleading Email</h3>
<p>When we tried to redeem the free static IP from the Airtel dashboard, we were disappointed to find it wasn't available for our plan. To get a static IP, we had two options from Airtel:</p>
<ol>
<li><p>Upgrade the broadband plan</p>
</li>
<li><p>Pay them an additional INR 99/m for the static IP</p>
</li>
</ol>
<p>Our communication attempts with Airtel were in vain with no solid accountability for the email from their side.</p>
<p>We had planned the architecture of our systems, invested in the hardware, and most importantly, spending money on the cloud for trial and testing didn't make sense.</p>
<h1 id="heading-enter-ddns">Enter DDNS</h1>
<p>Upon exploring our options, we came across <a target="_blank" href="https://en.wikipedia.org/wiki/Dynamic_DNS">Dynamic Domain Name System (DDNS)</a>.</p>
<blockquote>
<p><strong>Dynamic DNS</strong> (<strong>DDNS</strong>) is a method of automatically updating a <a target="_blank" href="https://en.wikipedia.org/wiki/Name_server">name server</a> in the <a target="_blank" href="https://en.wikipedia.org/wiki/Domain_Name_System">Domain Name System</a> (DNS), often in real time, with the active DDNS configuration of its configured hostnames, addresses or other information.</p>
<p>~ Wikipedia</p>
</blockquote>
<p>Since it seemed to eliminate our need for a static IP, we decided to go ahead with it and not purchase a static IP.</p>
<h2 id="heading-our-network-architecture">Our Network Architecture</h2>
<p>Since Airtel's router/access point didn't offer many features or much control, we decided to use a <a target="_blank" href="https://store-ui.in/products/unifi-dream-machine">Ubiquiti Dream Machine (UDM)</a> lying around in our hardware treasury as our primary network controller. We disabled the <a target="_blank" href="https://en.wikipedia.org/wiki/Dynamic_Host_Configuration_Protocol">DHCP</a> of the Airtel device, turned off its wifi, and used it only as a medium to interact with the fiber optic network cable.</p>
<p>To our surprise, Airtel has locked down the firmware so much that we couldn't disable the wifi from the dashboard ourselves.</p>
<p>We gave a WAN static IP to the UDM and made it our primary network controller.</p>
<h3 id="heading-airtels-unsolicited-control">Airtel's unsolicited control</h3>
<p>We asked Airtel about turning off the wifi and Airtel turned off the wifi signal of the physical device <strong>IN OUR OFFICE</strong> from <strong>THEIR SIDE</strong> via their control panel. This immediately made us question the access and control Airtel has over us and raised privacy concerns. But without much choice, we kept it aside and moved on.</p>
<h3 id="heading-airtel-and-ddns-providers">Airtel and DDNS Providers</h3>
<p>From our research, <a target="_blank" href="https://noip.com/">no-ip</a> seemed the most attractive provider for free and reliable DDNS. But to our surprise, Airtel doesn't support no-ip as a provider in its dashboard. The available providers were either:</p>
<ol>
<li><p>Paid and expensive</p>
</li>
<li><p>Their sites returned 404s</p>
</li>
</ol>
<p>Stunned by this behavior, we reached out to their support, who gave us vague responses like "your query has been forwarded to the respective team" for 2 days. The same people who had turned off our wifi so swiftly needed a team to respond to a question about no-ip as a DDNS provider.</p>
<blockquote>
<p>Personally, I find this hard to digest🤔</p>
</blockquote>
<hr />
<h3 id="heading-finally-ddns">Finally DDNS</h3>
<p>With the UDM supporting no-ip as a DDNS provider, we signed up on no-ip and tried to set up our DDNS. With some initial issues with IP updates and other things, we managed to get DDNS working.</p>
<p>When we hit our DDNS link, the Airtel router's management dashboard page, typically visible at <code>192.168.1.1</code>, loaded. Surprisingly, Airtel had port 80 open for us, or hadn't blocked the traffic yet; whatever the case, we had some progress.</p>
<h3 id="heading-port-forwarding">Port Forwarding</h3>
<p>We expected traffic to flow like this:</p>
<p><code>hit to our ddns URL -&gt; port of Airtel -&gt; Port of UDM -&gt; Port of Server</code></p>
<p>But this isn't how it was happening. After quite some time spent playing around with different port forwarding and firewall configs, we discovered that no port forward was needed on the Airtel device. Adding firewall rules in the UDM did the trick, and finally traffic hit our server 🥳</p>
<h3 id="heading-accessing-from-our-wifi">Accessing from Our WiFi</h3>
<p>We hosted a sample application on a port to test the DDNS setup. Trying to access it while connected to the in-office wifi didn't work, while accessing it from a different internet connection worked.</p>
<p>Then we discovered that this commonly happens with DDNS when the router doesn't support NAT loopback, and the solution is more work.</p>
<blockquote>
<p>Checkout this <a target="_blank" href="https://superuser.com/questions/1197794/cant-access-ddns-hostname-from-my-home-network">SuperUser</a> thread to learn more about this.</p>
</blockquote>
<p>The solution, as far as we understood, was to add an entry in our local DNS to resolve the DDNS URL. We planned on setting up <a target="_blank" href="https://pi-hole.net/">Pi-hole</a> for ad-blocking and local DNS.</p>
<h3 id="heading-ddns-performance">DDNS Performance</h3>
<p>For a basic hello-world API, without DNS or any other caching, the response time was under <code>100ms</code>. So the DDNS setup was fast enough for us.</p>
<hr />
<h1 id="heading-enter-nginx">Enter NGINX</h1>
<p><a target="_blank" href="https://www.nginx.com/">NGINX</a> is a web server that can also be used as a reverse proxy. We set up NGINX on our server. As mentioned earlier, Airtel's login page was loading on our port 80. When we tried to access it some days later, once our NGINX was set up, we were not surprised to see that it didn't load. A possible reason, we believe, is Airtel blocking port 80.</p>
<p>We changed NGINX to run on a different port and were able to access it via <code>OurUrl:port</code>.</p>
<p>We could have set up a port 80 redirect in no-ip, wherein it would redirect traffic from port 80 to some other port. But that wasn't free; we'd have needed to upgrade to premium, and the base premium rate as of 30th June 2023 was $1.99/m.</p>
<h3 id="heading-ssl-andamp-domain-issues">SSL &amp; Domain Issues</h3>
<p>We needed SSL. SSL from no-ip would cost us. We also needed a subdomain for our server under our main domain. Doing these things was complicated, tiring, and time-consuming.</p>
<hr />
<h1 id="heading-the-realization">The Realization</h1>
<p>We went with DDNS to save the INR 99/m, but we were now at a point where it was costing us billed time and effort to get it all up. We ultimately gave up and got a static IP from Airtel. Following is our reasoning:</p>
<ol>
<li><p>Is it worth it?</p>
<p> As a business, saving every penny you can is important. But if this little saving is costing in terms of time and effort, then is it worth it?</p>
</li>
<li><p>Figuring it all out</p>
<p> No one in-house had the practical experience of dealing with DDNS and the things discussed here. It was a figuring-out journey where we had to sit and look things up and try it and get it working. This costs time.</p>
</li>
<li><p>People working on it</p>
<p> With the size of the organization, we had only one big brain (me ofc) working on it all. Only that one person knew everything that was done to get it working. So documentation, knowledge transfer, and debugging would all depend on that person, whose talent is better utilized in other important aspects.</p>
</li>
<li><p>Delays Introduced</p>
<p> For in-house machine learning workloads, we planned to put it on a VM on the server hardware with GPU for efficient and multiuser workloads. Also, to allow access to company and in-office resources for the hybrid workforce, we needed a firewall and VPN config. All of this depended on our server which got delayed.</p>
<p> Apart from this, ready-to-be-deployed applications got delayed which needed to be showcased to investors and clients.</p>
</li>
<li><p>Why not Cloud VM?</p>
<p> Cloud VMs cost money. Given that we don't have production load and revenue yet, it doesn't make sense to invest money in those. Also, we have machine learning workloads that need GPUs, and cloud GPUs are quite expensive. We'd rather invest in our own GPUs and, in case they sit idle for lack of ML workloads, play <a target="_blank" href="https://blog.counter-strike.net/">CS: GO</a> on them.</p>
</li>
</ol>
<hr />
<h1 id="heading-conclusion">Conclusion</h1>
<p>We finally got a static IP from Airtel and set up our systems. Our final two cents: it's fine to try things out up to a certain degree, but know when to stop. Being in the discovery and MVP phase, we got our learnings, got to know some new services out there, and expanded our horizons. But that might not be the case for you, so think it through.</p>
<p>If you're thinking about DDNS, don't just think about the cost of the static IP; think about the person-hours your workforce spends on it. Because you pay them, it's costing you anyway.</p>
<p>Check out <a target="_blank" href="https://atomicasher.com/">Atomic Asher</a> and get in touch. We might be hiring, check our <a target="_blank" href="https://www.linkedin.com/company/atomic-asher-llp/jobs/">jobs page</a>.</p>
<p>This blog is written by <a target="_blank" href="https://wilfredalmeida.com/">Wilfred Almeida</a>; check out his other <a target="_blank" href="https://blog.wilfredalmeida.com/">blogs</a> too.</p>
<blockquote>
<p>Disclaimer:</p>
<p>These are my thoughts, experiences, and opinions. If you disagree with any points above, reach out to me and let's have a discussion.</p>
<p>These thoughts do not reflect the opinions of Atomic Asher LLP.</p>
</blockquote>
]]></content:encoded></item><item><title><![CDATA[Rusly: Rust URL Shortener System Design]]></title><description><![CDATA[Rusly is a URL shortener built using the Rocket framework in Rust.
Checkout RuslyCheckout the Rusly GitHub repo
This blog lays out the system design thought process I used while designing and developing the system.
Read about the Rust syntactical dev...]]></description><link>https://blog.wilfredalmeida.com/rusly</link><guid isPermaLink="true">https://blog.wilfredalmeida.com/rusly</guid><category><![CDATA[Rust]]></category><category><![CDATA[rust lang]]></category><category><![CDATA[Url Shortener]]></category><category><![CDATA[WeMakeDevs]]></category><category><![CDATA[Rocket]]></category><dc:creator><![CDATA[Wilfred Almeida]]></dc:creator><pubDate>Mon, 17 Apr 2023 20:30:06 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1681761418248/64c4cd88-6739-4b24-b0d6-176b55c639a8.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Rusly is a URL shortener built using the Rocket framework in Rust.</p>
<p>Check out <a target="_blank" href="https://wilfredalmeida.github.io/rusly-ui/">Rusly</a><br />Check out the <a target="_blank" href="https://github.com/WilfredAlmeida/rusly">Rusly GitHub repo</a></p>
<p>This blog lays out the system design thought process I used while designing and developing the system.</p>
<p>Read about the Rust syntactical development details <a target="_blank" href="https://github.com/WilfredAlmeida/rusly">here</a>.</p>
<hr />
<h1 id="heading-why-rust">Why Rust?</h1>
<p>I decided to use Rust due to the following reasons:</p>
<ol>
<li><p>I wanted to implement my learning and make a project</p>
</li>
<li><p>I wanted to implement my system design knowledge</p>
</li>
<li><p>Rust promises features like fearless concurrency and memory safety. I wanted to see their viability when implementing production-grade systems.</p>
</li>
</ol>
<hr />
<h1 id="heading-api-endpoints">API Endpoints</h1>
<p>Let's take a quick look at the available endpoints</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Route</td><td>Description</td><td>Request Body</td><td>Response</td></tr>
</thead>
<tbody>
<tr>
<td><code>/</code> GET</td><td>The default home route</td><td><em>None</em></td><td>Hello Heckerr</td></tr>
<tr>
<td><code>/v1/shorten</code> POST</td><td>Takes in the URL to shorten and returns the shortened URL.</td><td>- <code>url_to_shorten</code>: String url to shorten</td><td>- <code>shortened_url</code>: A shortened string URL.</td></tr>
<tr>
<td></td><td></td><td>- <code>custom_link</code>: Optional, strictly 7-character alphabetic custom shortened URL string</td><td>- <code>error</code>: Error message string</td></tr>
<tr>
<td><code>/&lt;short-url&gt;</code> GET</td><td>Permanently redirects to the full URL stored for the short string</td><td><em>None</em></td><td><em>None</em></td></tr>
</tbody>
</table>
</div><h1 id="heading-why-only-2-request-params-for-shorten">Why only 2 request params for <code>/shorten</code>?</h1>
<p>I studied the UX of some popular URL shortener sites and decided to stick to the core purpose of just letting the user shorten the URL along with a custom URL for convenience.</p>
<h2 id="heading-what-happens-when-a-request-to-shorten-a-url-hits">What happens when a request to shorten a URL hits?</h2>
<ol>
<li><p>The URL validity is checked</p>
</li>
<li><p>If the <code>custom_link</code> param is passed, its validity is checked</p>
</li>
<li><p>A random string of 7 alphabetic characters is generated</p>
</li>
<li><p>The string generated in step 3 is stored in the database along with the URL to shorten and a UNIX timestamp</p>
</li>
</ol>
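<p>Step 2's check follows directly from the API table above: a custom link must be strictly 7 alphabetic characters. A sketch in Rust (the function name is mine, not Rusly's actual code):</p>

```rust
// Sketch of step 2: validate the optional custom link, which per the
// API table must be strictly 7 alphabetic characters.
// (`is_valid_custom_link` is an illustrative name.)
fn is_valid_custom_link(link: &str) -> bool {
    link.len() == 7 && link.chars().all(|c| c.is_ascii_alphabetic())
}
```

<p>Anything shorter, longer, or containing digits or hyphens is rejected before touching the database.</p>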
<hr />
<h1 id="heading-database">Database</h1>
<p>Currently, SQLite is used as an embedded database.</p>
<h2 id="heading-database-schema">Database Schema</h2>
<table><tbody><tr><td><p><code>id</code></p></td><td><p>VARCHAR(7) PRIMARY KEY</p></td></tr><tr><td><p><code>fullUrl</code></p></td><td><p>VARCHAR(1024) NOT NULL</p></td></tr><tr><td><p><code>timestamp</code></p></td><td><p>INTEGER NOT NULL</p></td></tr></tbody></table>

<h3 id="heading-schema-description">Schema Description</h3>
<p><code>id</code>: The randomly generated string acts as the shortened URL string. This being a primary key ensures that there are no duplicate short URLs.</p>
<p>The same applies to custom short URLs: a duplicate custom short URL won't be allowed either.</p>
<p>Since the short URL is 7 characters long, there are over 8 billion possible combinations, which is sufficient, and uniqueness is enforced by the primary key.</p>
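<p>The 8-billion figure is easy to verify: 26 lowercase letters across 7 independent positions give 26<sup>7</sup> combinations. In Rust:</p>

```rust
// Number of distinct slugs for an alphabet of `alphabet` characters
// and a slug of `length` characters: alphabet^length.
fn slug_space(alphabet: u64, length: u32) -> u64 {
    alphabet.pow(length)
}
```

<p><code>slug_space(26, 7)</code> evaluates to 8,031,810,176, just over 8 billion.</p>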
<p><code>fullUrl</code>: The full URL string. It is fetched and the user is permanently redirected to it.</p>
<p><code>timestamp</code>: The UNIX timestamp of the entry. It is set in the Rust code in the insert query, so the record's commit time and this timestamp may differ.</p>
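<p>Assembled from the rows above, the schema corresponds to roughly the following DDL (a reconstruction; the table name <code>urls</code> is illustrative, not necessarily what the repo uses):</p>

```sql
CREATE TABLE urls (
    id        VARCHAR(7)    PRIMARY KEY,  -- the random short string
    fullUrl   VARCHAR(1024) NOT NULL,     -- the original URL
    timestamp INTEGER       NOT NULL      -- UNIX timestamp of insertion
);
```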
<hr />
<h2 id="heading-why-not-store-the-full-short-url">Why not store the full short URL?</h2>
<p>To save storage space, only the short string is stored.</p>
<p>The host URL may change or vary, which would cause complexities.</p>
<p>Additional database update operations and compute would be required to update fully stored short URLs.</p>
<p>If this operation fails midway, data inconsistencies will arise.</p>
<p>By only storing the short URL string, a change in the host URL can be handled.</p>
<p>If the service is modified in the future, existing URLs can still be accommodated.</p>
<h2 id="heading-why-sqlite">Why SQLite?</h2>
<p>SQLite is an excellent embedded database and considering the minimal database schema with only one table, it is a great choice.</p>
<p>It takes away the need to manage a dedicated database server, which saves cost.</p>
<p>The following text is from the docs on <a target="_blank" href="https://www.sqlite.org/whentouse.html">when to use SQLite</a></p>
<blockquote>
<p>Generally speaking, any site that gets fewer than 100K hits/day should work fine with SQLite. The 100K hits/day figure is a conservative estimate, not a hard upper bound. SQLite has been demonstrated to work with 10 times that amount of traffic.</p>
</blockquote>
<p>SQLite will handle the load.</p>
<p>Learn more about <a target="_blank" href="https://www.sqlite.org/speed.html">SQLite performance here</a>.</p>
<h2 id="heading-scaling-sqlite">Scaling SQLite</h2>
<h3 id="heading-potential-issues-with-scaling">Potential Issues with Scaling</h3>
<p>Replicated scaling of SQLite seems complex to me and can cause data inconsistencies and data synchronization issues.</p>
<p>If SQLite is set up in a replicated environment then each replica will have its own database and data copies.</p>
<p>If a <code>/shorten</code> write request is handled by replica <code>A</code> and a subsequent read request for that data goes to replica <code>B</code>, then the user will face inconsistency issues; alternatively, the requests need to be served by the same replica, which can overload it.</p>
<h3 id="heading-potential-solution-for-scaling">Potential Solution for Scaling</h3>
<p>If SQLite needs to be scaled, one possible approach is to put the database file on shared network storage, mount it in the replicas, and have the backend Rust replicas access it over the network.</p>
<p>This will increase response time as the file needs to be accessed over a network.</p>
<p>Each replica can have its own cache to avoid network database read calls.</p>
<p>But again, the caching layer is yet another complex component to manage.</p>
<hr />
<h1 id="heading-understanding-the-random-short-url-string">Understanding the random short URL string</h1>
<p>The random short URL is a 7-character lowercase alphabetic string.</p>
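<p>With 26 letters and 7 positions, the keyspace works out to 26<sup>7</sup>, roughly 8 billion distinct strings, which is plenty for a small shortener:</p>

```shell
# 26 choices per position, 7 positions: 26^7 distinct short strings.
echo $((26*26*26*26*26*26*26))   # 8031810176
```
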
<p>Before this, I considered using ULIDs and UUIDs as short URL strings but didn't proceed due to the following factors.</p>
<p>Both ULIDs and UUIDs are 128 bits long, so the generated full-length string cannot be used as a short URL. If only a substring is used, then the computation spent generating the full-length string is wasted.</p>
<p>Both UUID and ULID strings consist of alphanumeric characters, which didn't seem ideal for use as a short URL.</p>
<p>Personally, I feel there's nothing wrong with using an alphanumeric short string, but upon examining popular URL shorteners, I noticed that they don't use alphanumeric characters and thus decided to do the same. My best guess is that they do this from the perspective of user experience.</p>
<p>Following is the existing methodology for generating the random string.</p>
<p>Below is a function that generates a random alphabetic string of a given length. It's Rust code, but fairly easy to understand.</p>
<p>It is provided with a collection of alphabetic characters; for each position it generates a random index and maps it to the corresponding character.</p>
<pre><code class="lang-rust"><span class="hljs-function"><span class="hljs-keyword">fn</span> <span class="hljs-title">generate_shortened_url</span></span>(length: <span class="hljs-built_in">usize</span>) -&gt; <span class="hljs-built_in">String</span> {
    <span class="hljs-keyword">const</span> CHARSET: &amp;[<span class="hljs-built_in">u8</span>] = <span class="hljs-string">b"abcdefghijklmnopqrstuvwxyz"</span>;

    <span class="hljs-comment">// Run the loop `length` times</span>
    (<span class="hljs-number">0</span>..length)
        .map(|_| {
            <span class="hljs-comment">// Generate a random index in 0..CHARSET.len() (upper bound exclusive);</span>
            <span class="hljs-comment">// it must start at 0, not 1, so that 'a' can also be picked</span>
            <span class="hljs-keyword">let</span> index = rand::thread_rng().gen_range(<span class="hljs-number">0</span>..CHARSET.len());

            <span class="hljs-comment">// Pick out the character from the charset at the randomly generated index</span>
            CHARSET[index] <span class="hljs-keyword">as</span> <span class="hljs-built_in">char</span>
        })
        .collect()
}
</code></pre>
<h3 id="heading-performance-uliduuid-vs-random-string-generator">Performance: ULID/UUID vs Random String Generator</h3>
<p>Generating a 7-character random alphabetic string using the Rust function is faster than generating a UUID or a ULID.</p>
<p>This is because the function generates a random string by selecting characters from a pre-defined character set, which involves a simple operation of selecting a random index from the character set and converting it to a character.</p>
<p>In contrast, generating a UUID/ULID involves a more complex process of generating a random 128-bit value and encoding it in a specific format.</p>
<hr />
<h2 id="heading-current-deployment-of-rusly">Current Deployment of Rusly</h2>
<p>Rusly is currently deployed on <a target="_blank" href="https://railway.app">Railway</a> directly from its <a target="_blank" href="https://github.com/WilfredAlmeida/rusly">GitHub repo</a>.</p>
<h2 id="heading-performance-metrics">Performance Metrics</h2>
<p>Metrics are on their way. If you can help with benchmarking, <a target="_blank" href="https://links.wilfredalmeida.com">reach out to me</a>.</p>
<hr />
<h2 id="heading-conclusion">Conclusion</h2>
<p>Kudos to you for reading till the end. That's all for this blog.</p>
<p>I can go on and on about more system design, so <a target="_blank" href="https://links.wilfredalmeida.com">reach out to me</a> if you have something in mind.</p>
<p>Check out my other blogs at <a target="_blank" href="https://blog.wilfredalmeida.com/">blog.wilfredalmeida.com/</a></p>
]]></content:encoded></item><item><title><![CDATA[Running Harbor Locally]]></title><description><![CDATA[If you're willing to contribute to Harbor and/or need its code up & running on your local dev environment then follow along, this guide will get you started.
Harbor is an open-source registry that secures artifacts with policies and role-based access...]]></description><link>https://blog.wilfredalmeida.com/running-harbor-locally</link><guid isPermaLink="true">https://blog.wilfredalmeida.com/running-harbor-locally</guid><category><![CDATA[harbor]]></category><category><![CDATA[Docker]]></category><category><![CDATA[containers]]></category><category><![CDATA[Docker compose]]></category><category><![CDATA[docker-registry]]></category><dc:creator><![CDATA[Wilfred Almeida]]></dc:creator><pubDate>Thu, 30 Mar 2023 09:03:41 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1680155334247/a981d619-ba75-4fd7-8bc9-48c9bac30b6c.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>If you're willing to contribute to Harbor and/or need its code up &amp; running on your local dev environment then follow along, this guide will get you started.</p>
<p><a target="_blank" href="https://goharbor.io/">Harbor</a> is an open-source registry that secures artifacts with policies and role-based access control, ensures images are scanned and free from vulnerabilities, and signs images as trusted.</p>
<p>Here's a <a target="_blank" href="https://youtu.be/1U9v2QmERaA">video</a> I recorded for the same.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1680597208246/8392e747-c944-4a5b-baff-7f88d35c0acb.jpeg" alt class="image--center mx-auto" /></p>
<h3 id="heading-device-hardware-specs-recommended">Device Hardware Specs (Recommended)</h3>
<p>Before you begin the heavy lifting, you need some hardware juice. Following are some recommendations:</p>
<p><em>Processor</em>: 4 Core Intel i5/i7 10th gen plus or equivalent<br /><em>RAM</em>: 8GB workable, more is good<br /><em>Storage</em>: 40-60GB (Docker Images take up space)</p>
<blockquote>
<p>If your device doesn't have at least the above specs, maybe try using a cloud VM or GitHub Codespaces or GitPod. There are some great free trials/referral bonuses on providers👨‍💻</p>
<p>Use these links to get started<br /><a target="_blank" href="https://hetzner.cloud/?ref=pCRmvzCoA0hO">Hetzner</a> (Recommended, I use it. Cheap &amp; Powerful)<br /><a target="_blank" href="https://m.do.co/c/c3c6f5ac727e">DigitalOcean</a></p>
</blockquote>
<h3 id="heading-software-requirements">Software Requirements</h3>
<p>Following is a list of software with versions you need as of March 2023 for version <code>2.7.0</code>. Check the official <a target="_blank" href="https://goharbor.io/docs/2.7.0/build-customize-contribute/compile-guide/">build guide</a> to get the latest version details.</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Software</strong></td><td><strong>Required Version</strong></td></tr>
</thead>
<tbody>
<tr>
<td>docker</td><td>17.05 +</td></tr>
<tr>
<td>docker-compose</td><td>1.18.0 +</td></tr>
<tr>
<td>python</td><td>2.7 +</td></tr>
<tr>
<td>git</td><td>1.9.1 +</td></tr>
<tr>
<td>make</td><td>3.81 +</td></tr>
<tr>
<td>golang*</td><td>1.15.6 +</td></tr>
</tbody>
</table>
</div><p>*optional, required if you use your own Golang environment.</p>
<p>Speaking of OS, you need a Linux-based OS. I tried WSL, but it didn't work; builds get stuck. You can use virtualization with <a target="_blank" href="https://www.vmware.com/">VMware</a> or <a target="_blank" href="https://www.virtualbox.org/">VirtualBox</a> if you're on Windows. Even macOS works.</p>
<p>Test your Go &amp; Docker installations before proceeding.</p>
<hr />
<h2 id="heading-forking-andamp-cloning">Forking &amp; Cloning</h2>
<p>Fork the official <a target="_blank" href="https://github.com/goharbor/harbor/">Harbor</a> repository to your account.</p>
<p>Following is a snippet from the <a target="_blank" href="https://github.com/goharbor/harbor/blob/main/CONTRIBUTING.md">CONTRIBUTING.md</a> file. Read the comments.</p>
<pre><code class="lang-bash"><span class="hljs-comment">#Set golang environment</span>
<span class="hljs-built_in">export</span> GOPATH=<span class="hljs-variable">$HOME</span>/go <span class="hljs-comment"># Add this to your .bashrc/equivalent file</span>
mkdir -p <span class="hljs-variable">$GOPATH</span>/src/github.com/goharbor

<span class="hljs-comment">#Get code</span>
git <span class="hljs-built_in">clone</span> --depth=1 git@github.com:goharbor/harbor.git <span class="hljs-comment"># Add URL of your fork here</span>
<span class="hljs-comment"># Set the depth, you don't need everything</span>
<span class="hljs-built_in">cd</span> <span class="hljs-variable">$GOPATH</span>/src/github.com/goharbor/harbor

<span class="hljs-comment"># The below steps can be performed later, you can skip if you want to</span>

<span class="hljs-comment">#Track repository under your personal account</span>
git config push.default nothing <span class="hljs-comment"># Anything to avoid pushing to goharbor/harbor by default</span>
git remote rename origin goharbor
git remote add <span class="hljs-variable">$USER</span> git@github.com:<span class="hljs-variable">$USER</span>/harbor.git
git fetch <span class="hljs-variable">$USER</span>
</code></pre>
<blockquote>
<p><strong>Note:</strong> GOPATH can be any directory, the example above uses $HOME/go. Change $USER above to your own GitHub username.</p>
</blockquote>
<hr />
<h2 id="heading-prerequisite-yaml-config">Prerequisite YAML Config</h2>
<p>Navigate to the <code>$GOPATH/src/github.com/goharbor/harbor</code> directory. To get Harbor running, a YAML config file first needs to be set up in the <code>make</code> directory.</p>
<p>If you want to take a look at the codebase you can open it in VS Code or your favorite IDE/editor.</p>
<p>A sample YAML config file is provided as <code>make/harbor.yml.tmpl</code>.</p>
<p>Make a copy of this file (you could rename it instead, but I'd recommend making a copy).</p>
<p>The file should be named <code>harbor.yml</code>.</p>
<p>Open the file in your favorite text editor.</p>
<p>The file looks something like this</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1680110671708/187b5c21-347d-4196-9ba3-193b59e92d94.png" alt class="image--center mx-auto" /></p>
<p>Let's understand the config changes needed:</p>
<ul>
<li><p><code>hostname</code>: Line 5. Hostname to access admin UI. You can provide any non-conflicting URL here</p>
</li>
<li><p>HTTP(S) ports: Lines 10 &amp; 15. The HTTP(S) ports that Harbor will run on</p>
</li>
<li><p><code>certificate</code> &amp; <code>private_key</code>: Lines 17, 18. The SSL certificate &amp; key files</p>
</li>
</ul>
<p>Following is how my config looks, followed by an explanation.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1680111020482/b726fe58-a9a5-400d-9d8a-24c07284f56a.jpeg" alt class="image--center mx-auto" /></p>
<ol>
<li><p>The hostname is a demo subdomain on my domain, it's not live. <em>(Don't hit it hehe)</em></p>
</li>
<li><p>My development environment is a server VM so ports 80 &amp; 443 are occupied, so I chose different ports.</p>
</li>
<li><p>The certificate &amp; key paths. Their generation is explained in the following section.</p>
</li>
</ol>
<h3 id="heading-generating-ssl-certificate-andamp-key">Generating SSL Certificate &amp; Key</h3>
<p>To generate them, you need <a target="_blank" href="https://www.openssl.org/">OpenSSL</a> installed. You can generate them any other way as well; if you're unsure, just follow along.</p>
<p>Navigate to the directory where you want the credentials to be generated and execute the following command. Replace <code>demohub.wilfredalmeida.com</code> with the hostname you specified above.</p>
<pre><code class="lang-bash">openssl req -x509 \
            -sha256 -days 365 \
            -nodes \
            -newkey rsa:2048 \
            -subj <span class="hljs-string">"/CN=demohub.wilfredalmeida.com/C=US/L=San Francisco"</span> \
            -keyout rootCA.key -out rootCA.crt
</code></pre>
<p>Two files named <code>rootCA.key</code> and <code>rootCA.crt</code> will be generated, specify their paths accordingly in the config and save and exit.</p>
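<p>To sanity-check the generated certificate, you can inspect its subject with OpenSSL. The snippet below is self-contained: it first generates a throwaway copy in a temp directory, so it assumes nothing beyond an OpenSSL install.</p>

```shell
# Generate a throwaway key/cert pair in a temp dir (mirrors the command above).
tmp=$(mktemp -d)
openssl req -x509 -sha256 -days 365 -nodes -newkey rsa:2048 \
        -subj "/CN=demohub.wilfredalmeida.com/C=US/L=San Francisco" \
        -keyout "$tmp/rootCA.key" -out "$tmp/rootCA.crt" 2>/dev/null

# Print the certificate subject; the CN should match the hostname in harbor.yml.
openssl x509 -in "$tmp/rootCA.crt" -noout -subject
```
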
<hr />
<h2 id="heading-building-and-running-harbor">Building and Running Harbor</h2>
<p>The bare minimum config needed to get Harbor up &amp; running is now complete.</p>
<p>To run harbor, execute the following command in the project root.</p>
<pre><code class="lang-bash">make install
</code></pre>
<p>There are other options as well to run this, check out the <a target="_blank" href="https://goharbor.io/docs/2.7.0/build-customize-contribute/compile-guide/">compile guide</a></p>
<blockquote>
<p>Note: This command will take time to execute. It takes ~25 - 30 mins for me, it depends on your hardware juice. Be patient.</p>
</blockquote>
<p>If you're running on WSL, your build might get stuck at the following stage for which I have no solution</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1680113191716/cc3c0a40-18df-47f6-b28e-01e7837ec6fc.png" alt class="image--center mx-auto" /></p>
<p>Once this build finishes, you'll see a message similar to the following</p>
<pre><code class="lang-sh">...
Start complete. You can visit harbor now.
</code></pre>
<h3 id="heading-verifying-installation">Verifying Installation</h3>
<p>If the build is completed successfully after the eternity it takes, you can load harbor UI in your browser by going to the HTTPS port on your localhost.</p>
<p>Here's an example <code>https://localhost:2400</code>. Note that the URL has HTTPS.</p>
<p>You might get a certificate warning in the browser due to the self-signed certificate, ignore that and proceed.</p>
<p>If Harbor loads in your browser then voila, you did it. Good Job!!🤝</p>
<h3 id="heading-stopping-harbor">Stopping Harbor</h3>
<p>To stop harbor, run the following command. It'll stop all running containers for harbor.</p>
<pre><code class="lang-bash">make down
</code></pre>
<hr />
<h2 id="heading-testing-code-changes">Testing code changes</h2>
<p>If you want to make some code changes and check them, then run the following command</p>
<pre><code class="lang-bash">make versions_prepare compile_core  build -e BUILDTARGET=<span class="hljs-string">"_build_core"</span> -e PULL_BASE_FROM_DOCKERHUB=<span class="hljs-literal">false</span> prepare start
</code></pre>
<p>This command will compile the harbor-core image, build the necessary components for it and redeploy harbor.</p>
<p>If you're working on some other image like <code>harbor-db</code> or <code>harbor-portal</code>, specify it in place of <code>compile_core</code> in the command.</p>
<p>Depending on your changes, the command takes ~2-3 mins to complete for me.</p>
<p>So in comparatively little time, you can see your changes live.</p>
<p>This command is courtesy of <a target="_blank" href="https://twitter.com/wy65701436">Wang Yan</a>. It saves the hassle of executing <code>make install</code>.</p>
<p>25 mins just to see 2 lines of code changes is not a pleasant experience.</p>
<blockquote>
<p>Been there, done that. Not at all a pleasant experience🥹😆</p>
</blockquote>
<hr />
<p>That's all. Hope you're able to get Harbor up &amp; running now.</p>
<p>For any queries or help, join the Harbor Slack channels on the official <a target="_blank" href="http://cloud-native.slack.com">CNCF Slack</a>.</p>
<p>Reach out to me on <a target="_blank" href="https://twitter.com/WilfredAlmeida_">Twitter</a> or elsewhere from my <a target="_blank" href="https://wilfredalmeida.com/">portfolio</a>.</p>
<p>Like my content? Check out my other <a target="_blank" href="https://blog.wilfredalmeida.com/">blogs</a>.</p>
]]></content:encoded></item><item><title><![CDATA[Installing & Setting up ArgoCD on Kubernetes]]></title><description><![CDATA[In simple terms, ArgoCD detects an event and triggers a Kubernetes deployment to achieve a desired state.

This blog is the final piece of the Custom CI/CD Pipeline series for my project ChaturMail: AI Email Generator📧

ArgoCD detects some change in...]]></description><link>https://blog.wilfredalmeida.com/installing-setting-up-argocd-on-kubernetes</link><guid isPermaLink="true">https://blog.wilfredalmeida.com/installing-setting-up-argocd-on-kubernetes</guid><category><![CDATA[WeMakeDevs]]></category><category><![CDATA[ArgoCD]]></category><category><![CDATA[Devops]]></category><category><![CDATA[ci-cd]]></category><category><![CDATA[@wemakedev]]></category><dc:creator><![CDATA[Wilfred Almeida]]></dc:creator><pubDate>Mon, 27 Feb 2023 17:46:09 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1677519407258/40d42a2a-bd4c-4561-9d19-00f52859db0d.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In simple terms, ArgoCD detects an event and triggers a Kubernetes deployment to achieve a desired state.</p>
<blockquote>
<p>This blog is the final piece of the <a target="_blank" href="https://blog.wilfredalmeida.com/series/custom-ci-cd-pipeline">Custom CI/CD Pipeline</a> series for my project <a target="_blank" href="https://play.google.com/store/apps/details?id=com.wilfredalmeida.chaturmail">ChaturMail: AI Email Generator</a>📧</p>
</blockquote>
<p>ArgoCD detects some change in a GitHub repo and triggers a new Kubernetes deployment.</p>
<h2 id="heading-installing-argocd">Installing ArgoCD</h2>
<p>We'll install ArgoCD on Kubernetes using one simple command</p>
<blockquote>
<p>If you made it through the <a target="_blank" href="https://blog.wilfredalmeida.com/hosting-harbor-on-vps-using-nginx">Hosting Harbor on VPS using NGINX as Reverse Proxy</a> part then this is really easy😀</p>
</blockquote>
<p>Before ArgoCD is installed, a namespace needs to be created. Run the following command to create it</p>
<pre><code class="lang-bash">kubectl create namespace argocd
</code></pre>
<p>Now that the namespace is created, ArgoCD can finally be installed. Run the following command to install it</p>
<pre><code class="lang-bash">kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
</code></pre>
<p>You'll see lots of stuff being created. Relax, it is all needed for ArgoCD to function properly.</p>
<p>Meanwhile, you can take a look at the YAML config in the link in the above command.</p>
<p>It will take some time for all the pods to set up and come up; check the status using the following command.</p>
<pre><code class="lang-bash">kubectl get pod -n argocd
</code></pre>
<p>This command will list all existing pods and their status which will look something like this</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1677511495067/029d9777-5bdb-4d0e-be9c-3ae58632a3c1.png" alt class="image--center mx-auto" /></p>
<p>This means that the pods are being created. So just wait.</p>
<blockquote>
<p>While you wait, check out my other <a target="_blank" href="https://blog.wilfredalmeida.com/">blogs</a> hehe🙃</p>
</blockquote>
<p>Once all pods are up &amp; running, the status will look something like this</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1677514856082/97b4e145-df0d-4804-9939-fc3933b00817.png" alt class="image--center mx-auto" /></p>
<p>Hooray!! ArgoCD is now installed.</p>
<h2 id="heading-accessing-argocd-via-web-gui">Accessing ArgoCD via Web GUI</h2>
<p>ArgoCD is exposed via a service. To get all the services, run the following command</p>
<pre><code class="lang-bash">kubectl get svc -n argocd
</code></pre>
<p>This will list all available services in the <code>argocd</code> namespace, which will look something like this</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1677515142762/4e956d63-6b76-4057-af42-60ce045eeaa5.jpeg" alt class="image--center mx-auto" /></p>
<p>The service <code>argocd-server</code> needs to be port forwarded to access ArgoCD. Run the following command to do so</p>
<pre><code class="lang-bash">kubectl port-forward -n argocd svc/argocd-server 8080:443
</code></pre>
<p>This command will set up port forwarding from port <code>443</code> of the <code>argocd-server</code> service to port <code>8080</code> of our localhost.</p>
<p>Now we can access ArgoCD at our localhost port 8080, load one of the following URLs in your browser</p>
<pre><code class="lang-plaintext">http://localhost:8080
http://127.0.0.1:8080
</code></pre>
<p>You will get a warning because the SSL certificate is self-signed, ignore the warning and proceed.</p>
<p>You'll land on a login page that looks something like this</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1677515540366/5712df8b-c393-4de2-92c4-7f648acf177b.png" alt class="image--center mx-auto" /></p>
<p>Here, the username is <code>admin</code> and the password is an auto-generated secret. To get the password, run the following command</p>
<pre><code class="lang-bash">kubectl -n argocd get secret argocd-initial-admin-secret -o yaml
</code></pre>
<p>This will output a password something like this. The password is base64 encoded and we need to decode it before it can be used</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1677515786908/9e1bdb19-c734-4c50-8934-6ac834eea7db.png" alt class="image--center mx-auto" /></p>
<p>Run the following command to decode it; alternatively, you can decode it however you want, you just need a base64 decoder</p>
<pre><code class="lang-bash"><span class="hljs-built_in">echo</span> &lt;super-secret-password-copied-from-above&gt; | base64 --decode
</code></pre>
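<p>As an aside, the decode step is plain base64 and can be sanity-checked with any decoder; here it is demonstrated with a made-up sample value, not a real secret:</p>

```shell
# Decode a base64-encoded value; 'c3VwZXItc2VjcmV0' is a sample, not a real password.
encoded="c3VwZXItc2VjcmV0"
printf '%s' "$encoded" | base64 --decode   # prints: super-secret
```
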
<p>Now the output is your password; keep it safe and log in to the web GUI, which will look something like this</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1677516568622/45fb22fb-8570-471a-b961-86b7777a34ce.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-deploying-a-project-from-gui">Deploying a project from GUI</h2>
<p>Click on create a new app and then fill out all the necessary details like this. The important one here is <code>SYNC POLICY</code> which decides when to sync the changes.</p>
<ul>
<li><p><code>PRUNE RESOURCES</code>: If a resource exists in your git repo and you remove it, it'll be deleted from the cluster as well.</p>
</li>
<li><p><code>SELF HEAL</code>: Only changes via the git repo are allowed; if you make any changes to pods using <code>kubectl</code> or anything else, they'll be reverted.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1677517250455/08df8e26-05c3-4cc0-bb70-307313c20f2d.png" alt class="image--center mx-auto" /></p>
<p>Now you need to provide the git repo to watch &amp; the K8S YAML config to execute. Note that all YAML configs in the specified path will be applied.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1677517593909/8b040816-bd16-4428-8f45-17f011dc2989.png" alt class="image--center mx-auto" /></p>
<p>Now we need to specify the Kubernetes namespace to deploy to</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1677517907110/b0067c45-f98b-4cfe-b3f4-defd7fd3faed.png" alt class="image--center mx-auto" /></p>
<p>That's all, click Create and your app will be created and ArgoCD will check the given git repo every 3 minutes for changes. Once it detects any changes, it'll apply the YAML config from the provided path.</p>
<h2 id="heading-deploying-a-project-from-cli">Deploying a project from CLI</h2>
<p>You can also create an app from CLI, all you need is some YAML like the following. Explaining the directives is out of the scope of this blog, refer to the <a target="_blank" href="https://argo-cd.readthedocs.io">official docs</a>.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">argoproj.io/v1alpha1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Application</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">myapp-argo-application</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">argocd</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">project:</span> <span class="hljs-string">default</span>

  <span class="hljs-attr">source:</span>
    <span class="hljs-attr">repoURL:</span> <span class="hljs-string">https://github.com/WilfredAlmeida/MobXcess-Backend-Golang</span>
    <span class="hljs-attr">targetRevision:</span> <span class="hljs-string">HEAD</span>
    <span class="hljs-attr">path:</span> <span class="hljs-string">dev</span>
  <span class="hljs-attr">destination:</span> 
    <span class="hljs-attr">server:</span> <span class="hljs-string">https://kubernetes.default.svc</span>
    <span class="hljs-attr">namespace:</span> <span class="hljs-string">myapp</span>

  <span class="hljs-attr">syncPolicy:</span>
    <span class="hljs-attr">syncOptions:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">CreateNamespace=true</span>

    <span class="hljs-attr">automated:</span>
      <span class="hljs-attr">selfHeal:</span> <span class="hljs-literal">true</span>
      <span class="hljs-attr">prune:</span> <span class="hljs-literal">true</span>
</code></pre>
<p>To apply this config, you just need to run one command:</p>
<pre><code class="lang-bash">kubectl apply -f myYaml.yaml
</code></pre>
<p>And voila!! that's all!!!</p>
<h3 id="heading-whats-next">What's Next?</h3>
<p>This blog concludes my <a target="_blank" href="https://blog.wilfredalmeida.com/series/custom-ci-cd-pipeline">Custom CI/CD Pipeline</a> series for my project <a target="_blank" href="https://play.google.com/store/apps/details?id=com.wilfredalmeida.chaturmail">ChaturMail: AI Email Generator</a>📧. Reach out to me on <a target="_blank" href="https://twitter.com/WilfredAlmeida_">Twitter</a> or anywhere else if you need any help. Check out my other <a target="_blank" href="https://blog.wilfredalmeida.com">blogs</a> &amp; follow for more.</p>
]]></content:encoded></item><item><title><![CDATA[Hosting Harbor on VPS using NGINX as Reverse Proxy]]></title><description><![CDATA[Harbor is an open-source container registry that can be used to store Docker images. It's an open-source alternative to Docker Hub to host Docker images.
This task is part of the Custom CI/CD Pipeline for the application ChaturMail: AI Email Generato...]]></description><link>https://blog.wilfredalmeida.com/hosting-harbor-on-vps-using-nginx</link><guid isPermaLink="true">https://blog.wilfredalmeida.com/hosting-harbor-on-vps-using-nginx</guid><category><![CDATA[harbor]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Docker]]></category><category><![CDATA[WeMakeDevs]]></category><category><![CDATA[nginx]]></category><dc:creator><![CDATA[Wilfred Almeida]]></dc:creator><pubDate>Sat, 11 Feb 2023 06:59:55 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1676098997087/29326f7b-e9bb-4d86-a6cc-4f1775b0004e.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><a target="_blank" href="https://goharbor.io/">Harbor</a> is an open-source container registry that can be used to store Docker images. It's an open-source alternative to <a target="_blank" href="https://hub.docker.com/">Docker Hub</a> to host Docker images.</p>
<p>This task is part of the <a target="_blank" href="https://blog.wilfredalmeida.com/series/custom-ci-cd-pipeline"><strong>Custom CI/CD Pipeline</strong></a> for the application <a target="_blank" href="https://play.google.com/store/apps/details?id=com.wilfredalmeida.chaturmail"><strong>ChaturMail: AI Email Generator</strong></a></p>
<p>I was trying to store the NodeJS Docker images on Docker Hub for Kubernetes to fetch, but the upload speed for the images was around 500 KBps and my image was over 500 MB in size. Docker Hub also had other restrictions, which led me to discover Harbor as an alternative.</p>
<p><strong><em>Note:</em></strong> <em>Kubernetes is referred to as K8S for short. Why 'K8S'? Because there are 8 characters between K &amp; S.</em></p>
<h2 id="heading-configuring-andamp-installing-harbor">Configuring &amp; Installing Harbor</h2>
<h3 id="heading-getting-the-helm-chart">Getting the Helm Chart</h3>
<p>Harbor is installed on k8s via a Helm chart by Bitnami. First, the Bitnami Helm repo needs to be added as follows</p>
<pre><code class="lang-bash">helm repo add bitnami https://charts.bitnami.com/bitnami
</code></pre>
<p>Now Harbor can be configured. Run the following command to fetch the default YAML config. It is recommended to read and understand the config before proceeding.</p>
<pre><code class="lang-bash">helm show values bitnami/harbor &gt; harbor-values.yaml
</code></pre>
<p>This command will save the config in a file named <code>harbor-values.yaml</code>. It needs to be edited to set some configurations.</p>
<h3 id="heading-helm-chart-configuration">Helm chart configuration</h3>
<p>Find and set the following parameters</p>
<ul>
<li><p><code>service.type: NodePort</code> - The NGINX proxy service type. The default is <code>LoadBalancer</code>, which overtook the NGINX on my VPS so that the whole VPS became a Harbor service, which isn't desired. Harbor needs to be a k8s service on localhost. Read the <a target="_blank" href="https://github.com/bitnami/charts/tree/main/bitnami/harbor/#configure-the-way-how-to-expose-harbor-core">docs</a> to understand more</p>
<p>  This should look like this in your file</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1676032706765/b2677931-8b6a-41ae-bc53-a235813888be.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><code>externalURL: https://hub.example.com</code> - The external URL for Harbor Core service. This is the URL the docker images will be tagged with</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1675400885760/3d3eaecb-976f-4913-9f3c-2c73af144943.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><code>adminPassword: admin</code> - The initial password of Harbor admin. Change it from the portal after launching Harbor</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1675400806796/198fe654-e1f0-4ff8-91d3-fd3a6cb5a3fd.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><code>commonName: hub.example.com</code> - The common name used to generate the self-signed TLS certificates</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1675400592003/faaa55db-ff1d-4ce7-b38a-64e99f0edadd.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<p>That's all for the parameters. Now let's run the installation command</p>
<h3 id="heading-installation">Installation</h3>
<pre><code class="lang-bash">helm install harbor bitnami/harbor --values harbor-values.yaml -n harbor --create-namespace
</code></pre>
<p>The above command will install Harbor; be patient. While it installs, let's understand the command</p>
<h3 id="heading-understanding-the-installation-command">Understanding the installation command</h3>
<p><code>helm install harbor bitnami/harbor</code>: This indicates that we want to install harbor using the bitnami helm chart</p>
<p><code>--values harbor-values.yaml</code>: This specifies the config values. The edited <code>harbor-values.yaml</code> file will be used. If not specified, the helm chart will be installed with default values</p>
<p><code>-n harbor --create-namespace</code>: Specifies the namespace in which harbor will be installed. It's 'harbor' in this case. The <code>--create-namespace</code> option will create the namespace if it doesn't exist already</p>
<p>To see the install progress, check the pod status</p>
<pre><code class="lang-bash">kubectl get pods -n harbor
</code></pre>
<p>Check the created services; a service named <code>harbor</code> will be created with an IP. This is the IP of the main Harbor service. Note that the IP may vary.</p>
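<p>The services can be listed with kubectl, using the <code>harbor</code> namespace created above:</p>
<pre><code class="lang-bash"># List the services Harbor created in the "harbor" namespace
kubectl get svc -n harbor
</code></pre>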
<h3 id="heading-accessing-installed-harbor">Accessing installed harbor</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1676088863029/63a0c9f4-446f-410d-b8d6-3a1e4aaae993.png" alt class="image--center mx-auto" /></p>
<p>If you load the IP in a browser or curl it, you'll get the harbor dashboard</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1676088931731/54d86358-c0ee-438e-aae4-00b71dff7daf.png" alt class="image--center mx-auto" /></p>
<p><strong>Note:</strong> The IP is local, it won't load if you try to load your VPS's IP.</p>
<p>You might get an SSL warning from the browser. This is because the SSL certificate is self-signed and in the following section we'll make it trusted.</p>
<p>Enter the credentials set earlier and you'll be logged in to the dashboard.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1676089248291/7fc576bf-5cf2-4e2b-89dd-a02d2f8d8604.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-creating-a-new-harbor-project">Creating a new harbor project</h3>
<p>Create a new project with the following config</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1676096218315/431213b9-96f9-4789-b12c-d433ac526228.png" alt class="image--center mx-auto" /></p>
<p>Click on the created project and click on the 'PUSH COMMAND' option, it'll show sample commands to tag &amp; push images</p>
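<p>As a rough sketch, the commands shown there look like the following; the project and image names here are placeholders, so substitute your own:</p>
<pre><code class="lang-bash"># Tag a local image for the Harbor project, then push it
# "myproject" and "my-app" are placeholder names
docker tag my-app:latest hub.example.com/myproject/my-app:latest
docker push hub.example.com/myproject/my-app:latest
</code></pre>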
<h2 id="heading-making-harbor-available">Making harbor available</h2>
<p>Firstly, the <code>commonName</code> parameter set above needs to be mapped to the IP address of the service. To do so, add it to the <code>/etc/hosts</code> file: open the file with your favorite editor and add the following line, replacing the IP address with the IP of your Harbor service.</p>
<pre><code class="lang-plaintext">127.0.0.1    localhost
127.0.1.1    v2202210144615205508.goodsrv.de v2202210144615205508

# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
89.58.55.191    v2202210144615205508.goodsrv.de v2202210144615205508

10.43.248.134 hub.example.com
</code></pre>
<p>Now, if you load/curl 'hub.example.com', Harbor will load. Note that this works only on your VPS and not over the internet. You might see an SSL certificate error because the certificate is self-signed; more on this in the following section.</p>
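<p>For a quick check from the VPS itself, curl can be told to tolerate the self-signed certificate:</p>
<pre><code class="lang-bash"># -k / --insecure skips certificate verification (the cert is self-signed)
curl -k https://hub.example.com
</code></pre>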
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1676096924462/83bb3b7c-603f-44f7-be31-8409a2ed94f4.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-trusting-harbor-certificate">Trusting harbor certificate</h2>
<p>The docker daemon needs to trust the certificate. To do this, edit <code>/etc/docker/daemon.json</code> with your favorite editor and add the following config</p>
<pre><code class="lang-json">{
    <span class="hljs-attr">"insecure-registries"</span>:[<span class="hljs-string">"hub.example.com"</span>]
}
</code></pre>
<p>After saving this file, restart docker. If you have systemctl then use the following command</p>
<pre><code class="lang-bash">systemctl restart docker
</code></pre>
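<p>After the restart, you can confirm that the registry was picked up:</p>
<pre><code class="lang-bash"># The registry should appear under "Insecure Registries" in the docker info output
docker info | grep -A 3 'Insecure Registries'
</code></pre>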
<p>Now if you try to push, docker will give an unauthorized error. To solve this, perform a docker login using the following command</p>
<pre><code class="lang-bash">docker login hub.example.com
</code></pre>
<p>Enter your credentials and you'll be logged in; now you can push images. Make sure your image is properly tagged.</p>
<p>Check your harbor dashboard and you'll find the image.</p>
<h2 id="heading-nginx-config">NGINX Config</h2>
<p>Now you can use harbor. To access it via the internet, it needs to be exposed via NGINX. The following config will do it</p>
<pre><code class="lang-nginx">server{

    <span class="hljs-attribute">listen</span> <span class="hljs-number">443</span> ssl;

    <span class="hljs-attribute">server_name</span> hub.example.com;

    <span class="hljs-attribute">location</span> / {

        <span class="hljs-attribute">proxy_pass</span> https://hub.example.com;
        <span class="hljs-attribute">proxy_http_version</span> <span class="hljs-number">1</span>.<span class="hljs-number">1</span>;
        <span class="hljs-attribute">proxy_set_header</span>   Host               <span class="hljs-variable">$host</span>:<span class="hljs-variable">$server_port</span>;
        <span class="hljs-attribute">proxy_set_header</span>   X-Real-IP          <span class="hljs-variable">$remote_addr</span>;
        <span class="hljs-attribute">proxy_set_header</span>   X-Forwarded-For    <span class="hljs-variable">$proxy_add_x_forwarded_for</span>;
        <span class="hljs-attribute">proxy_set_header</span>   X-Forwarded-Proto  <span class="hljs-variable">$scheme</span>;

    }
}
</code></pre>
<p>You might need to adjust the config for your environment.</p>
<p>Note that the value given to the <code>proxy_pass</code> directive is the URL set in the <code>/etc/hosts</code> file. It doesn't point to a URL on the internet, it gets resolved locally.</p>
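<p>You can confirm the local resolution with <code>getent</code> (the hostname here is this guide's example value):</p>
<pre><code class="lang-bash"># Should print the IP from /etc/hosts, not a public DNS answer
getent hosts hub.example.com
</code></pre>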
<h3 id="heading-enabling-ssl">Enabling SSL</h3>
<p>My domain &amp; VPS are mapped via Cloudflare, and I use certbot. If you're using certbot, the following command will generate an SSL certificate, given that the IP of your VPS is added as an A record in the DNS settings of your domain</p>
<pre><code class="lang-bash">certbot --nginx -d hub.example.com
</code></pre>
<h3 id="heading-conclusion">Conclusion</h3>
<p>That's all it takes to get harbor up and running. If you need any help/want to connect with me, reach out on <a target="_blank" href="https://twitter.com/WilfredAlmeida_">Twitter</a>.</p>
<p>Check out the pipeline <a target="_blank" href="https://blog.wilfredalmeida.com/custom-ci-cd-pipeline"><strong>series</strong></a> of blogs.</p>
<p>Check out <a target="_blank" href="https://play.google.com/store/apps/details?id=com.wilfredalmeida.chaturmail"><strong>ChaturMail: AI Email Generator</strong></a></p>
<p><em>Keep your secrets safe :-)</em></p>
]]></content:encoded></item><item><title><![CDATA[GitHub Action to Publish Docker Images on Harbor]]></title><description><![CDATA[This task is part of the Custom CI/CD Pipeline for the application ChaturMail: AI Email Generator
The GitHub action does the following tasks:

Builds & Pushes Docker Image to Harbor

Updates ArgoCD Config


tl;dr following is the complete YAML for the...]]></description><link>https://blog.wilfredalmeida.com/github-action-to-publish-docker-images-on-harbor</link><guid isPermaLink="true">https://blog.wilfredalmeida.com/github-action-to-publish-docker-images-on-harbor</guid><category><![CDATA[harbor]]></category><category><![CDATA[WeMakeDevs]]></category><category><![CDATA[github-actions]]></category><category><![CDATA[GitHub Actions]]></category><category><![CDATA[ci-cd]]></category><dc:creator><![CDATA[Wilfred Almeida]]></dc:creator><pubDate>Wed, 11 Jan 2023 18:24:35 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1673461161991/3526d905-d2ce-4de8-bf80-575faaa76c0e.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This task is part of the <a target="_blank" href="https://blog.wilfredalmeida.com/series/custom-ci-cd-pipeline">Custom CI/CD Pipeline</a> for the application <a target="_blank" href="https://play.google.com/store/apps/details?id=com.wilfredalmeida.chaturmail">ChaturMail: AI Email Generator</a></p>
<p>The GitHub action does the following tasks:</p>
<ol>
<li><p>Builds &amp; Pushes Docker Image to Harbor</p>
</li>
<li><p>Updates ArgoCD Config</p>
</li>
</ol>
<p>tl;dr following is the complete YAML for the action followed by an explanation</p>
<pre><code class="lang-yaml"><span class="hljs-attr">name:</span> <span class="hljs-string">BuildAndPushImageOnHarborAndUpdateArgoCDConfig</span>

<span class="hljs-attr">on:</span>
  <span class="hljs-attr">push:</span>
    <span class="hljs-attr">branches:</span> [ <span class="hljs-string">"master"</span> ]

<span class="hljs-attr">jobs:</span>
  <span class="hljs-attr">build:</span>
    <span class="hljs-attr">runs-on:</span> <span class="hljs-string">ubuntu-latest</span>
    <span class="hljs-attr">steps:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">uses:</span> <span class="hljs-string">docker/login-action@v1</span>
      <span class="hljs-attr">with:</span>
        <span class="hljs-attr">registry:</span> <span class="hljs-string">harbor.example.com</span>
        <span class="hljs-attr">username:</span> <span class="hljs-string">${{</span> <span class="hljs-string">secrets.HARBOR_USERNAME</span>  <span class="hljs-string">}}</span>
        <span class="hljs-attr">password:</span> <span class="hljs-string">${{</span> <span class="hljs-string">secrets.HARBOR_PASSWORD</span> <span class="hljs-string">}}</span>

    <span class="hljs-bullet">-</span> <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/checkout@v3</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">BuildAndPushImageOnHarbor</span>
      <span class="hljs-attr">run:</span> <span class="hljs-string">|
        docker build ./ -t harbor.example.com/chaturmail/chaturmail-backend:${{ github.run_number }}
        docker push harbor.example.com/chaturmail/chaturmail-backend:${{ github.run_number }}
</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Clone</span> <span class="hljs-string">Repository</span>
      <span class="hljs-attr">run:</span> <span class="hljs-string">|
        git clone &lt;argocd-config-repo-url&gt;
</span>    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Install</span> <span class="hljs-string">yq</span>
      <span class="hljs-attr">run:</span> <span class="hljs-string">|
        sudo wget -qO /usr/local/bin/yq https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64
        sudo chmod a+x /usr/local/bin/yq
</span>    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Update</span> <span class="hljs-string">YAML</span> <span class="hljs-string">File</span>
      <span class="hljs-attr">run:</span> <span class="hljs-string">|
        yq -i '.spec.template.spec.containers[0].image = "harbor.example.com/chaturmail/chaturmail-backend:${{ github.run_number }}"' 'argocd-configs/chaturmail-pod.yaml'
</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Push</span> <span class="hljs-string">to</span> <span class="hljs-string">Repo</span>
      <span class="hljs-attr">run:</span> <span class="hljs-string">|
        git config --global user.name "${{secrets.USERNAME_GITHUB}}"
        git config --global user.email "${{secrets.EMAIL_GITHUB}}"
        cd argocd-configs
        git add .
        git commit -m "Updated by GitHub Actions"
        git push &lt;argocd-config-repo-url&gt; --all</span>
</code></pre>
<h2 id="heading-building-and-pushing-docker-image-to-harbor">Building and Pushing Docker Image to Harbor</h2>
<p>Harbor is self-hosted on a VPS server and is exposed as a registry.</p>
<h3 id="heading-logging-in-to-harbor">Logging in to Harbor</h3>
<p>Login is done using the docker login action. The registry URI, username, and password need to be provided. Since Harbor is a docker registry the login works the same as for Docker Hub.</p>
<pre><code class="lang-yaml">    <span class="hljs-bullet">-</span> <span class="hljs-attr">uses:</span> <span class="hljs-string">docker/login-action@v1</span>
      <span class="hljs-attr">with:</span>
        <span class="hljs-attr">registry:</span> <span class="hljs-string">harbor.example.com</span>
        <span class="hljs-attr">username:</span> <span class="hljs-string">${{</span> <span class="hljs-string">secrets.HARBOR_USERNAME</span>  <span class="hljs-string">}}</span>
        <span class="hljs-attr">password:</span> <span class="hljs-string">${{</span> <span class="hljs-string">secrets.HARBOR_PASSWORD</span> <span class="hljs-string">}}</span>
</code></pre>
<h3 id="heading-building-amp-pushing-the-image">Building &amp; Pushing the Image</h3>
<p>Shell commands are executed to build and push the image.</p>
<p>The image is tagged with a string of the format<br /><code>&lt;registry-uri&gt;/&lt;harbor-project-name&gt;/&lt;image-name&gt;:&lt;github.run_number&gt;</code></p>
<p>The <code>github.run_number</code> is added to keep the tags unique for each image.</p>
<p>Since the action checks out the <code>master</code> branch, the build context is the repo root <code>./</code></p>
<pre><code class="lang-yaml">    <span class="hljs-bullet">-</span> <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/checkout@v3</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">BuildAndPushImageOnHarbor</span>
      <span class="hljs-attr">run:</span> <span class="hljs-string">|
        docker build ./ -t harbor.example.com/chaturmail/chaturmail-backend:${{ github.run_number }}
        docker push harbor.example.com/chaturmail/chaturmail-backend:${{ github.run_number }}</span>
</code></pre>
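<p>To make the tag format concrete, here is an illustration with hypothetical values; in the action itself, <code>${{ github.run_number }}</code> supplies the final component:</p>
<pre><code class="lang-bash"># Illustration only; the values mirror the action above
REGISTRY=harbor.example.com
PROJECT=chaturmail
IMAGE=chaturmail-backend
RUN_NUMBER=42   # supplied by ${{ github.run_number }} in the action
echo "$REGISTRY/$PROJECT/$IMAGE:$RUN_NUMBER"
# prints harbor.example.com/chaturmail/chaturmail-backend:42
</code></pre>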
<h2 id="heading-updating-argocd-config">Updating ArgoCD Config</h2>
<p>YAML config for ArgoCD to execute is maintained in a GitHub repo. ArgoCD watches the repo. Any changes to the master branch of the config repo trigger a deployment on the server.</p>
<p>The action updates the image name in the YAML config. ArgoCD detects this change and does a deployment using the config file with the new image name.</p>
<p>A shell utility <code>yq</code> is used to edit YAML.</p>
<pre><code class="lang-yaml">    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Clone</span> <span class="hljs-string">Repository</span>
      <span class="hljs-attr">run:</span> <span class="hljs-string">|
        git clone &lt;argocd-config-repo-url&gt;
</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Install</span> <span class="hljs-string">yq</span>
      <span class="hljs-attr">run:</span> <span class="hljs-string">|
        sudo wget -qO /usr/local/bin/yq https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64
        sudo chmod a+x /usr/local/bin/yq
</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Update</span> <span class="hljs-string">YAML</span> <span class="hljs-string">File</span>
      <span class="hljs-attr">run:</span> <span class="hljs-string">|</span>
        <span class="hljs-string">yq</span> <span class="hljs-string">-i</span> <span class="hljs-string">'.spec.template.spec.containers[0].image = "harbor.example.com/chaturmail/chaturmail-backend:$<span class="hljs-template-variable">{{ github.run_number }}</span>"'</span> <span class="hljs-string">'argocd-configs/chaturmail-pod.yaml'</span>
</code></pre>
<h2 id="heading-pushing-argocd-config-changes">Pushing ArgoCD config changes</h2>
<p>Shell commands configure the git identity, stage the changes, commit, and push them to the config repo</p>
<pre><code class="lang-yaml">    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Push</span> <span class="hljs-string">to</span> <span class="hljs-string">Repo</span>
      <span class="hljs-attr">run:</span> <span class="hljs-string">|
        git config --global user.name "${{secrets.USERNAME_GITHUB}}"
        git config --global user.email "${{secrets.EMAIL_GITHUB}}"
        cd argocd-configs
        git add .
        git commit -m "Updated by GitHub Actions"
        git push &lt;argocd-config-repo-url&gt; --all</span>
</code></pre>
<p>That's all. Check out the pipeline <a target="_blank" href="https://blog.wilfredalmeida.com/custom-ci-cd-pipeline">series</a> of blogs.</p>
<p>Check out <a target="_blank" href="https://play.google.com/store/apps/details?id=com.wilfredalmeida.chaturmail">ChaturMail: AI Email Generator</a></p>
<p><em>Keep your secrets safe :-)</em></p>
]]></content:encoded></item><item><title><![CDATA[Auto-pull GitHub repo via Webhook on VPS]]></title><description><![CDATA[Webhooks are automated messages sent when something happens. Certain actions performed on a GitHub repo like push, pull request, star, etc. can trigger a webhook i.e. send a message about the occurred event.
My portfolio and resume are hosted on my V...]]></description><link>https://blog.wilfredalmeida.com/auto-pull-github-repo-via-webhook-on-vps</link><guid isPermaLink="true">https://blog.wilfredalmeida.com/auto-pull-github-repo-via-webhook-on-vps</guid><category><![CDATA[GitHub]]></category><category><![CDATA[webhooks]]></category><category><![CDATA[VPS Hosting]]></category><category><![CDATA[WeMakeDevs]]></category><dc:creator><![CDATA[Wilfred Almeida]]></dc:creator><pubDate>Thu, 29 Dec 2022 14:04:50 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1672322590227/63e21c68-485d-44bb-bb37-816d5aa039c6.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Webhooks are automated messages sent when something happens. Certain actions performed on a GitHub repo like push, pull request, star, etc. can trigger a webhook i.e. send a message about the occurred event.</p>
<p>My portfolio and resume are hosted on my VPS; every time I made changes to my resume LaTeX, I had to manually log in to the server and update the PDF.</p>
<p>To automate this, I set up a GitHub Action that compiles the updated LaTeX into a PDF and commits the PDF to the repo.</p>
<p>Now to auto-fetch the new PDF on the server, I wrote an Express API in TypeScript that takes a pull of the repo.</p>
<p>Following is its code</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> express, {Application, Request, Response, NextFunction} <span class="hljs-keyword">from</span> <span class="hljs-string">'express'</span>;
<span class="hljs-keyword">import</span> {resolve} <span class="hljs-keyword">from</span> <span class="hljs-string">'node:path'</span>;
<span class="hljs-keyword">import</span> {execSync} <span class="hljs-keyword">from</span> <span class="hljs-string">'child_process'</span>;

<span class="hljs-comment">//Express App</span>
<span class="hljs-keyword">const</span> app: Application = express();

<span class="hljs-comment">//Webhook endpoint.</span>
<span class="hljs-comment">//Gets triggered by GitHub webhook</span>
app.post(<span class="hljs-string">'/'</span>,<span class="hljs-function">(<span class="hljs-params">req: Request,res: Response</span>)=&gt;</span>{

    cloneRepo();

    res.send(<span class="hljs-string">'Pulled the repo'</span>);
})

<span class="hljs-comment">//Server listening on port 5000</span>
app.listen(<span class="hljs-number">5000</span>, <span class="hljs-function">()=&gt;</span>{
    <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"Server Running on port 5000"</span>);
})

<span class="hljs-comment">//Function to execute the git pull shell command</span>
<span class="hljs-keyword">const</span> cloneRepo = <span class="hljs-function">()=&gt;</span>{
    execSync(<span class="hljs-string">'git pull origin main'</span>, {
        stdio: [<span class="hljs-number">0</span>, <span class="hljs-number">1</span>, <span class="hljs-number">2</span>], 
        cwd: resolve(__dirname, <span class="hljs-string">'../../resume-files'</span>),
      })
}
</code></pre>
<p>The API gets triggered by GitHub when a push is made to my repo and it executes a shell command that pulls the changes from the source repo.</p>
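<p>Before pointing GitHub at it, the endpoint can be exercised locally (assuming the server is running on port 5000):</p>
<pre><code class="lang-bash"># Simulate the webhook call GitHub will make
curl -X POST http://localhost:5000/
</code></pre>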
<p>The snippet can be customized to perform any action.</p>
<p>Started it in the background using <a target="_blank" href="https://www.npmjs.com/package/pm2">PM2</a> and it works perfectly.</p>
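<p>For reference, a PM2 invocation might look like the following; the entry path and process name here are assumptions, so adjust them to your build output:</p>
<pre><code class="lang-bash"># Hypothetical path and name; adjust to your compiled output
pm2 start dist/index.js --name resume-webhook
pm2 save   # persist the process list across reboots
</code></pre>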
<p><strong>Note:</strong> I've used NGINX as a reverse proxy to expose the API to the internet.</p>
]]></content:encoded></item><item><title><![CDATA[CI/CD Pipeline using GitHub Actions, Harbor Container Registry, ArgoCD, Kubernetes, and NGINX [Overview]]]></title><description><![CDATA[This pipeline is implemented in the backend system of ChaturMail: AI Email Generator
Understanding Overall Pipeline Flow


Code changes are pushed to the master branch on GitHub

The repo has a Dockerfile. GitHub Actions does the following tasks

Bui...]]></description><link>https://blog.wilfredalmeida.com/custom-ci-cd-pipeline</link><guid isPermaLink="true">https://blog.wilfredalmeida.com/custom-ci-cd-pipeline</guid><category><![CDATA[ci-cd]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[WeMakeDevs]]></category><category><![CDATA[BlogsWithCC]]></category><dc:creator><![CDATA[Wilfred Almeida]]></dc:creator><pubDate>Thu, 29 Dec 2022 06:50:17 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1672642634207/b2cdd1d9-a67b-4af5-a729-7bc931652218.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This pipeline is implemented in the backend system of <a target="_blank" href="https://play.google.com/store/apps/details?id=com.wilfredalmeida.chaturmail">ChaturMail: AI Email Generator</a></p>
<h2 id="heading-understanding-overall-pipeline-flow">Understanding Overall Pipeline Flow</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1672482313295/bcb62eb1-070e-4f8b-9dab-8a4b76ab8bad.png" alt class="image--center mx-auto" /></p>
<ol>
<li><p>Code changes are pushed to the master branch on GitHub</p>
</li>
<li><p>The repo has a Dockerfile. GitHub Actions does the following tasks</p>
<ul>
<li><p>Build Docker Image</p>
</li>
<li><p>Push the image to Harbor Container Registry</p>
</li>
<li><p>Update YAML config being watched by ArgoCD</p>
</li>
</ul>
</li>
<li><p>The <a target="_blank" href="https://goharbor.io/">Harbor Container Registry</a> is hosted on my VPS, reverse proxied by NGINX, and domain mapped by Cloudflare. The GH action uploads the built image here</p>
</li>
<li><p>ArgoCD is installed on the VPS and watches a repo of YAML configs for Kubernetes. Any changes to the configs trigger a deployment by ArgoCD. The GH action updates the image tag in the YAML config</p>
</li>
<li><p>ArgoCD detects the change in YAML and initiates a K8S deployment</p>
</li>
<li><p>The docker image needed by K8S pods is fetched locally from Harbor</p>
</li>
<li><p>Kubernetes deploys and manages the pods with respect to restarting, respawning, and load balancing</p>
</li>
</ol>
<h2 id="heading-role-of-nginx">Role of NGINX</h2>
<p><a target="_blank" href="https://www.nginx.com/">NGINX</a> acts as a gateway to the VPS. All services are exposed to the internet through NGINX. In the pipeline, NGINX plays the following roles:</p>
<ul>
<li><p>Reverse proxies Harbor</p>
</li>
<li><p>Reverse proxies ChaturMail backend</p>
</li>
</ul>
<h2 id="heading-application-exposure">Application Exposure</h2>
<p>The backend system for ChaturMail is served by a Kubernetes load balancer service which gets an internal local-only IP address. This service is then reverse proxied by NGINX.</p>
<h2 id="heading-role-of-cloudflare">Role of Cloudflare</h2>
<p>The services and applications exposed by the VPS are all configured to work with subdomains of my main domain <a target="_blank" href="https://wilfredalmeida.com/">wilfredalmeida.com</a></p>
<p>The domain is managed by Cloudflare and its IP address is proxied, which hides the original VPS IP and enables Cloudflare's analytics and security services.</p>
]]></content:encoded></item><item><title><![CDATA[Appwrite using NGINX as Reverse Proxy]]></title><description><![CDATA[Appwrite is a self-hosted backend-as-a-service platform that provides developers with all the core APIs required to build any application.
In the previous post, I discussed how to deploy Appwrite on Google Cloud. Now we'll see how to deploy on our Vi...]]></description><link>https://blog.wilfredalmeida.com/appwrite-using-nginx-as-reverse-proxy</link><guid isPermaLink="true">https://blog.wilfredalmeida.com/appwrite-using-nginx-as-reverse-proxy</guid><category><![CDATA[Appwrite]]></category><category><![CDATA[nginx]]></category><category><![CDATA[Reverse Proxy]]></category><category><![CDATA[Docker]]></category><category><![CDATA[WeMakeDevs]]></category><dc:creator><![CDATA[Wilfred Almeida]]></dc:creator><pubDate>Sat, 03 Dec 2022 04:51:13 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1670042501965/32027b36-9b17-41d9-aa01-ca69dea4c249.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><a target="_blank" href="https://appwrite.io/"><strong>Appwrite</strong></a> is a self-hosted backend-as-a-service platform that provides developers with all the core APIs required to build any application.</p>
<p>In the previous post, I discussed how to deploy <a target="_blank" href="https://blog.wilfredalmeida.com/appwrite-on-google-cloud">Appwrite on Google Cloud</a>. Now we'll see how to deploy on our Virtual Private Server (VPS).</p>
<p>VPS servers usually host many services, so we won't deploy Appwrite on the default ports HTTP (80) and HTTPS (443). If deployed on the defaults, either Appwrite will take over and all other services will go down, or the other services will prevent Appwrite from binding to these ports.</p>
<p>Instead, we'll deploy it on localhost and serve it using NGINX as a reverse proxy.</p>
<h3 id="heading-install-appwrite">Install Appwrite</h3>
<p>First, we'll install Appwrite on our server. Appwrite comes as a set of Docker containers. So Docker needs to be installed mandatorily. Depending on your server OS, install Docker by referring to the <a target="_blank" href="https://docs.docker.com/engine/install/">docs</a>.</p>
<p>Once Docker is installed, it takes only one command to get Appwrite installed.</p>
<p><strong>PS</strong>: Don't just paste the command and accept every default. Some configs need to be changed during installation.</p>
<p>After running the command, it'll pull (download) some container images; how long this takes depends on your internet speed, so be patient.</p>
<pre><code class="lang-bash">sudo docker run -it --rm \
    --volume /var/run/docker.sock:/var/run/docker.sock \
    --volume <span class="hljs-string">"<span class="hljs-subst">$(pwd)</span>"</span>/appwrite:/usr/src/code/appwrite:rw \
    --entrypoint=<span class="hljs-string">"install"</span> \
    appwrite/appwrite:1.1.1
</code></pre>
<p>Learn more about Appwrite installation from the <a target="_blank" href="https://appwrite.io/docs/installation">docs</a>.</p>
<p>While installing, Appwrite asks for HTTP and HTTPS ports for it to function which default to 80 and 443.</p>
<p>These need to be changed; if kept at the defaults, all traffic sent to the VPS will be received by Appwrite and none of the other hosted services will work.</p>
<p>So in the prompts, as shown below, enter any ports from <code>1025 - 65535</code>. Note that these ports need to be unique and must not conflict with ports used by other services, so choose accordingly.</p>
<p><strong>PS</strong>: Don't choose common values like <code>3000</code>, <code>8000</code>, or <code>8080</code> because a lot of services default to these ports and will cause you problems later.</p>
<p>In our case let's choose ports <code>2021 -&gt; HTTP</code> and <code>2022 -&gt; HTTPS</code>. Make sure no other service is running on the ports you've chosen.</p>
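<p>To check that nothing is already listening on the chosen ports, something like the following works on most Linux servers:</p>
<pre><code class="lang-bash"># Prints a line for each chosen port already in use; otherwise reports them free
ss -tln 2&gt;/dev/null | grep -E ':(2021|2022) ' || echo "ports 2021 and 2022 are free"
</code></pre>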
<p><strong>Note</strong>: From here on, port 2021 is referenced because it's what I chose. Make sure you use the port you specified.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1670036209024/8177cee9-7923-4810-87f2-6bfe30bf9f5b.png" alt class="image--center mx-auto" /></p>
<p>All other values can be kept at their defaults for now but should be changed in production. Learn more about them in the <a target="_blank" href="https://appwrite.io/docs/installation">docs</a></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1670036296684/8894b5b7-61de-4926-aeff-95f572e4966e.png" alt class="image--center mx-auto" /></p>
<p>Once all values are provided it'll take some time and Appwrite will be installed.</p>
<p>The console will load on the HTTP port specified above. To check if the console loads properly you can load it in the browser if your server has GUI or use <code>curl</code> as follows</p>
<pre><code class="lang-bash">curl http://localhost:2021/
</code></pre>
<p>If you see HTML code in your console then Appwrite has been installed successfully.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1670037259985/7ff4285b-0933-416a-9f02-10e36f29b805.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-expose-by-opening-a-port">Expose by opening a port</h3>
<p>Now, to serve it over the internet, you can either open the port on your server or use a reverse proxy. It might also be directly available on <code>ip:2021</code> depending on your server's config.</p>
<p>To open the port you need to allow it in your firewall/iptables. If you're using ufw then the following commands should work</p>
<pre><code class="lang-bash">sudo ufw allow 2021
sudo ufw reload
</code></pre>
<p>Now if you enter your server <code>ip:port</code> on your browser, Appwrite console will load. For eg. <code>172.0.0.45:2021</code></p>
<p>I'd recommend not opening any port and using a reverse proxy as it provides better control and security.</p>
<h3 id="heading-reverse-proxy-using-nginx">Reverse Proxy using NGINX</h3>
<p>To serve using a reverse proxy you'd usually need a domain configured pointing to your server.</p>
<p>NGINX configs are stored in the directory <code>/etc/nginx/sites-available</code>. Here you can create a new config file or write to an existing one.</p>
<p>Using your favorite editor open the file. Here we'll be creating a new file named <code>appwrite</code> using <code>vi</code></p>
<pre><code class="lang-bash">sudo vi /etc/nginx/sites-available/appwrite
</code></pre>
<p>Write the NGINX config as follows</p>
<pre><code class="lang-nginx">server{
    <span class="hljs-attribute">listen</span> <span class="hljs-number">80</span>;
    <span class="hljs-attribute">server_name</span> example.com www.example.com;

    <span class="hljs-attribute">location</span> / {
        <span class="hljs-attribute">proxy_pass</span> http://localhost:2021;
        <span class="hljs-attribute">proxy_http_version</span> <span class="hljs-number">1</span>.<span class="hljs-number">1</span>;
        <span class="hljs-attribute">proxy_set_header</span>   Host               <span class="hljs-variable">$host</span>:<span class="hljs-variable">$server_port</span>;
        <span class="hljs-attribute">proxy_set_header</span>   X-Real-IP          <span class="hljs-variable">$remote_addr</span>;
        <span class="hljs-attribute">proxy_set_header</span>   X-Forwarded-For    <span class="hljs-variable">$proxy_add_x_forwarded_for</span>;
        <span class="hljs-attribute">proxy_set_header</span>   X-Forwarded-Proto  <span class="hljs-variable">$scheme</span>;

    }
}
</code></pre>
<p><em>If you're facing problems while pasting in the editor or exiting it, please learn more about it :)</em></p>
<p>To exit the vi editor do the following steps:</p>
<ul>
<li><p>Press Escape key</p>
</li>
<li><p>Type <code>:wq</code></p>
</li>
<li><p>Press Enter key</p>
</li>
</ul>
<p>Verify the config is syntactically correct using the command</p>
<pre><code class="lang-bash">sudo nginx -t
</code></pre>
<p>If the test passes, restart NGINX to apply the config. Following is the command using systemctl</p>
<pre><code class="lang-bash">sudo systemctl restart nginx
</code></pre>
<p>That's all, the Appwrite console will now load on the domain you specified.</p>
<p>Note the <code>proxy_pass http://localhost:2021;</code> directive. It is the core of the reverse proxy config: all traffic received on the domain you specified is forwarded to port 2021, where Appwrite is running. Learn more about NGINX from the <a target="_blank" href="http://nginx.org/en/docs/">docs</a></p>
<p>If your domain is a subdomain, just enter it as the <code>server_name</code> value in the config. Also, note the way it's entered above: without the <code>http(s)://</code> scheme and without a trailing <code>/</code></p>
<p><strong>Note</strong>: If your domain is set to enforce HTTPS, the console might not load until you configure SSL on the server.</p>
<h3 id="heading-configuring-ssl">Configuring SSL</h3>
<p>Depending on your server and domain, you'll need to handle SSL.</p>
<p>The following command will issue an SSL certificate using <code>certbot</code>. Learn more about certbot from the <a target="_blank" href="https://certbot.eff.org/">docs</a></p>
<pre><code class="lang-bash">sudo certbot --nginx -d example.com
</code></pre>
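<p>certbot also sets up automatic renewal of the certificate. You can verify that renewal will work with a dry run:</p>
<pre><code class="lang-bash">sudo certbot renew --dry-run
</code></pre>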
<p><strong>Note</strong>: certbot needs port <code>80</code> of the server to be open; if it's closed, open it before running the command and close it again afterward.</p>
<p>Following is the command to open it using <code>ufw</code>:</p>
<pre><code class="lang-bash">sudo ufw allow 80
</code></pre>
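<p>Once certbot has issued the certificate, you can close the port again by deleting the rule:</p>
<pre><code class="lang-bash">sudo ufw delete allow 80
</code></pre>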
<p>On servers, for best security, it is recommended to always use SSL and keep port 80 closed. However, this depends on the services you're running.</p>
<p>That's it, now you have your own self-hosted Appwrite up and running using NGINX as a reverse proxy.</p>
]]></content:encoded></item><item><title><![CDATA[Appwrite on Google Cloud]]></title><description><![CDATA[Appwrite is a self-hosted backend-as-a-service platform that provides developers with all the core APIs required to build any application.
Appwrite is a great one-stop solution for various things. I personally like Appwrite because of the self-hostin...]]></description><link>https://blog.wilfredalmeida.com/appwrite-on-google-cloud</link><guid isPermaLink="true">https://blog.wilfredalmeida.com/appwrite-on-google-cloud</guid><category><![CDATA[BlogsWithCC]]></category><category><![CDATA[Appwrite]]></category><category><![CDATA[google cloud]]></category><category><![CDATA[Docker]]></category><category><![CDATA[WeMakeDevs]]></category><dc:creator><![CDATA[Wilfred Almeida]]></dc:creator><pubDate>Thu, 01 Dec 2022 14:28:11 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1669900149668/hVmjneuTx.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><a target="_blank" href="https://appwrite.io/">Appwrite</a> is a self-hosted backend-as-a-service platform that provides developers with all the core APIs required to build any application.</p>
<p>Appwrite is a great one-stop solution for various things. I personally like Appwrite because of the self-hosting feature it provides. All it takes is one Docker command and you get a complete backend system up and running in minutes.</p>
<p>In this post, I've discussed how to deploy Appwrite on Google Cloud.</p>
<h3 id="heading-creating-a-compute-engine-vm">Creating a Compute Engine VM</h3>
<ol>
<li>Navigate to the Compute Engine Dashboard</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1669890959124/u35B3tUOv.png" alt /></p>
<ol start="2">
<li><p>Click on Create Instance</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1669891036238/yQbHf9OHR.png" alt /></p>
</li>
<li><p>Creating Instance</p>
<p>Appwrite <a target="_blank" href="https://appwrite.io/docs/installation#:~:text=The%20minimum%20requirements%20to%20run,operating%20system%20that%20supports%20Docker.">docs</a> recommend using a machine with at least <strong>1 CPU core</strong> and <strong>2GB of RAM.</strong></p>
<p>However, I tried the <code>f1-micro</code> machine type from the <code>N1</code> series, which has <code>1 vCPU and 614MB memory</code>, but the deployment took over 25 minutes to come online, so this machine type is not recommended.</p>
<p>I ended up using the <code>n1-standard-1</code> machine type with <code>1 vCPU and 3.75GB memory</code>, which at the time of writing costs me US$25.27/month. It took 5 minutes for the containers to initialize and come online after all config was provided.</p>
<p>While creating the instance, there's an option to deploy a container image to the VM. I specified the Appwrite image from <a target="_blank" href="https://hub.docker.com/u/appwrite">DockerHub</a>, but after the VM was created, the image wasn't deployed.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1669895261978/EroBDrKNZ.png" alt /></p>
<p>tl;dr here's the shell command for creating the instance.</p>
<p><strong>PS:</strong> If it gives an error, you can instead set up all the config in the browser console and copy the equivalent command from there: scroll to the end of the creation form and you'll see a copy-command option. Then paste it into Cloud Shell.</p>
<pre><code class="lang-bash">gcloud compute instances create instance-1 --project={{PROJECT_ID_HERE}} --zone=us-central1-a --machine-type=n1-standard-1 --network-interface=network-tier=PREMIUM,subnet=default --maintenance-policy=MIGRATE --provisioning-model=STANDARD --service-account=230991838846-compute@developer.gserviceaccount.com --scopes=https://www.googleapis.com/auth/cloud-platform --tags=http-server,https-server --create-disk=auto-delete=yes,boot=yes,device-name=instance-1,image=projects/debian-cloud/global/images/debian-11-bullseye-v20221102,mode=rw,size=10,type=projects/{{PROJECT_ID_HERE}}/zones/us-central1-a/diskTypes/pd-balanced --no-shielded-secure-boot --shielded-vtpm --shielded-integrity-monitoring --reservation-affinity=any
</code></pre>
<p>Following is all the config I did for the VM</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1669899682676/rzXPolWPC.png" alt />
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1669899704601/_56tpnvLf.png" alt />
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1669899719860/2CJaxv0a8.png" alt /></p>
</li>
<li><p>Connecting to the VM</p>
<p>After creation, the VM will get an external IP address that will be used to access Appwrite from the web. To install Appwrite, we need to SSH into the VM. Google Cloud provides SSH from the browser; the option to connect via SSH is present in the VM dashboard</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1669895993658/Z-iNmJAVX.png" alt /></p>
<p>After clicking, a new window will open with the SSH connection established and a terminal view available.</p>
<p>It is recommended to fetch updates first using</p>
<pre><code class="lang-bash">sudo apt-get update
sudo apt-get upgrade
</code></pre>
</li>
</ol>
<h3 id="heading-installing-and-configuring-appwrite">Installing and Configuring Appwrite</h3>
<p>Now that the VM is created, Appwrite can be installed on it.</p>
<p>Appwrite installs as Docker containers, so Docker is required. The VM runs Debian 11 by default. Follow the official Docker <a target="_blank" href="https://docs.docker.com/engine/install/debian/">docs</a> to install Docker. I installed it using the following command</p>
<pre><code class="lang-bash">sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin
</code></pre>
<p><strong>Note:</strong> You might need to add the docker repository into apt. Refer to the <a target="_blank" href="https://docs.docker.com/engine/install/debian/#install-using-the-repository">docs</a>.</p>
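<p>For reference, at the time of writing the Docker docs set up the repository roughly as follows; these commands may change over time, so check the linked docs for the current instructions:</p>
<pre><code class="lang-bash"># Prerequisites and Docker's GPG key
sudo apt-get install ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg

# Add the repository and refresh the package index
echo "deb [signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian $(. /etc/os-release &amp;&amp; echo $VERSION_CODENAME) stable" | sudo tee /etc/apt/sources.list.d/docker.list
sudo apt-get update
</code></pre>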
<p>Finally, once Docker is installed, Appwrite can be installed. It takes only one command. It is recommended to read the Appwrite installation <a target="_blank" href="https://appwrite.io/docs/installation">docs</a>. To install Appwrite, run the following command</p>
<pre><code class="lang-bash">sudo docker run -it --rm \
    --volume /var/run/docker.sock:/var/run/docker.sock \
    --volume <span class="hljs-string">"<span class="hljs-subst">$(pwd)</span>"</span>/appwrite:/usr/src/code/appwrite:rw \
    --entrypoint=<span class="hljs-string">"install"</span> \
    appwrite/appwrite:1.1.1
</code></pre>
<p>While installing, you'll be asked for some configuration options; I kept everything default. Customize them as per your needs.</p>
<p>Once the values are provided, it'll take some time and then Appwrite will be installed.</p>
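<p>To check that the containers came up, you can list them with Docker Compose, assuming the default install directory <code>appwrite</code> in your current folder:</p>
<pre><code class="lang-bash">cd appwrite
sudo docker compose ps
</code></pre>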
<p>The Appwrite dashboard can be accessed using the external IP of the VM, which can be found on the VM's dashboard in the Google Cloud Console. Enter the IP in your browser and the console will load. A privacy warning might be shown due to the self-signed SSL certificate; it's fine, you can click Advanced-&gt;Proceed to unsafe and visit the console.</p>
<p>Appwrite installs an SSL certificate so all communication is encrypted. Since the certificate is self-signed, your browser doesn't trust it, which is why the warning is shown.</p>
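<p>You can also verify the deployment from inside the VM over SSH; the <code>-k</code> flag tells curl to accept the self-signed certificate, and <code>-I</code> fetches only the response headers:</p>
<pre><code class="lang-bash">curl -k -I https://localhost
</code></pre>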
<p>If you've changed the default ports for HTTP (80) and HTTPS (443) while installing, the console won't load on the bare IP; include the port in the URL.</p>
<p>That's all, you now have your self-hosted backend system up and running on Google Cloud.</p>
]]></content:encoded></item></channel></rss>