[{"content":"After finishing a comprehensive Docker course, I felt comfortable with container primitives and the security practices around it.\nBut tutorials only get you so far. To internalize these concepts and deepen my understanding, I decided to containerize my personal blog.\nMy blog is already built with Hugo and uses the Papermod theme. Before, I just relied on a standard GitHub Action that compiled the static site and deployed it directly to GitHub Pages and it worked perfectly fine.\nI wanted to transition from this static deployment to a fully containerized artifact. By packaging the blog into a Docker image and storing it to a Registry, I achieve a few things;\nEnvironment Agnosticism: The container runs exactly the same on my laptop or any other machine whether it\u0026rsquo;s my homelab server or in a cloud VPS. Immutability: Every version of my blog application is tagged and stored. Practice: I wanted to experiment with this application container fundamentals, starting from the Dockerfile multi-stage builds, security practices (non-root containers) to CI/CD automation. How I Implemented this 1. Dockerfile I created a Dockerfile at the root directory of my blog repository. To keep the final image lightweight and secure, I used a multi-stage build;\nFROM hugomods/hugo:debian-exts AS builder ARG HUGO_BASEURL=\u0026#34;http://localhost:8080/\u0026#34; WORKDIR /app COPY . . RUN hugo --minify -b ${HUGO_BASEURL} FROM nginxinc/nginx-unprivileged:alpine COPY default.conf /etc/nginx/conf.d/default.conf COPY --from=builder /app/public /usr/share/nginx/html EXPOSE 8080 CMD [\u0026#34;nginx\u0026#34;, \u0026#34;-g\u0026#34;, \u0026#34;daemon off;\u0026#34;] Stage 1: Uses the official hugomods/hugo:debian-exts image to compile the Markdown files into HTML by runnig hugo --minify. Stage 2: Uses an unprivileged Nginx image to serve the HTML content. This image drops root permissions and runs the container safely. 2. 
Nginx Configuration Hugo generates \u0026ldquo;pretty URLs\u0026rdquo; (where /posts/my-post/ maps to an index.html file) and custom 404.html pages. Out of the box, Nginx doesn\u0026rsquo;t know how to route these properly. I wrote a custom default.conf to handle the routing gracefully:\nserver { listen 8080; server_name localhost; root /usr/share/nginx/html; index index.html index.htm; # Handle Hugo\u0026#39;s pretty URLs location / { try_files $uri $uri/ =404; } error_page 404 /404.html; location = /404.html { internal; } } 3. CI/CD Pipeline I replaced my old GitHub Pages workflow with a Docker release pipeline. On every merge to main, or whenever I push a semantic version tag like v1.0.0, a GitHub Action spins up. It checks out the code, logs into GHCR, extracts metadata for tagging, and builds and pushes the image.\nI also added a .dockerignore file, basically a .gitignore for the Docker daemon, which prevents bloated local directories like .git from slowing down the build context.\nWhen I was testing the image locally, I ran into a problem. Nginx was serving the raw HTML files without any styling, and eventually I found the issue: I never realized the Papermod theme was being tracked as a Git submodule. By default, the directories where submodules are attached are always empty when the repo is cloned. All these months I had an empty folder locally for the Papermod theme. My blog application worked fine because the GitHub Actions runner fetched the theme and deployed it to GitHub Pages:\nsteps: - name: Checkout uses: actions/checkout@v4 with: submodules: recursive A plain git clone leaves submodule directories completely empty to save bandwidth. 
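The symptom and the fix are easy to check from the command line. A minimal sketch (the status output format is standard Git; run it inside a freshly cloned repo with submodules):

```shell
# After a plain clone, uninitialized submodules are listed with a
# leading "-" before the recorded commit hash:
git submodule status

# Fetch and check out the recorded commit for every submodule:
git submodule update --init --recursive

# Running status again shows the "-" gone once the working tree
# is populated:
git submodule status
```

Cloning with `git clone --recurse-submodules` avoids the extra step entirely.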
To populate them, you run git submodule update --init --recursive.\nReferences https://github.com/Diing54/blog ","permalink":"https://cloudiing.com/posts/dockerized-my-blog-setup/","summary":"\u003cp\u003eAfter finishing a comprehensive Docker course, I felt comfortable with container primitives and the security practices around them.\u003c/p\u003e\n\u003cp\u003eBut tutorials only get you so far. To internalize these concepts and deepen my understanding, I decided to containerize my personal blog.\u003c/p\u003e\n\u003cp\u003eMy blog is already built with \u003ca href=\"https://github.com/gohugoio/hugo\"\u003eHugo\u003c/a\u003e and uses the Papermod theme. Before, I just relied on a standard GitHub Action that compiled the static site and deployed it directly to GitHub Pages, and it worked perfectly fine.\u003c/p\u003e","title":"Dockerized My Blog Setup"},{"content":"You\u0026rsquo;ve probably been running containers the wrong way, like I was a while ago, without knowing it. Processes inside a container run as the root user by default. Here is why this is a security risk and how to fix it.\nShared Kernel Unlike Virtual Machines, Docker containers share your host machine\u0026rsquo;s kernel. The root user inside your container is basically the same root user on your actual computer.\nIf an attacker manages to exploit a vulnerability in your containerized app and escapes the container, they land on your host system with root permissions.\nI containerized a program called yt-dlp, a command-line tool for downloading YouTube videos. Initially, I ran the container as root. 
This can be seen from the Dockerfile I used to build its image:\nFROM ubuntu:24.04 WORKDIR /mydir RUN apt-get update \u0026amp;\u0026amp; apt-get install -y curl python3 ffmpeg RUN curl -L https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp -o /usr/local/bin/yt-dlp RUN chmod a+x /usr/local/bin/yt-dlp ENTRYPOINT [\u0026#34;/usr/local/bin/yt-dlp\u0026#34;] If someone injects malicious code into one of yt-dlp\u0026rsquo;s dependencies, the blast radius includes my machine. A malicious process can modify files on my computer, especially if there are bind mounts.\nThe Fix Instead of letting our container run as root by default, we need to bake an unprivileged user directly into the image. Here is the new Dockerfile:\nFROM ubuntu:24.04 WORKDIR /mydir RUN apt-get update \u0026amp;\u0026amp; apt-get install -y curl python3 ffmpeg RUN curl -L https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp -o /usr/local/bin/yt-dlp RUN chmod a+x /usr/local/bin/yt-dlp # 1. Create a standard, restricted user RUN useradd -m appuser # 2. Switch to the new user USER appuser # 3. Execute the app ENTRYPOINT [\u0026#34;/usr/local/bin/yt-dlp\u0026#34;] After the line USER appuser, all root privileges are dropped. Any command that follows, including the ENTRYPOINT that actually runs the application, will be executed by the restricted appuser. References https://medium.com/@tanmayrane209/why-you-should-never-run-containers-as-root-07d1fa1127d5 ","permalink":"https://cloudiing.com/posts/stop-running-docker-containers-as-root/","summary":"\u003cp\u003eYou\u0026rsquo;ve probably been running containers the wrong way, like I was a while ago, without knowing it. Processes inside a container run as the \u003ccode\u003eroot\u003c/code\u003e user by default. Here is why this is a security risk and how to fix it.\u003c/p\u003e\n\u003ch2 id=\"shared-kernel\"\u003eShared Kernel\u003c/h2\u003e\n\u003cp\u003eUnlike Virtual Machines, Docker containers share your host machine\u0026rsquo;s kernel. 
The \u003ccode\u003eroot\u003c/code\u003e user inside your container is basically the same \u003ccode\u003eroot\u003c/code\u003e user on your actual computer.\u003c/p\u003e","title":"Stop Running Docker Containers as Root"},{"content":"Two days ago at NVIDIA GTC, NVIDIA introduced the next evolution of its upscaling and frame generation technology, DLSS 5. It drew an immediate backlash from the gaming community, which was divided. We even argued about it in the WhatsApp group chat with my friends. Critics labelled it \u0026ldquo;AI Slop\u0026rdquo;; the rendering results showcased by NVIDIA looked \u0026ldquo;horrible\u0026rdquo;.\nFrom what I noted, this is really a deeper debate on Engineering vs Artistry\n\u0026ldquo;Developers will get lazy and will ship poorly optimized games hoping for DLSS to upscale it\u0026rdquo; - Yes, this is a real concern, but historically new tech has tended to remove constraints. What the devs decide to do with this freedom is the real variable. Combining their expertise with DLSS will unlock new heights. Cars didn\u0026rsquo;t make people too lazy to walk; they enabled faster mobility.\n\u0026ldquo;AI will eliminate the Artist\u0026rsquo;s intent\u0026rdquo; - DLSS will become another tool in the hands of artists, one that enhances their creativity. Jensen even mentions that developers and artists will retain full control over how this AI technology is implemented.\nAI will not remove creativity; it will simply be used as leverage to speed up work\nI found a post on X by a game developer, and this is what he had to say about DLSS 5: here\nThis is a shift in the industry from traditional 3D graphics to combining it with neural rendering. AI reconstruction of frames will become easier and more accurate in the future, and far more efficient than brute-forcing raw frames. So how the frames are generated will not matter; what matters is how users perceive the final quality. 
The demos from NVIDIA looked off, but people are forgetting that this tech is still in its infancy. This will be the future of gaming.\nReferences https://www.youtube.com/live/jw_o0xr8MWU?si=tTgv8rSlWpUD9ao- https://prosettings.net/blog/upscaling-technologies-explained/ ","permalink":"https://cloudiing.com/posts/ai-in-gaming-isnt-slop/","summary":"\u003cp\u003eTwo days ago at \u003ca href=\"https://www.youtube.com/live/jw_o0xr8MWU?si=tTgv8rSlWpUD9ao-\"\u003eNVIDIA GTC\u003c/a\u003e, NVIDIA introduced the next evolution of its \u003ca href=\"https://prosettings.net/blog/upscaling-technologies-explained/\"\u003eupscaling\u003c/a\u003e and frame generation technology, DLSS 5. It drew an immediate backlash from the gaming community, which was divided. We even argued about it in the WhatsApp group chat with my friends. Critics labelled it \u0026ldquo;AI Slop\u0026rdquo;; the rendering results showcased by NVIDIA looked \u0026ldquo;horrible\u0026rdquo;.\u003c/p\u003e\n\u003cblockquote\u003e\n\u003cp\u003eFrom what I noted, this is really a deeper debate on Engineering vs Artistry\u003c/p\u003e","title":"AI in Gaming Isn't Slop"},{"content":"I recently ran into an issue while trying to SSH into my Raspberry Pi, which is a dedicated DNS server for my homelab. The dnsmasq server is configured to listen on the Pi\u0026rsquo;s IP for incoming DNS requests. If I restarted the service manually with systemctl restart dnsmasq, it worked fine. But every time I rebooted the Pi, the service failed to start. The Pi\u0026rsquo;s IP is mapped to the name prod.homelab, and with the DNS server failing to start, the command ssh astro@prod.homelab failed because the client machine could not resolve the name prod.homelab. 
I had to SSH in using the IP address.\nThe Problem Checking the status with systemctl status dnsmasq showed this error:\ndnsmasq: failed to create listening socket for 192.168.100.56: Cannot assign requested address FAILED to start up systemd[1]: dnsmasq.service: Failed with result \u0026#39;exit-code\u0026#39;. That was when I realized the issue was a race condition between the networking system and the dnsmasq service.\nMy config for the dnsmasq service used listen-address=192.168.100.56, and when the Pi boots, the dnsmasq service starts immediately and tries to bind to the IP address it has been instructed to use. By this time, my Pi hasn\u0026rsquo;t yet been assigned an IP by the router (I configured router-based static IP assignment), so the dnsmasq service crashes. The IP assignment was simply slower than the dnsmasq startup.\nThe Fix Instead of binding to a specific IP address as before (listen-address=192.168.100.56), the solution was to bind to the interface and use the bind-dynamic directive. This tells the dnsmasq service to start up successfully even if the network isn\u0026rsquo;t yet ready, and simply wait for the interface to come online.\n# OLD (Caused the crash) # listen-address=127.0.0.1,192.168.100.56 # NEW (The Fix) # Listen on local machine and the WiFi adapter interface=lo interface=wlan0 bind-dynamic The service now starts successfully after a quick reboot.\nReferences ","permalink":"https://cloudiing.com/posts/fixing-my-dns-server-that-was-crashing-on-boot/","summary":"\u003cp\u003eI recently ran into an issue while trying to SSH into my Raspberry Pi, which is a dedicated DNS server for my homelab. The dnsmasq server is configured to listen on the Pi\u0026rsquo;s IP for incoming DNS requests. If I restarted the service manually \u003ccode\u003esystemctl restart dnsmasq\u003c/code\u003e, it worked fine. But every time I rebooted the Pi, the service failed to start. 
The Pi\u0026rsquo;s IP is mapped to the name prod.homelab, and with the DNS server failing to start, the command \u003ccode\u003essh astro@prod.homelab\u003c/code\u003e failed because the client machine could not resolve the name \u003ccode\u003eprod.homelab\u003c/code\u003e. I had to SSH in using the IP address.\u003c/p\u003e","title":"Fixing My Dns Server That Was Crashing on Boot"},{"content":"It has been a few days so far learning about containers, and I\u0026rsquo;m fascinated by the advances they bring to the tech industry. Containers revolutionized the development, testing and deployment of applications.\nBefore containers, teams would develop applications locally on their machines and ship the application to the operations team. Developers needed to specify the dependencies, together with their versions and environment configurations, for the application to run successfully on other machines, e.g. the production servers. This would sometimes be a lot of work writing all the requirements needed for a specific application to run, and on the other end, the operations team would make mistakes in configuring the environment and the app would fail to run. This would lead to conflicts between the two teams, with the developer saying \u0026ldquo;It was working on my end\u0026rdquo;. With containers, teams use an image, a portable package that contains everything required to run an application, which is then spun up as a container.\nWhat is a Container In my own technical terms, a container is just a Linux process with a set of kernel features applied to it. These kernel features are restrictions on this process, such as what it can see, what it can touch and what system resources it can use. 
These restrictions are enforced by:\nNamespaces - What this process can see cgroups - Managing or limiting the resource usage of this process Filesystem Isolation - Its own filesystem, which its child processes will reference If we strip off these restrictions, we are left with just a chroot - a shell command that changes the apparent root directory of the current running process and its children.\nDocker just automates the creation of a container and adds other features such as images, portable packaging and a friendly interface for manipulating these containers.\nI came across this post on X, and it\u0026rsquo;s more accurate in some senses\nDocker containers therefore bring a standardized portable package/image that includes everything needed to run an application, including the application code.\nReferences Read more about containers and docker from my Kasten (https://diing54.github.io/devops/docs/containerization-fundamentals/) ","permalink":"https://cloudiing.com/posts/containerization/","summary":"\u003cp\u003eIt has been a few days so far learning about containers, and I\u0026rsquo;m fascinated by the advances they bring to the tech industry. Containers revolutionized the development, testing and deployment of applications.\u003c/p\u003e\n\u003cp\u003eBefore containers, teams would develop applications locally on their machines and ship the application to the operations team. Developers needed to specify the dependencies, together with their versions and environment configurations, for the application to run successfully on other machines, e.g. the production servers. This would sometimes be a lot of work writing all the requirements needed for a specific application to run, and on the other end, the operations team would make mistakes in configuring the environment and the app would fail to run. This would lead to conflicts between the two teams, with the developer saying \u0026ldquo;It was working on my end\u0026rdquo;. 
With containers, teams use an image, a portable package that contains everything required to run an application, which is then spun up as a container.\u003c/p\u003e","title":"Containerization"},{"content":"I\u0026rsquo;ve always thought of HTTPS as \u0026ldquo;encryption\u0026rdquo; or \u0026ldquo;securing\u0026rdquo; data in a network, like many developers out there, but that\u0026rsquo;s just half of the story.\nHTTPS is actually two things combined:\nEncryption - Securing data so that other people can\u0026rsquo;t read it. Authentication - Proving that the intended server is really who it claims to be. This is the major part of HTTPS, in my experience from setting up HTTPS in my homelab. Here is a simple high-level explanation of how HTTPS works.\nLet\u0026rsquo;s say I want to log in to my Facebook account in my browser, so I visit https://www.facebook.com/. The server that wants to serve the login page needs to prove that it is the actual Facebook server and not someone pretending. The way to prove this is with a Certificate. Certificates are signed and issued by Certificate Authorities that are trusted by most browsers by default. To prove its identity, the server sends its Certificate back to the browser. The browser checks whether this Certificate was signed by a trusted Certificate Authority; if yes, trust is earned and a secure tunnel to the server is established. 
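That trust check - was this certificate signed by an authority I trust? - can be reproduced in miniature with openssl. This is a hedged sketch using a throwaway CA; all filenames and subject names are illustrative, not from any real setup:

```shell
# 1. Create a throwaway "Certificate Authority" (self-signed)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout ca.key -out ca.crt -subj "/CN=Toy Root CA"

# 2. The server generates a key and a certificate signing request
openssl req -newkey rsa:2048 -nodes \
  -keyout server.key -out server.csr -subj "/CN=myserver.homelab"

# 3. The CA signs the request, producing the server's certificate
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out server.crt -days 1

# 4. The check a browser performs, in miniature:
openssl verify -CAfile ca.crt server.crt
# server.crt: OK
```

A real browser does more (hostname matching, expiry, revocation), but the chain-of-signatures check is the heart of it.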
I can now type in my email and password for my Facebook account, and this data will be encrypted and sent to the Facebook servers securely.\nReferences https://diing54.github.io/devops/docs/httphttpsssltls-and-ca/ ","permalink":"https://cloudiing.com/posts/https/","summary":"\u003cp\u003eI\u0026rsquo;ve always thought of HTTPS as \u0026ldquo;encryption\u0026rdquo; or \u0026ldquo;securing\u0026rdquo; data in a network, like many developers out there, but that\u0026rsquo;s just half of the story.\u003c/p\u003e\n\u003cp\u003eHTTPS is actually two things combined:\u003c/p\u003e\n\u003col\u003e\n\u003cli\u003eEncryption - Securing data so that other people can\u0026rsquo;t read it.\u003c/li\u003e\n\u003cli\u003eAuthentication - Proving that the intended server is really who it claims to be. This is the major part of HTTPS, in my experience from setting up HTTPS in my homelab.\u003c/li\u003e\n\u003c/ol\u003e\n\u003cp\u003eHere is a simple high-level explanation of how HTTPS works.\u003c/p\u003e","title":"Https"},{"content":"It\u0026rsquo;s been a few days since I started a simple homelab with a Raspberry Pi and an old laptop lying around. The first objectives of this project have been practicing networking fundamentals such as IP addresses, subnets, DNS, and security practices such as firewalls and SSH hardening. Other concepts/experiments will build on these foundations as the homelab advances into more complex stages of learning cloud native technologies.\nThe homelab is currently hosted on GitHub with proper documentation of what I\u0026rsquo;ve been doing in a clear, structured way. I believe this is a better way of learning than endlessly watching tutorials on YouTube without taking action. 
I get my hands dirty here by experimenting with different approaches and analysing the \u0026ldquo;whys\u0026rdquo;.\nReferences https://github.com/Diing54/homelab ","permalink":"https://cloudiing.com/posts/homelab/","summary":"\u003cp\u003eIt\u0026rsquo;s been a few days since I started a simple homelab with a Raspberry Pi and an old laptop lying around. The first objectives of this project have been practicing networking fundamentals such as IP addresses, subnets, DNS, and security practices such as firewalls and SSH hardening. Other concepts/experiments will build on these foundations as the homelab advances into more complex stages of learning cloud native technologies.\u003c/p\u003e\n\u003cp\u003eThe homelab is currently hosted on GitHub with proper documentation of what I\u0026rsquo;ve been doing in a clear, structured way. I believe this is a better way of learning than endlessly watching tutorials on YouTube without taking action. I get my hands dirty here by experimenting with different approaches and analysing the \u0026ldquo;whys\u0026rdquo;.\u003c/p\u003e","title":"Homelab"},{"content":"This afternoon as I was taking a break, I came across a good YouTube video here explaining what exactly a server is.\n\u0026ldquo;A server is not just a physical computer, it\u0026rsquo;s a role that a computer takes; because any ordinary desktop computer can be set up as a server, and it doesn\u0026rsquo;t necessarily have to be a powerful computer.\u0026rdquo; - That\u0026rsquo;s a bar\nReferences ","permalink":"https://cloudiing.com/posts/a-server/","summary":"\u003cp\u003eThis afternoon as I was taking a break, I came across a good YouTube video \u003ca href=\"https://youtu.be/UjCDWCeHCzY?si=CSkhZHpJHWa-IunQ\"\u003ehere\u003c/a\u003e explaining what exactly a server is.\u003c/p\u003e\n\u003cp\u003e\u0026ldquo;A server is not just a physical computer, it\u0026rsquo;s a role that a computer takes; because any ordinary desktop computer can be set up as a server, and it doesn\u0026rsquo;t necessarily 
have to be a powerful computer.\u0026rdquo; - That\u0026rsquo;s a bar\u003c/p\u003e\n\u003ch3 id=\"references\"\u003eReferences\u003c/h3\u003e","title":"A Server"},{"content":"Finally, I have decided to render my DevOps notes on a site powered by Hugo and Hextra and deploy it using GitHub Pages. This is what I call My Kasten, a German word meaning a crate or a container. It will act as my Second Brain, where I will document everything I learn.\nThe purpose of this is that I believe it will help me, or anyone else who finds it useful, to refer back to something in the future.\nReferences https://diing54.github.io/devops/ ","permalink":"https://cloudiing.com/posts/migrated-my-devops-notes-to-hugo-and-hextra/","summary":"\u003cp\u003eFinally, I have decided to render my DevOps notes on a site powered by Hugo and Hextra and deploy it using GitHub Pages. This is what I call My Kasten, a German word meaning a crate or a container. It will act as my Second Brain, where I will document everything I learn.\u003c/p\u003e\n\u003cp\u003eThe purpose of this is that I believe it will help me, or anyone else who finds it useful, to refer back to something in the future.\u003c/p\u003e","title":"Migrated My Devops Notes to Hugo and Hextra"},{"content":"Recently, I crossed a major milestone: I landed my first freelance contract for a local startup NGO doing good work. 
They needed a digital presence.\nI built them a site and integrated a Content Management System that the admin can use to draft the content they want to display on the website, and the site builds and deploys automatically.\u003c/p\u003e\n\u003ch3 id=\"references\"\u003eReferences\u003c/h3\u003e","title":"My First Contract"},{"content":"Today I learnt something worth publishing here. As developers we write code day by day and it is executed by our machines. But how do our computers understand this code and execute it?\nI am going to walk through the four core stages GCC uses to transform a simple C program into a state that a machine can understand and execute.\nI created a simple file main.c below:\n#include \u0026lt;stdio.h\u0026gt; int main() { int a = 10; int b = 5; int result = a + b; printf(\u0026#34;The result is: %d\\n\u0026#34;, result); return 0; } Step 1: Invoking the Preprocessor This is the first stage. The preprocessor finds the line #include \u0026lt;stdio.h\u0026gt;, extracts the stdio.h header file and pastes its entire contents into our source code main.c.\nWe run the command gcc -E main.c -o main.i The -E flag tells GCC to stop after preprocessing -o main.i specifies the output file name. 
The .i extension is the standard for preprocessed C files Step 2: Compile to Assembly Code We take the preprocessed file and compile it to assembly language, a low-level but still human-readable representation.\nWe run the command gcc -S main.i -o main.s The -S flag tells GCC to stop after compiling to assembly The output is main.s with assembly code inside Step 3: Assemble to Object Code The assembler\u0026rsquo;s job is to convert the human-readable assembly code into pure machine code (binary)\nWe run the command gcc -c main.s -o main.o The -c flag tells the compiler to stop after the assembly stage The output is the object file main.o, which is not yet executable Step 4: Link to Create an Executable File Our object file main.o contains the machine code for our main function, but it doesn\u0026rsquo;t contain the code for the printf function. The linker\u0026rsquo;s job is to find the printf code in the C standard library and combine it with our object file to create a runnable program\nWe run the command gcc main.o -o my_program The final executable file my_program is created In the terminal, we can now run our program\n./my_program\nThe output is:\nThe result is: 15\nReferences ","permalink":"https://cloudiing.com/posts/under-the-hood-of-a-gcc-compiler/","summary":"\u003cp\u003eToday I learnt something worth publishing here. As developers we write code day by day and it is executed by our machines. But how do our computers understand this code and execute it?\u003c/p\u003e\n\u003cp\u003eI am going to walk through the four core stages GCC uses to transform a simple C program into a state that a machine can understand and execute.\u003c/p\u003e\n\u003cp\u003eI created a simple file \u003ccode\u003emain.c\u003c/code\u003e below:\u003c/p\u003e","title":"Under the Hood of a Gcc Compiler"},{"content":"I have been working on my final year project lately. Today, I plan to export the trained model onto the Raspberry Pi AI Camera hardware. 
The model size is 22 MB, and the AI camera can accommodate only 8 MB. Luckily, there are quantization methods provided by Ultralytics that will shrink the model without it losing too much accuracy.\nReferences ","permalink":"https://cloudiing.com/posts/final-year-project/","summary":"\u003cp\u003eI have been working on my final year project lately. Today, I plan to export the trained model onto the Raspberry Pi AI Camera hardware. The model size is 22 MB, and the AI camera can accommodate only 8 MB. Luckily, there are quantization methods provided by Ultralytics that will shrink the model without it losing too much accuracy.\u003c/p\u003e\n\u003ch3 id=\"references\"\u003eReferences\u003c/h3\u003e","title":"Final Year Project"},{"content":"Another day, another automation; it may not be a complex one, but it\u0026rsquo;s still an automation. Today I created another simple, beautiful script that automates my blogging workflow. Instead of manually navigating to my Hugo directory and running commands, I can create a new post from anywhere in my system by just running the custom command \u0026ldquo;blog\u0026rdquo;.\nHow it works\nRun blog from the command line Enter your blog title The script automatically generates a file using Hugo\u0026rsquo;s archetype template. 
The file\u0026rsquo;s name is the title provided. It then opens the file automatically in Neovim for editing The script:\n#!/bin/bash BLOG_DIR=\u0026#34;$HOME/blog\u0026#34; cd \u0026#34;$BLOG_DIR\u0026#34; read -p \u0026#34;Enter post title: \u0026#34; title filename=$(echo \u0026#34;$title\u0026#34; | tr \u0026#39;[:upper:]\u0026#39; \u0026#39;[:lower:]\u0026#39; | tr \u0026#39; \u0026#39; \u0026#39;-\u0026#39; | sed \u0026#39;s/[^a-z0-9-]//g\u0026#39;) hugo new \u0026#34;posts/$(unknown).md\u0026#34; nvim \u0026#34;content/posts/$(unknown).md\u0026#34; References ","permalink":"https://cloudiing.com/posts/created-another-script/","summary":"\u003cp\u003eAnother day, another automation; it may not be a complex one, but it\u0026rsquo;s still an automation. Today I created another simple, beautiful script that automates my blogging workflow. Instead of manually navigating to my Hugo directory and running commands, I can create a new post from anywhere in my system by just running the custom command \u0026ldquo;blog\u0026rdquo;.\u003c/p\u003e\n\u003cp\u003e\u003cstrong\u003eHow it works\u003c/strong\u003e\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003eRun \u003ccode\u003eblog\u003c/code\u003e from the command line\u003c/li\u003e\n\u003cli\u003eEnter your blog title\u003c/li\u003e\n\u003cli\u003eThe script automatically generates a file using Hugo\u0026rsquo;s archetype template. The file\u0026rsquo;s name is the title provided\u003c/li\u003e\n\u003cli\u003eIt then opens the file automatically in Neovim for editing\u003c/li\u003e\n\u003c/ul\u003e\n\u003cp\u003e\u003cstrong\u003eThe script:\u003c/strong\u003e\u003c/p\u003e","title":"Created Another Script"},{"content":"I created a simple script for generating a new note for my devops learning notes. This will increase my productivity since I will not have to bother typing the current date and time for each and every note. 
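A minimal sketch of what such a note generator can look like - the notes directory, template path, and {{TITLE}}/{{DATE}} placeholders are all assumptions for illustration, not the actual setup:

```shell
#!/bin/bash
# Hedged sketch of a "devops" note generator. NOTES_DIR, template.md
# and the placeholder names are assumptions, not the real script.
NOTES_DIR="$HOME/devops/notes"

read -p "Enter note name: " name
now=$(date '+%Y-%m-%d %H:%M')

# Fill the template's placeholders with the title and current timestamp
sed -e "s/{{TITLE}}/$name/" -e "s/{{DATE}}/$now/" \
    "$NOTES_DIR/template.md" > "$NOTES_DIR/$name.md"
```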
The script works perfectly: I just type \u0026ldquo;devops\u0026rdquo; on my command line in any directory, and I\u0026rsquo;m prompted to enter the name of the file, which will also be the title of the note.\nAll this is done with bash scripting and an existing template used to generate all the notes.\nReferences ","permalink":"https://cloudiing.com/posts/script-for-devops-notes/","summary":"\u003cp\u003eI created a simple script for generating a new note for my devops learning notes. This will increase my productivity since I will not have to bother typing the current date and time for each and every note. The script works perfectly: I just type \u0026ldquo;devops\u0026rdquo; on my command line in any directory, and I\u0026rsquo;m prompted to enter the name of the file, which will also be the title of the note.\u003c/p\u003e","title":"Script for Devops Notes"},{"content":"I\u0026rsquo;m excited because I just configured my blog setup, powered by Hugo and Papermod. I plan to post for as long as I can, and I believe it will be worth it in the future.\n\u0026ldquo;Your work is going to fill a large part of your life, and the only way to be truly satisfied is to do what you believe is great work. And the only way to do great work is to love what you do. If you haven\u0026rsquo;t found it yet, keep looking. Don\u0026rsquo;t settle.\u0026rdquo;\n— Steve Jobs\nReferences ","permalink":"https://cloudiing.com/posts/first-official-post/","summary":"\u003cp\u003eI\u0026rsquo;m excited because I just configured my blog setup, powered by Hugo and Papermod. I plan to post for as long as I can, and I believe it will be worth it in the future.\u003c/p\u003e\n\u003cp\u003e\u003cem\u003e\u0026ldquo;Your work is going to fill a large part of your life, and the only way to be truly satisfied is to do what you believe is great work. And the only way to do great work is to love what you do. If you haven\u0026rsquo;t found it yet, keep looking. 
Don\u0026rsquo;t settle.\u0026rdquo;\u003c/em\u003e\u003cbr\u003e\n\u003cstrong\u003e— Steve Jobs\u003c/strong\u003e\u003c/p\u003e","title":"First Official Post"}]