Programming

Automate your life!

With expertise in C++, Python, HTML, and various other languages, I specialize in firmware programming, website development, and custom solutions for any project. Whether it’s developing the GrowRight system’s embedded firmware, creating seamless integrations, or building responsive websites like Macroponics.com and Macroponics.cloud, I have the skills to bring your ideas to life.

From automating processes to designing intuitive user interfaces, I am capable of handling diverse programming challenges. My experience spans embedded systems, cloud solutions, and web development, making me well-equipped to tackle any project you have in mind.

Let me help automate your life and bring your ideas to reality!

This is a code snippet from the custom EC probe for the GrowRight system.

On the left, you’ll find example code from the GrowRight system, showcasing my proficiency in firmware programming and attention to detail in developing embedded solutions.

Below, you can watch a demo video of the custom peristaltic pumps designed specifically for the GrowRight system. From the circuit boards to the firmware and GUI software, every aspect was custom engineered to work seamlessly together, demonstrating my expertise in hardware and software integration.

Shown below are screenshots from the web GUI that was created for the GrowRight system.

[Screenshots of the GrowRight web GUI]

My personal journey developing software for the GrowRight system, captured in the photographs and screenshots above, has taught me that the true lessons of programming and cybersecurity are forged through direct experience.

The Ultimate Teacher: Hands-On Experience in Programming and Cybersecurity

While hundreds of tutorials and guides can provide a solid foundation, nothing compares to the lessons learned through direct, hands-on practice. Engaging in activities like pen testing and building your own systems not only reinforces theoretical knowledge but also exposes you to real-world challenges that no tutorial can fully simulate. By actively exploring vulnerabilities, experimenting with code, and confronting the unpredictable nature of cyber threats, you cultivate a deeper, more intuitive understanding of programming and cybersecurity. This experiential approach transforms abstract concepts into tangible skills, making you not just a passive learner, but an active architect of your own expertise.

Tutorials can serve as valuable jumping-off points for anyone new to the field, offering a structured introduction to complex topics. However, as many experienced professionals have found, these resources are just the beginning. This page is dedicated to the discussion of programming, cybersecurity, and related fields, and it represents a collaborative space where experienced practitioners and new learners come together.

By forming a close-knit community of programmers, hackers, cybersecurity professionals, and curious newcomers, we can more effectively share insights, best practices, and cutting-edge techniques. This collective approach not only helps disseminate the most useful information but also fosters an environment where practical experience, real-world problem solving, and continual learning can thrive. Whether you’re just starting out or looking to deepen your expertise, our community stands as a testament to the idea that while tutorials lay the groundwork, hands-on experience and mutual collaboration are the true teachers in the ever-evolving landscape of technology.

Black Hat vs. White Hat Hacking: Understanding the Ethical Divide

In the world of cybersecurity and programming, hacking is often misunderstood. The term “hacker” does not inherently imply criminal activity; it simply refers to someone who deeply understands computer systems and can manipulate them in ways that others may not anticipate. What separates ethical hackers from malicious ones is intent.

Black Hat Hacking refers to hacking done with malicious intent. Black hat hackers exploit vulnerabilities in systems for personal gain, whether through stealing data, causing disruption, or bypassing security measures for unethical purposes. Their actions can cause harm, violate privacy, and disrupt essential services.

White Hat Hacking, on the other hand, is the practice of using hacking skills for ethical and constructive purposes. White hat hackers, often known as “ethical hackers,” work to strengthen cybersecurity by identifying vulnerabilities before malicious hackers can exploit them. They follow legal and moral guidelines, ensuring that their work benefits individuals, organizations, and society as a whole.

The power of hacking, like any advanced technology, depends entirely on how it is used. That is why I have dedicated a separate page to an ethical framework for hacking and cybersecurity. This framework provides guidance on how to use this technology responsibly, ensuring that it serves as a force for good rather than destruction. Understanding the ethical responsibilities of programming and hacking is crucial, as these skills have the potential to shape the digital world for better or worse.

Latest Blog Posts on Programming
March 20, 2025

Docker is a simple, easy-to-use program once you learn the basic command structure, but getting started can still feel complicated. I have put together this article as a guide to some of the commands I use most often when setting up sites with Docker.

What is Docker?

Docker is a containerization platform that allows you to package applications and their dependencies into lightweight, portable containers. These containers run on any system that has Docker installed, eliminating the “works on my machine” problem. Unlike traditional virtual machines, Docker provides an efficient way to deploy, scale, and manage applications without the overhead of a full operating system for each instance.

Docker vs. Virtual Machines (Hypervisors vs. Containers)

To understand Docker, it helps to compare it to traditional virtualization. A hypervisor, such as VirtualBox or VMware, creates and runs multiple virtual machines (VMs) on a host system. Each VM has its own full operating system, requiring significant resources. Docker, on the other hand, uses containers, which share the host OS kernel while isolating applications and dependencies. This makes containers much lighter and faster than VMs. Instead of booting an entire OS for each instance, Docker only runs the necessary processes, reducing resource consumption and startup times.

Feature        | Virtual Machines (Hypervisors) | Docker (Containers)
Boot Time      | Minutes                        | Seconds
Resource Usage | High (full OS per VM)          | Low (shared OS kernel)
Isolation      | Full OS isolation              | Process-level isolation
Portability    | Limited by OS compatibility    | Highly portable (works on any system with Docker)

Why Linux is Best for Docker

Linux is the preferred OS for Docker due to its native container support, better performance, and higher stability. Unlike Windows or macOS, which require a virtual machine to run Docker, Linux supports containers directly, reducing overhead and improving efficiency. It also provides more control over system resources, networking, and storage configurations. Most cloud providers and production environments use Linux for Docker, making it the best choice for development and deployment. If you want a faster, more reliable Docker experience, Linux is the way to go.

Installing Docker on Linux

For this example I will be providing instructions for Debian/Ubuntu distributions of Linux.
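Before Step 1, it can help to confirm your machine's release codename and CPU architecture, since the repository line in Step 4 fills in both automatically. A quick check (this assumes the lsb-release package is present, as it is on most Debian/Ubuntu installs):

lsb_release -cs              # prints the release codename, e.g. bookworm or noble
dpkg --print-architecture    # prints the package architecture, e.g. amd64 or arm64

If lsb_release is missing, sudo apt install -y lsb-release will add it.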
Step 1: Update Your System

Before installing Docker, update your package list:

sudo apt update && sudo apt upgrade -y

Step 2: Install Required Dependencies

sudo apt install -y ca-certificates curl gnupg

Step 3: Add Docker's Official GPG Key

sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo tee /etc/apt/keyrings/docker.asc > /dev/null
sudo chmod a+r /etc/apt/keyrings/docker.asc

Step 4: Add Docker's Repository

echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] \
https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update

Step 5: Install Docker

sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

Step 6: Start and Enable Docker

sudo systemctl start docker
sudo systemctl enable docker

Step 7: Verify the Installation

Run the following command to check if Docker is installed and running:

docker --version

Test with a simple container:

sudo docker run hello-world

docker compose vs. docker run

If you're new to Docker, you might have seen people using long docker run commands to start containers. While this works, it's not the best way to manage Docker applications, especially if you're running multiple containers or want to easily share your setup with others. Instead, Docker provides a better method: YAML configuration files.

Here's why a YAML configuration file is better practice:

Easier Deployment – A single .yml file allows you to quickly set up and tear down containers without needing to remember long docker run commands.
Better Maintainability – If you need to modify configurations, it's easier to edit a .yml file than to rewrite commands.
Multi-Container Management – Docker Compose allows you to define multiple containers in one file and start them all at once.
Portability – You can share a .yml file with others, allowing them to deploy the same environment easily.
Persistent Storage & Networking – It makes it simple to configure volumes, networks, and dependencies between containers.

Here is an example of a docker run command that starts an nginx web server:

docker run -d -p 8080:80 --name myweb nginx:latest

In this example, -d means detached, so the container's logs do not take over the terminal you run the command in. The -p 8080:80 flag maps port 8080 on the host to port 80 inside the container, where nginx listens. The --name flag makes the container easier to identify when running commands like:

docker ps

The docker ps command lists all of the containers that are currently running.

As previously mentioned, docker run can be great for testing and temporary setups; however, it's generally better practice to define the container in a YAML file. Here is an example of a docker-compose.yml file that runs the same server with the same settings:

services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
    restart: unless-stopped

With restart: unless-stopped, the container restarts automatically if it crashes, and it also comes back after a host reboot unless you had manually stopped it first. If you want the container to restart no matter what, even if it was stopped before the Docker daemon restarted, change the restart policy to always. Your file would then look like:

services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
    restart: always

For the rest of this tutorial we will be using a YAML file.
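One step the examples above do not spell out is actually launching a compose file. Assuming the file is saved as docker-compose.yml in the current directory and the Docker Compose plugin from Step 5 is installed, the usual workflow looks roughly like this:

docker compose up -d             # create and start the containers in the background
docker compose ps                # list the containers managed by this file
docker compose logs -f web       # follow the logs of the web service
docker compose down              # stop and remove the containers (named volumes are kept)

You can rerun docker compose up -d after editing the file; only services whose configuration changed are recreated.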
Why Use Docker Networks?

Docker containers can only talk to each other if they are attached to the same network. If you define a custom network in your docker-compose.yml file, containers can easily find and communicate with each other using their service names instead of IP addresses. Here is an example of a YAML file that uses Docker networks:

services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
    networks:
      - mynetwork
  database:
    image: mysql:latest
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
    networks:
      - mynetwork

networks:
  mynetwork:
    driver: bridge

Breaking It Down:

The bridge driver is used to create an isolated virtual network.
The networks section at the bottom defines a custom network called mynetwork.
Each service (web and database) is assigned to mynetwork, allowing them to communicate.

You will also notice that the MySQL root password now uses the ${MYSQL_ROOT_PASSWORD} format, which pulls the value from a .env file holding your login credentials. You can create a .env file by running this command in a terminal:

sudo nano .env

.env files are formatted as follows:

MYSQL_ROOT_PASSWORD=examplepassword
DB_USER=admin
DB_NAME=mydatabase

Key Benefits of Using networks

Better Security – Containers in the same network can talk to each other, while others cannot.
Easier Communication – Containers can reference each other by name instead of IP addresses.
Scalability – You can add more services to the network without reconfiguring connections.
Portability – The entire setup is defined in YAML and can be deployed anywhere.

Using the networks tag in Docker Compose is a best practice for managing multi-container applications.

Conclusion

Docker is a powerful tool that simplifies application deployment by using lightweight containers instead of traditional virtual machines. We covered the basics of how Docker works, why Linux is the preferred environment, how to install it, and the advantages of using docker-compose.yml files for managing multi-container setups. By leveraging Docker, you can create scalable, portable, and easily deployable applications with minimal effort. Whether you're hosting a website, setting up a database, or running a cloud service, Docker makes development and deployment more efficient.

Thank you for reading! I hope this guide was helpful. If you have any questions or suggestions for improvement, feel free to leave a comment. Happy containerizing!