
How I Built a Local AI Hub Using Free and Open Source Software on My Old Mac Mini

I’m going to tell you something that would have sounded absolutely insane five years ago: I’m running artificial intelligence on a computer the size of a lunch box, it works offline, my data never leaves my house, and it costs me nothing beyond the electricity to keep it running.

No monthly subscription. No API fees. No sending my private documents to some server farm in Virginia. Just me, a Mac Mini M1, and a free, open-source tool called Ollama that has quietly become one of the most important pieces of software I’ve used — and I say that as someone who has been reviewing software on this site since 2007.

If you’ve been curious about running AI locally but thought you needed a $5,000 GPU rig and a computer science degree, this post is for you. I’m going to walk you through exactly how I set up my local AI hub, what I use it for, and why I think every tech enthusiast should consider doing the same.


Why Local AI? Why Now?

Let me give you some context. Like most people, I’ve been using cloud-based AI tools — ChatGPT, Claude, Gemini — and they’re incredible. But there are situations where sending your data to a cloud service isn’t ideal.

When I’m working on business documents for my offline ventures, I don’t necessarily want those financial projections living on someone else’s server. When I’m brainstorming ideas for my apps, I’d rather keep those early concepts private. When I’m processing data for my web projects, I want the flexibility to run queries without worrying about rate limits, usage caps, or monthly bills that scale with every prompt.

The privacy argument alone is compelling, but it’s not the only reason. Local AI is also fast — there’s no network latency, no waiting for servers, no “we’re experiencing high demand” messages. It works offline, which means I can use it on a plane, in a coffee shop with terrible Wi-Fi, or during one of those delightful Philippine internet outages that build character.

And perhaps most importantly for a guy who has been writing about free and open-source software for nearly two decades: local AI puts the power back in your hands. You own the hardware, you own the model weights, and there are no terms of service to violate. That’s the open-source philosophy I’ve been preaching since my Linux days, applied to the most transformative technology of our generation.


What You Need (Less Than You Think)

Here’s my setup:

Hardware: Mac Mini M1 (8GB Unified Memory)

That’s it. That’s the hardware. No dedicated GPU. No server rack. No liquid cooling. An old Mac Mini M1 — the base model with just 8GB of RAM — that I bought a few years ago and that sits quietly on my living room table consuming roughly the same power as a light bulb.

Now, let me be upfront: 8GB is the bare minimum for local AI. It’s not ideal. After macOS takes its share of memory (roughly 3-4GB for the operating system and background processes), you’re left with about 4-5GB of usable space for AI models. That means the popular 7B and 8B parameter models that most guides recommend are either too tight to run comfortably or will cause constant memory pressure and slowdowns on my machine. I learned this the hard way after watching my Mac Mini struggle and swap memory like it was reliving its Intel days.

But here’s the thing — you don’t need the biggest models to get genuinely useful results. The smaller models, in the 1B to 3.8B parameter range, run beautifully on 8GB machines. They’re fast, responsive, and for many everyday tasks, surprisingly capable. Are they as good as GPT-4 or Claude? Not even close. But for quick drafts, summarization, code snippets, brainstorming, and general Q&A, they get the job done without sending a single byte of your data to the cloud.

The secret sauce that makes even my base model Mac Mini viable is Apple Silicon’s unified memory architecture. Unlike traditional PCs where the CPU and GPU have separate memory pools, the M1’s unified memory means the GPU can directly access whatever RAM is available for AI inference. Even with just 8GB, the M1’s efficiency means small models can generate tokens at 30-60+ tokens per second — fast enough that responses feel nearly instant.

Could you do this on a Windows PC or a Linux machine? Absolutely. If you have a desktop with an NVIDIA GPU (even a used RTX 3060 for around $150), you’d get excellent performance with even bigger models. But for Mac users with older Apple Silicon hardware gathering dust, Ollama gives that machine a second life.


Minimum specs to get started:

Any Apple Silicon Mac (M1 or newer) with 8GB of RAM can run small models (1B-3.8B parameters). Think of these as quick, lightweight assistants good for summarization, simple coding help, and general Q&A. With 16GB, things get significantly better — you can comfortably run 7B-8B models at good speed and even some 14B models. With 32GB or more, you’re in serious territory — running models that rival cloud-based services for many tasks.

On the PC side, 16GB of system RAM plus a GPU with at least 8GB of VRAM is the sweet spot. More VRAM means bigger, better models.


Installing Ollama: Easier Than Installing Most Apps

Ollama is the foundation of my local AI setup. It’s a free and open-source tool that handles downloading, managing, and running large language models with absurd simplicity. If you can type a command into a terminal, you can run local AI.


Step 1: Install Ollama

On Mac, you have two options. The easiest is to download the app directly from ollama.com. Download the DMG, drag it to Applications, and launch. Done.

If you prefer Homebrew (and if you’re a developer, you probably do):

    brew install ollama

On Linux:

    curl -fsSL https://ollama.com/install.sh | sh

On Windows, simply download the installer from the Ollama website.


That’s the entire installation. No Python environment management. No dependency hell. No CUDA driver nightmares. It just works.


Step 2: Pull Your First Model

Open your terminal and type:

    ollama pull llama3.2:3b

This downloads Meta’s Llama 3.2 3B model — one of the best small open-source language models available and the sweet spot for 8GB machines. It’s about 2GB on disk and runs comfortably without choking your system.

If you want something even lighter to start with:

    ollama pull phi4-mini

Microsoft’s Phi-4 Mini (3.8B parameters) is another excellent choice for 8GB systems — strong instruction following and surprisingly good at code for its size.


Step 3: Start Chatting

    ollama run llama3.2:3b


That’s it. You now have a local AI assistant running entirely on your machine. Ask it questions, have it summarize text, help with code, draft emails — whatever you need. Type your prompt, get a response. No account required. No internet required after the initial download.
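Under the hood, `ollama run` talks to a local server that also exposes a REST API on port 11434, which is handy for scripting. Here’s a minimal sketch of a request payload — the `curl` line assumes the Ollama server is running locally, so it’s left commented out:

```shell
# JSON payload for Ollama's local /api/generate endpoint
payload='{"model": "llama3.2:3b", "prompt": "Explain unified memory in one sentence.", "stream": false}'
echo "$payload"
# Send it to the local server (requires Ollama to be running):
# curl -s http://localhost:11434/api/generate -d "$payload"
```

The same API is what tools like Open WebUI use behind the scenes, so anything you can do in the chat interface you can also automate.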


The first time I ran this and got a coherent, helpful response from a model running entirely on my Mac Mini, I had the same feeling I had back in 2007 when I first booted Ubuntu and realized an entire operating system could be free. That feeling of “wait, this actually works, and it’s free?” — that’s the open-source magic I’ve been chasing for nearly 20 years.


The Models I Actually Use on 8GB

Ollama gives you access to a growing library of models. Here are the ones that work well on my 8GB Mac Mini and what I use them for:

1. Llama 3.2 3B — My go-to daily driver. This is the model I reach for most often. For a 3B model, the quality is genuinely impressive — it handles summarization, drafting, general Q&A, and brainstorming surprisingly well. On my M1, it runs at roughly 30-50 tokens per second, which means responses feel nearly instant. It’s the perfect balance of quality and speed for an 8GB machine.

2. Phi-4 Mini (3.8B) — My coding companion. Microsoft’s Phi-4 Mini punches well above its weight for code generation and technical tasks. When I’m working on my iOS apps or web projects and need a quick SwiftUI snippet, JSON formatting help, or a debugging nudge, this model delivers at around 15-20 tokens per second. It won’t replace Claude for complex architecture decisions, but for quick code help during focused development sessions, it’s remarkably useful.

3. Gemma 2B — My speedster for trivial tasks. Google’s smallest Gemma model is ultra-lightweight and blazing fast. I use it for simple reformatting, quick translations, and tasks where I just need a fast answer and don’t care about nuance. Think of it as the Puppy Linux of language models — tiny, fast, and gets the basics done.

4. Llama 3.2 1B — My offline emergency model. At just around 1.3GB, this model loads almost instantly and runs so fast it feels like autocomplete. The quality is basic, but when I need something working on minimal resources or want to run alongside other applications without memory pressure, it’s there.


Here’s the honest truth about running local AI on 8GB: you’re operating within constraints. Multi-turn conversations get noticeably weaker after several back-and-forth exchanges because the limited memory means shorter context windows. Complex reasoning tasks will sometimes produce mediocre results. And you’ll occasionally notice responses that are clearly “smaller model quality” compared to what you get from cloud services.

But for single-turn tasks — summarize this, draft that, reformat this JSON, explain this concept, help me with this code snippet — these small models are fast, private, and genuinely useful. It’s like having a competent junior assistant who works for free and never sleeps.

To switch between models, I just run a different command. Different models for different jobs — just like how I used to keep different Linux distros for different purposes back in my distro-hopping days.


Adding a Proper Interface: Open WebUI

Running Ollama from the terminal is fine for quick tasks, but for extended sessions, it gets clunky. You lose chat history, you can’t easily compare models, and scrolling through terminal output isn’t exactly a delightful user experience.

Enter Open WebUI — a free, open-source web interface that connects to Ollama and gives you a ChatGPT-like experience running entirely on your local machine.

If you have Docker installed, the setup is one command:


    docker run -d -p 3000:8080 \
      -v open-webui:/app/backend/data \
      --name open-webui --restart always \
      ghcr.io/open-webui/open-webui:ollama


Open your browser, go to `http://localhost:3000`, create an account (this is local — nobody else sees it), and you’re in. Every model you’ve pulled with Ollama automatically appears in the interface.

Open WebUI is where the magic really happens. You get persistent chat history so you can pick up conversations where you left off. You can switch models mid-conversation to compare outputs. There are system prompt templates, temperature controls, and per-chat configuration settings. You can upload documents and use RAG (Retrieval Augmented Generation) to ask questions about your own files — PDFs, text documents, code files. It even supports web search integration, image generation, and voice input.

The interface looks and feels remarkably similar to ChatGPT, except everything is running on your own hardware. No cloud. No subscription. No data leaving your network.

I access Open WebUI from my Apple devices like my MacBook, my iPhone, and my iPad — all pointing to the Mac Mini sitting quietly on my living room table. It’s like having a private ChatGPT server for my household.


My Actual Workflows

Let me get specific about how I use this setup in real life, because “run AI locally” sounds cool in theory but means nothing without practical application.

1. For my blog (this site). When I’m researching topics for TechSource, I’ll dump my raw notes into a chat, ask the local model to identify the most interesting angles, suggest outlines, or flag gaps in my research. The model doesn’t write the posts for me — my writing voice is my own — but it’s an incredibly useful brainstorming partner.

2. For my iOS apps. I use Phi-4 Mini for quick SwiftUI help, JSON formatting, and debugging. Having a coding assistant that responds in under a second with no internet dependency is genuinely useful during focused development sessions.

3. For my offline businesses. I process business documents, draft communications, and analyze data without any of that information touching a third-party server. This is the use case where local AI’s privacy advantage matters most.

4. For website automation. I’ve built an automated pipeline that scrapes information from various sources and publishes curated content to my niche site. Ollama plays a role in processing and formatting that data. Having this run locally means the pipeline works even if my internet connection is spotty.

5. For learning. I feed technical articles, documentation, and research papers into the RAG system and then have conversations with the content. It’s like having a study partner who has perfect recall of everything you’ve uploaded.


How Does Local AI on 8GB Compare to ChatGPT and Claude?

I’m going to be honest with you, because that’s what TechSource has always been about.

On an 8GB machine running 3B models, local AI handles roughly 60-70% of the simple tasks I’d otherwise use cloud AI for. Summarization, quick drafts, code snippets, reformatting, basic Q&A — the small models get these done fast and privately.

For the remaining 30-40% — complex multi-step reasoning, nuanced creative writing, deep code architecture analysis, long conversations that require extensive context, and tasks requiring broad world knowledge — cloud models like Claude and GPT-4 are in a completely different league. There’s no sugarcoating this. My 3B model running locally isn’t competing with a 400B+ parameter model running on a data center full of A100 GPUs. That would be like comparing my Raspberry Pi to a supercomputer.

But that’s not the point. My approach is hybrid: local for privacy-sensitive work, quick tasks, and offline use. Cloud for complex, high-stakes tasks where quality matters more than privacy. The two complement each other perfectly. And if I ever upgrade to a Mac with 16GB or more RAM, those 7B-8B models become available and the quality gap narrows significantly.


What This Costs

Let’s do the math, because this is one of my favorite parts.

My setup costs:

Mac Mini M1 8GB (already owned; it had been gathering dust in a drawer): $0 additional cost. If buying used today, base M1 Mac Minis go for roughly $250-350 on resale markets — they’ve depreciated significantly, which makes them incredible value for a dedicated local AI server.

Ollama: Free, open-source.

Open WebUI: Free, open-source.

All AI models: Free, open-source.

Electricity: My Mac Mini draws about 20-39 watts during AI inference. Running it 8 hours a day costs roughly $2-3 per month in electricity.

Total monthly cost: About $3.
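That estimate is quick arithmetic. Here’s the back-of-the-envelope version — the $0.25/kWh rate is an assumption, so plug in your own utility’s number:

```shell
# Rough monthly electricity cost: watts * hours/day * days / 1000 = kWh
awk 'BEGIN {
  watts = 35; hours = 8; days = 30; rate = 0.25   # assumed $/kWh
  kwh = watts * hours * days / 1000               # 8.4 kWh per month
  printf "%.1f kWh/month ~ $%.2f\n", kwh, kwh * rate
}'
```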

For comparison, ChatGPT Plus is $20/month. Claude Pro is $20/month. Running API calls at scale can easily cost $50-100+ per month depending on usage.

Even with the limitations of 8GB, my local setup handles enough daily tasks to reduce my reliance on paid subscriptions. Over a year, that adds up to meaningful savings — while giving me unlimited usage, complete privacy, and offline capability.


Tips I’ve Learned the Hard Way

After months of running this setup daily on constrained hardware, here are some practical lessons:

1. RAM is king. No, seriously. On 8GB, every megabyte counts. Close unnecessary applications before running models. Safari with 20 tabs open and Xcode running simultaneously will leave almost nothing for Ollama. I’ve learned to treat my AI sessions like focused work blocks — close everything else, then chat.

2. Smaller models, faster results. Don’t try to squeeze a 7B model onto an 8GB machine. I tried. It technically loads, but the constant memory swapping makes it painfully slow and the system becomes unusable for anything else. Stick to 3B and under for a smooth experience. A fast 3B model that responds instantly is infinitely more useful than a struggling 7B model that takes 10 seconds per response while your fans sound like a jet engine.

3. The 60-70% rule. Your model file should be no more than 60-70% of your total available memory (after macOS takes its share). On 8GB, that means model files of about 2-3GB maximum. This leaves enough room for the operating system, the context window (KV cache), and Ollama’s overhead.
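The rule is simple arithmetic. A quick sketch, assuming roughly 4GB is left over after macOS takes its share:

```shell
# 60-70% rule: max model file size vs. usable memory after macOS overhead
total_gb=8
os_overhead_gb=4                          # rough macOS + background usage
avail_mb=$(( (total_gb - os_overhead_gb) * 1024 ))
max_model_mb=$(( avail_mb * 70 / 100 ))   # upper end of the rule
echo "Usable: ${avail_mb} MB, max model file: ~${max_model_mb} MB"
```

That lands at just under 3GB — exactly why the 2GB-ish 3B models are the sweet spot on this machine.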

4. Set Ollama as a network service. By default, Ollama only accepts connections from the local machine. If you want other devices on your network to access it (like I do with my MacBook and iPad), set the environment variable `OLLAMA_HOST=0.0.0.0` to allow connections from your local network. Just don’t expose it to the internet without authentication.
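Setting the variable looks like this. The `launchctl` line is the macOS-specific way to make it stick for the Ollama app itself (per Ollama’s docs), so it’s shown as a comment here:

```shell
# Bind Ollama to all interfaces so other devices on the LAN can reach it
export OLLAMA_HOST="0.0.0.0"
echo "OLLAMA_HOST=$OLLAMA_HOST"
# On macOS, to persist it for the Ollama.app process:
# launchctl setenv OLLAMA_HOST "0.0.0.0"
# Then restart Ollama and point other devices at http://<mac-mini-ip>:11434
```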

5. Different models for different jobs. I keep three to four small models installed and use them contextually. Phi-4 Mini for code, Llama 3.2 3B for general tasks, and Gemma 2B for quick throwaway queries. Specialization matters, even at the small model tier.

6. Keep an eye on model updates. The open-source AI community moves incredibly fast. Small models are improving at a staggering rate — the best 3B model today is dramatically better than the best 3B model from even six months ago. Check Ollama’s library periodically for new models. Pulling an update is just `ollama pull model-name`.

7. Plan your upgrade path. If local AI clicks for you (and I think it will), the single best upgrade you can make is more RAM. A used Mac Mini M1 with 16GB runs 7B-8B models comfortably and the quality jump from 3B to 8B is enormous. Consider it the best investment in your local AI future.


The Bigger Picture: This Is the Open-Source Revolution, Again

I started this site in 2007 writing about Linux because I believed free and open-source software could change the world. It did — Linux now powers 100% of the world’s top 500 supercomputers, 77% of web servers, and roughly half of all cloud workloads.

Now I’m watching the same thing happen with AI. Open-source models like Llama, Mistral, Qwen, Phi, Gemma, and DeepSeek are making AI accessible to anyone with a decent computer. Tools like Ollama and Open WebUI are making it easy. The barriers are falling fast.

A few years ago, running a useful AI model required cloud infrastructure and enterprise budgets. Today, you can do it on an old Mac Mini with 8GB of RAM that costs less than a pair of sneakers on the secondhand market. That trajectory reminds me of the early days of Linux, when something that was once the domain of server rooms gradually became something anyone could run on their desktop.

The fact that I can run a functional AI assistant on the most basic Apple Silicon Mac — the cheapest, lowest-spec model they ever made with an M1 chip — tells you everything about where this technology is headed. If this is what’s possible on 8GB today, imagine what the next generation of small models will do on the same hardware a year from now.

If you’ve been reading TechSource since the Ubuntu days, you already understand why this matters. The same principles that made open-source software transformative — transparency, control, community, freedom — are now being applied to artificial intelligence. And just like with Linux, you don’t need anyone’s permission to get started.

Pull up a terminal. Install Ollama. Run your first model. Welcome to the revolution. It’s local, it’s private, it’s free, and it could talk to your Linux-powered robot soon :) 

For those of you who are curious, below is a photo of my old Mac Mini (named Murdoc) lying on my living room table, looking like a metal brick that does nothing:

Mac Mini (Murdoc)


— Jun



Health Is Wealth: Why I Chose a Smartwatch Over a Rolex

A few years ago, a friend of mine bought a Rolex Submariner. It cost him roughly the same as a decent used car. He showed it to me with the kind of pride usually reserved for newborn babies and championship trophies. It was beautiful, I’ll admit. The weight of it, the way it caught the light, the satisfying click of the rotating bezel — there’s a reason people have been obsessed with luxury watches for centuries.

He then asked me what I was wearing on my wrist. I looked down at my Garmin Fenix strapped on like a chunky piece of tactical gear and said, “This thing told me my VO2 max dropped because I skipped leg day.”

He wasn’t impressed. But here’s the thing — I wasn’t trying to impress anyone. I was trying to stay alive and healthy. And between his $10,000+ timepiece and my sub-$1,000 smartwatch, one of us was getting real-time heart rate data, sleep quality scores, blood oxygen readings, training load analysis, and a gentle but firm nudge to stop sitting on the couch.

No disrespect to the Submariner. But it can’t do any of that. It sits there, looking gorgeous, being expensive, and telling you what time it is — which, let’s be honest, your phone already does for free.


Health Is Wealth (And I Mean That Literally)

I know “health is wealth” sounds like something your tita would embroider on a throw pillow. But after the last few years of my life, I don’t just believe it — I’ve lived it.

During my writing hiatus from this site, something shifted in me. I got serious about fitness. Not “I should probably walk more” serious. I mean genuinely, deeply, borderline-obsessively serious. I started running. Not the casual jog-around-the-block kind of running. The kind where you wake up at 4 AM, lace up in the dark, and question every life decision that led you to this moment — only to do it again the next day because you’re completely hooked.

Since my last post on this site, I have completed two full marathons and one ultramarathon.

Let me repeat that for the people in the back, because honestly, I still can’t believe it myself. An ultramarathon. That’s anything beyond the standard 42.195 kilometers of a regular marathon. My legs have covered distances that would make a GPS tracker file a formal complaint.

If you had told the 2019 version of me — the guy who wrote about smartwatches while sitting comfortably at his desk — that he would one day run beyond marathon distance, he would have laughed, closed his laptop, and gone back to reviewing Raspberry Pi accessories.

But here I am. And my smartwatches were there for every single kilometer.


My Smartwatch Journey: The Sequel

Long-time readers of TechSource might remember my 2019 article, The Essential Smartwatch: From Motorola MOTOACTV to Apple Watch, where I traced my wearable history from that bulky but lovable Motorola MOTOACTV ($300, shattered after one waist-high drop — rest in peace) to the original Pebble (great battery, terrible Bluetooth connection) to the Apple Watch Series 2 Nike+ that became my daily companion.

In that article, I wrote: *“I will probably stick to wearing smartwatches until my heart rate per minute goes zero.”*

Well, I’m still here, my heart rate is very much not zero (especially during hill repeats), and my smartwatch collection has evolved significantly since 2019. My current daily rotation consists of two watches that represent the best of two very different philosophies in wearable tech:

The Garmin Fenix is my dedicated running and outdoor watch. If the Apple Watch is a Swiss Army knife, the Garmin Fenix is a machete that also happens to have a heart rate sensor. It’s built for endurance athletes who need their watch to last longer than a weekend camping trip. Battery life? We’re talking weeks, not hours. During my ultramarathon, this thing tracked every step, every elevation change, every heart rate spike when I questioned why I voluntarily signed up for this — and it still had juice left at the finish line. The GPS accuracy is surgical. The training metrics (VO2 max, training load, recovery time, race predictor) have genuinely helped me become a better runner. It’s not the prettiest watch on the shelf, but when you’re 35 kilometers into a race and need to know if you’re about to bonk, aesthetics are the last thing on your mind.

The Apple Watch Ultra is my everyday smartwatch and my second running companion. Apple basically looked at the regular Apple Watch and said, “What if we made this, but for people who do extreme things?” The Ultra has the best display I’ve ever seen on a smartwatch — bright enough to read in direct Philippine sunlight, which is saying something. The health features are comprehensive: ECG, blood oxygen monitoring, sleep apnea detection, irregular heart rhythm notifications, and now high blood pressure alerts. The integration with my iPhone is seamless in a way that only Apple can pull off. I use it for notifications, calls, Apple Pay, music on my AirPods, meditation with the Breathe app, and yes — running. Its GPS has gotten remarkably accurate, and the battery life, while nowhere near the Garmin, has improved enough that I can get through a marathon without it dying on me.

Between the two, I’ve found the perfect combo. Garmin for serious training and races. Apple Watch Ultra for everything else and casual runs. It’s like having a pickup truck and a sedan — different tools for different jobs, both essential.


What Your Rolex Can’t Tell You

Let me be clear: I’m not here to trash luxury watches. They are works of art. The craftsmanship of a Patek Philippe or an Omega Speedmaster is genuinely awe-inspiring. The mechanical movements, the hand-finished components, the heritage — there’s a reason the luxury watch market is worth over $33 billion and growing. If you can afford one and it brings you joy, by all means, wear it proudly.

But let’s have an honest conversation about value.

A Rolex Submariner costs anywhere from $9,000 to $15,000 depending on the model and availability (good luck getting one without a waitlist, by the way). For that price, you get an exquisitely crafted timepiece that tells you the time, the date, and how deep underwater you are. That’s essentially it. It looks incredible doing those three things, but functionally, that’s the extent of it.

Now consider what a quality smartwatch under $1,000 can do: continuous heart rate monitoring that can detect atrial fibrillation before you even feel symptoms. Blood oxygen readings that might catch respiratory issues early. Sleep tracking that reveals patterns you never knew existed. ECG readings right from your wrist. Training load analysis that prevents overtraining injuries. GPS tracking accurate enough for navigation in remote areas. Fall detection that automatically calls emergency services. Satellite SOS messaging when you’re off the grid. Blood pressure trend monitoring. Stress tracking with guided breathing exercises. And, oh yeah — it also tells you the time.

The smartwatch market has reached roughly 455 million users worldwide. There’s a reason for that. These aren’t just gadgets anymore. They’re health instruments that happen to go on your wrist.

I’ve read stories of people whose Apple Watch detected an irregular heartbeat and sent them to the doctor, where they discovered a serious cardiac condition they had no idea about. That’s not a hypothetical — it’s happening regularly enough that cardiologists are starting to take smartwatch data seriously. There are runners who caught early signs of overtraining syndrome because their Garmin showed declining HRV trends over weeks. There are people with sleep apnea who had no clue until their watch flagged it.

Your Rolex will never do any of that. It will sit beautifully on your wrist, hold its value, maybe even appreciate over time — but it will never tap you on the wrist and say, “Hey, your heart just did something weird. You should get that checked out.”


The Marathon Runner’s Perspective

Running marathons and an ultramarathon fundamentally changed how I think about what I wear on my wrist. When you’re training for distances that take your body to its absolute limit, data isn’t a luxury — it’s a necessity.

During my marathon training, my Garmin Fenix became my coach, my nutritionist’s assistant, and my ego-checker all in one. The training load feature told me when I was pushing too hard (often) and when I could push harder (rarely, because I was already pushing too hard). The recovery advisor gave me honest assessments of when I was ready for another hard session. The race predictor — while sometimes hilariously optimistic — gave me target paces to work toward.

During the actual races, having real-time data was invaluable. Heart rate zones kept me from going out too fast in the early kilometers (the number one mistake new marathoners make, and I speak from painful experience). Pace tracking helped me maintain consistency. And the GPS breadcrumb trail meant I always knew exactly where I was on the course, which is surprisingly reassuring when you’re deep into kilometer 38 and your brain starts suggesting that maybe you took a wrong turn and this road actually leads to nowhere.

My Apple Watch Ultra served double duty as my everyday health monitor. The sleep tracking helped me dial in my recovery during high-volume training weeks. The HRV trends gave me a general sense of whether my body was adapting or just surviving. And the heart health notifications gave me peace of mind that all this extreme exercise wasn’t secretly wrecking my cardiovascular system (spoiler: it wasn’t — running is good for you, in case you needed another reason).

Could I have run those marathons without a smartwatch? Of course. People ran marathons for decades before wearable tech existed. But would I have trained as efficiently, recovered as smartly, or avoided as many potential injuries? Absolutely not.


The Real Flex in 2026

There’s been a cultural shift happening, and I think it’s worth talking about. For decades, the ultimate wrist flex was a luxury mechanical watch. Wearing a Rolex or an AP Royal Oak signaled success, taste, and financial achievement. And to some extent, that’s still true in certain circles.

But increasingly, especially among younger professionals and the health-conscious crowd, the flex is shifting. Wearing a Garmin Fenix or an Apple Watch Ultra increasingly signals something different: that you take your health seriously, that you’re active, that you value function over fashion, and that you’re the kind of person who runs ultramarathons on weekends instead of just brunch.

I’m not saying one is better than the other as a status symbol. I’m saying the definition of “valuable” on your wrist is expanding. A $15,000 watch that holds its resale value is valuable in one sense. A $900 watch that catches a heart condition early or helps you train for a marathon without injury is valuable in a completely different — and arguably more important — sense.

Besides, luxury watchmakers are clearly paying attention. TAG Heuer has their Connected line. Louis Vuitton made a smartwatch. Even the traditional watch industry recognizes that people increasingly want their wrist wear to do more than look pretty and tick. The smartwatch market is projected to be worth over $218 billion by 2033. That’s not a fad. That’s a fundamental shift in what people expect from a timepiece.


My Wishlist for the Perfect Running Smartwatch

Since I’m a part-time tech blogger and it’s basically my civic duty to complain about things I want improved, here’s what I’m still waiting for:

1. Longer battery life on Apple Watch. 

The Ultra gets about 36-42 hours of normal use and roughly 14 hours with continuous GPS. For a marathon, that’s fine. For an ultra? You’re sweating — both literally and about battery percentage. Garmin’s weeks-long battery life puts Apple to shame here. The day Apple Watch hits even 5-day battery life, the Garmin might get nervous.

2. Better smartwatch features on Garmin.

Garmin’s fitness tracking is world-class, but its smartwatch experience still feels like it’s from 2019. The app ecosystem is limited, notifications are basic, and the touchscreen responsiveness could use work. Garmin knows it’s a sports watch first and a smartwatch second, but closing that gap would make it unstoppable.

3. Non-invasive glucose monitoring. 

This is the holy grail of wearable health tech. Several companies are working on it, and rumors have circulated about both Apple and Samsung exploring this. For the millions of people managing diabetes — and for athletes who want to optimize fueling during endurance events — real-time glucose data on the wrist would be revolutionary.

4. Better integration between watch ecosystems. 

I run two watches because neither does everything perfectly. In a dream world, the Garmin’s battery life and training metrics would merge with the Apple Watch’s smart features and health sensors into one device. Until then, I’ll keep looking like the tech equivalent of someone who carries two phones.


So, Should You Buy a Rolex or a Smartwatch?

If you have the budget for a Rolex and you genuinely love horology, buy the Rolex. Life’s too short to not enjoy beautiful things, and a well-made mechanical watch is undeniably a work of art. Just know what you’re getting: a gorgeous conversation starter that tells time.

But if you’re asking me what’s more *valuable* — as in, what provides more tangible benefit to your actual life — the answer is the smartwatch, and it’s not even close. For under $1,000, you get a personal health monitor, fitness coach, communication device, navigation tool, and potential life-saver strapped to your wrist. That’s not marketing hype. That’s what these devices actually do, every single day.

Health is wealth. I didn’t fully understand that until I started running seriously, started pushing my body to its limits, and started relying on the data from my wrist to do it safely and effectively. My Garmin Fenix and Apple Watch Ultra have been with me through training runs at dawn, marathon finish lines, and one very long ultramarathon that I’m still not entirely sure I completed voluntarily.

My traditional watches? They’re still in my closet. Right where I left them in 2011 when I bought my first MOTOACTV. They look nice. They don’t do anything.

In the old article, I wrote that I’d stick to wearing smartwatches until my heart rate hits zero. After two marathons and an ultra, that statement is even more true today. Although, if my smartwatch has anything to say about it, that heart rate is going to stay well above zero for a very long time.

Now if you’ll excuse me, I have a training run to get to. My Garmin is already judging me for sitting this long.


— Jun


The State of the Linux Desktop in 2026: A Love Letter from a Prodigal Penguin

Let me start with a confession. I haven’t used Linux as my daily desktop operating system in roughly a decade.

I know. Take a moment. Breathe. For those of you who have been reading TechSource since the Ubuntu and Compiz days, that sentence may have stung. This is, after all, the same site that published 587 posts tagged “linux” — from distro reviews and desktop customization showcases to that infamous Distrowar series where I played judge and jury as two distributions fought for supremacy like gladiators in a nerdy arena. I reviewed Linux Mint when it was called Cassandra. I compared Ubuntu to Windows 8 and declared the pangolin the winner. I wrote about why the Linux desktop was “not winning” back in 2011. I showcased 20 awesome Linux desktop customization screenshots that made Digg’s front page. I even ran Linux on my MacBook Pro, because I enjoyed chaos.

And then, somewhere along the way, I drifted. iOS app development pulled me deep into the Apple ecosystem. My MacBook became my workhorse. Xcode replaced my terminal. Swift replaced Python as my go-to language. And before I knew it, the guy who used to argue passionately about GNOME vs. KDE was now debating whether to use SwiftUI or UIKit.

So here I am in 2026, looking at the Linux desktop landscape after years of being away, and I have to say — I barely recognize it. In the best possible way.


What I Missed (And It’s a Lot)

The Linux desktop world I left behind was one where we were fighting for basic hardware compatibility, where gaming meant Wine hacks and prayer, where Wayland was a distant promise, and where the “Year of the Linux Desktop” was the eternal running joke that never stopped being funny because it never stopped being true.

Let me walk you through what changed while I was busy wrestling with Auto Layout constraints and App Store review guidelines.

1. The market share moved a lot

This is the big one. When I was actively blogging about Linux, desktop market share hovered stubbornly around 1-2%. Today? Linux sits at roughly 4.7% globally as of 2025, and in the United States it crossed the 5% mark for the first time in June 2025. India is leading the charge at over 16%. These numbers might look small compared to Windows, but for those of us who remember the days when Linux barely registered on the charts, this is genuinely remarkable — a roughly 70% increase in three years. The penguin isn’t just surviving anymore — it’s gaining massive ground.

2. Windows 10 hit end of life

Microsoft officially ended mainstream support for Windows 10 on October 14, 2025. This is huge for Linux because Windows 11’s hardware requirements (TPM 2.0, Secure Boot, specific CPU families) mean millions of perfectly functional computers suddenly can’t run the latest Windows. The choice became stark: buy new hardware, pay for Microsoft’s Extended Security Updates bridge, or install Linux. Campaigns like endof10.org popped up encouraging people to install Linux instead of throwing away working PCs. The environmental and economic argument for Linux has never been stronger.

3. Gaming on Linux went from joke to legitimate

If you told me in 2011 that a handheld gaming device running Linux would sell millions of units and fundamentally change how the industry thinks about Linux gaming, I would have assumed you’d been spending too much time in the Compiz settings. But that’s what Valve’s Steam Deck did. Running SteamOS (which is like Arch Linux wearing a nice suit), the Steam Deck proved that Linux could be a consumer gaming platform. Valve’s Proton compatibility layer now makes roughly 90% of Windows games playable on Linux. The latest Proton 10.0 is fixing games from Diablo 4 to God of War: Ragnarok on the Deck. At CES 2026, Lenovo announced a Legion Go 2 “Powered by SteamOS.” Other OEMs are following. Linux gaming isn’t just a niche hobby anymore — it’s a legitimate platform that publishers have to take seriously.

4. Wayland finally won

Remember when Wayland was that “next-generation display server” that everyone talked about but nobody used? Now it’s here, and it’s taking over. Ubuntu has been defaulting to Wayland since 2021, and as of Ubuntu 25.10, the X11 session has been removed for GNOME. The upcoming Ubuntu 26.04 LTS, shipping with GNOME 50, will be Wayland-native, with GNOME 50 removing the X11 backend entirely. The result? Better HiDPI support, less screen tearing, improved security, smoother fractional scaling, and the groundwork for features like HDR. Canonical is even working on improving NVIDIA Wayland performance for the next LTS release. For those of us who spent years dealing with X11 quirks, this transition feels historic.

5. Ubuntu is getting rewritten in Rust

Ubuntu 25.10 replaced the classic `sudo` command with `sudo-rs`, a Rust reimplementation designed to eliminate memory safety bugs that have plagued C-based tools for decades. Core command-line utilities like `ls`, `cp`, and `mv` are getting Rust-based replacements. For the majority of users, the change is invisible — everything works the same, but the underlying security is a lot stronger. It’s the boring-but-brilliant improvement that makes the whole ecosystem better.

6. The desktop environments matured beautifully

GNOME has evolved into a polished, cohesive desktop experience. KDE Plasma has become arguably the most customizable and feature-rich desktop environment on any platform. Linux Mint’s Cinnamon desktop keeps getting better for people who want a traditional Windows-like experience. And there are now even more options — Budgie is transitioning to Wayland with a lightweight wlroots-based compositor, and Fedora, openSUSE, and Pop!_OS all offer compelling desktop experiences. The fragmentation that I once wrote about as Linux’s biggest weakness has, in many ways, become its greatest strength. There is something for everyone now.

7. Governments are switching

Germany’s state of Schleswig-Holstein became the first European region to replace Microsoft tools with Linux and LibreOffice in public offices. France runs over 103,000 computers on GendBuntu, a custom Ubuntu distribution. Denmark announced a transition from Microsoft to open-source platforms. The EU is even considering an “EU-Linux” operating system for public administrations. Switzerland committed $231 million to build a national cloud service and mandated that government-developed software be released as open source. When governments start moving, the enterprise follows.


My History with the Linux Desktop

Reading through my old posts while preparing this article was a trip down memory lane that felt equal parts nostalgic and embarrassing. The internet never forgets, and neither does the Wayback Machine.

I started using Linux somewhere around 2005-2006, back when Ubuntu was young, brown-themed, and revolutionary because it shipped you free CDs in the mail. My first serious distro was Ubuntu Hoary Hedgehog (5.04), and I remember being very impressed that an operating system could be this customizable, this fast, and most importantly, this free.

From there, I became what the community affectionately calls a “distro hopper.” I tried everything. Ubuntu, Kubuntu, Xubuntu, Linux Mint, Fedora, openSUSE, PCLinuxOS, Mandriva, Arch, Debian, Puppy Linux, Slackware-based distros like Wolvix and NimbleX, and even oddities like SliTaz (the smallest desktop distro I’d ever seen at less than 30MB). I reviewed them, compared them, pitted them against each other in my Distrowar series, and argued about them in comment sections that sometimes ran into hundreds of passionate replies.

I wrote about why the Linux desktop wasn’t winning (it was the ADHD-like lack of focus, I argued). I wrote about how dark mode on macOS was something Linux had done years earlier (because we had). I compiled lists of awesome desktop customization screenshots that proved Linux could look stunning. I tested lightweight desktop environments that most people had never heard of, from EDE to Project Looking Glass to XFast. I even wrote about the “anatomy of a crappy Linux distro” — twelve signs that a distribution was garbage — and it became one of our most popular and controversial posts.

Those years of distro hopping and writing about Linux taught me more about computing than any formal education ever could. I learned about partitioning, bootloaders, kernel modules, package management, networking, scripting, and the art of troubleshooting hardware that refused to cooperate. More than the technical skills, Linux taught me about community, about building something collectively, and about the power of open source as a philosophy.

Then life happened. I got into iOS development around 2013, and macOS became my daily driver out of necessity. The irony of a former Linux evangelist becoming an Apple developer isn’t lost on me. Trust me, I’ve heard the jokes.


What Coming Back Feels Like

Looking at the Linux desktop today as someone who’s been away feels like visiting your hometown after a decade and finding that the scrappy neighborhood kid is now running for mayor. Everything is familiar yet dramatically improved.

The installation process, which I used to dedicate entire blog posts to explaining step by step, is now embarrassingly easy. Ubuntu’s installer is beautiful and streamlined. Linux Mint practically holds your hand. Even Fedora, which used to have a learning curve, is smooth as butter. The days of praying your Wi-Fi card would be detected are mostly over (though I hear some edge cases still exist, because Linux wouldn’t be Linux without at least one driver surprise waiting to humble you).

The app ecosystem has transformed. Flatpak and Snap have solved the package fragmentation problem that plagued Linux for years. You want Spotify? One click. Slack? There. VS Code? No problem. The browser situation alone has improved dramatically — Chrome, Firefox, and Brave all run natively and beautifully. LibreOffice keeps getting better. GIMP is still GIMP (some things never change), but there are now alternatives like Krita that are world-class.
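As a quick terminal-side sketch of how frictionless this has become (the Flathub remote URL and app IDs below are my assumptions — verify them on flathub.org before running), grabbing those apps with Flatpak looks roughly like this:

```shell
# Add the Flathub remote once (skipped if it already exists).
flatpak remote-add --if-not-exists flathub \
    https://dl.flathub.org/repo/flathub.flatpakrepo

# One command per app -- no PPAs, no dependency hunting.
flatpak install -y flathub com.spotify.Client      # Spotify
flatpak install -y flathub com.slack.Slack         # Slack
flatpak install -y flathub com.visualstudio.code   # VS Code
```

Snap users get much the same convenience with `snap install`, and most desktop app stores wrap these commands in a one-click GUI anyway.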

The developer experience on Linux is arguably better than on any other platform. Native Docker support, first-class terminal environments, and a development environment that matches your production servers make it an easy choice. The Stack Overflow 2025 survey shows nearly 28% of developers using Ubuntu for personal use. On the server side, Linux is so dominant that it’s not even a competition anymore — it powers 100% of the world’s top 500 supercomputers, approximately 77% of web servers, and about 49% of global cloud workloads.


What to Look Forward To

The next few months are going to be exciting for the Linux desktop.

1. Ubuntu 26.04 LTS “Resolute Raccoon” arrives on April 23, 2026 

This is a big one — it ships with GNOME 50, which is fully Wayland-native with no X11 backend at all. New default apps include Showtime (replacing the aging Totem video player) and Resources (a modern system monitor). TPM-backed full-disk encryption gets expanded, with the ability to add or remove PINs after installation. The Security Center gets a redesigned interface. This LTS will be supported until 2031, extendable to 12 years with Ubuntu Pro, and is expected to be the release that millions of Windows 10 refugees will land on. When Ubuntu 26.04.1 drops in August 2026, Canonical will enable direct upgrades from the previous LTS, which means a wave of 24.04 users will be making the jump.

2. GNOME 50 is removing X11 support entirely from Mutter and GNOME Shell

It’s also bringing session save/restore functionality (finally!), improved Nautilus performance, parental controls with screen time limits, and continued HDR work. The fractional scaling improvements alone should make high-resolution displays look significantly better.

3. Linux gaming continues its upward trajectory

Valve’s Steam Machine and Steam Frame are expected to arrive sometime in 2026, expanding the SteamOS ecosystem beyond handhelds. The Steam Deck 2 is rumored to be in development with a possible Zen 6 “Magnus” APU, though Valve is reportedly waiting for a meaningful generational leap rather than a minor spec bump. Meanwhile, Proton keeps getting better with each release, and more anti-cheat vendors are enabling Linux compatibility.

4. If current trends hold, Linux could reach 6% global desktop market share by late 2026

With Windows 10’s extended security updates expiring in October 2026, another wave of users will face the same upgrade-or-switch decision. More OEMs are shipping Linux-preloaded systems. Framework laptops work beautifully with Linux. System76 and Tuxedo continue building Linux-first hardware. The ecosystem for buying a computer that runs Linux out of the box has never been better.


Is It Finally the Year of the Linux Desktop?

You know what, I’m not going to say it. I’ve been writing about Linux long enough to know that declaring “the year of the Linux desktop” is the tech equivalent of saying “what could possibly go wrong” in a horror movie. Every time someone says it, the penguin gets delayed by another decade.

But here’s what I will say: it doesn’t matter. The “Year of the Linux Desktop” meme was always the wrong framing. Linux doesn’t need to beat Windows or macOS to be successful. It just needs to be a viable, well-supported option for people who want it. And in 2026, it absolutely is.

The desktop market share is at historic highs. Gaming works. Hardware compatibility is excellent. The major desktop environments are polished and mature. Governments and enterprises are adopting it. The app gap has closed. And the open-source community continues to build, improve, and iterate at a pace that no single corporation can match.

For those of you who have been using Linux all along while I was off building iOS apps — you held the line. The desktop you believed in when it was clunky, when hardware didn’t work, when people laughed at the very idea — that desktop is now genuinely, unironically excellent. You were right all along.

As for me? I’m not going to pretend I’m switching cold turkey from macOS tomorrow. I still need Xcode for my apps, and my workflow is deeply embedded in the Apple ecosystem. But I just ordered a Raspberry Pi 5 to set up a fresh Linux workstation (old habits die hard), and I’m eyeing the Ubuntu 26.04 LTS release. There might even be a proper distro review on TechSource again. Wouldn’t that be something?

The penguin and I have some catching up to do.


— Jun


TechSource in the Age of AI

Hello (again, again) world! 

If you’re reading this, congratulations — you are either one of the most patient humans on the internet, or you accidentally stumbled here while googling “tech blogs that ghost their readers.” Either way, welcome. You are appreciated. 

To my loyal subscribers, followers, and random visitors who have this site bookmarked after all these years — I am deeply sorry for disappearing. Again. I know, I know. This is starting to feel like that friend who keeps saying “we should hang out soon” and then vanishes for four years. Except in my case, it’s been roughly that long since my last post.

Here’s something wild to think about (or to be grateful for): www.junauza.com will turn 20 years old next year. Two decades. This site has been online since 2007. To put that in perspective, when I wrote my first post, the iPhone had just been announced, “cloud” was something in the sky, and people were still debating whether blogs were here to stay. I started this site when Twitter was a baby, Android didn’t exist yet, and Bitcoin was an idea brewing in Satoshi Nakamoto’s mysterious brain.

Twenty years. That’s older than most TikTok creators. Let that sink in. I am getting older. 


What’s New Around Here?

If you’re a returning visitor, the first thing you’ll notice is the fresh new look. We did a full redesign — cleaner, simpler, and way more readable on mobile. No more cluttered sidebars, no more widgets from 2012 that load slower than a Windows Vista laptop. Just clean content and a pleasant reading experience.

Oh, and the ads? Gone. Wiped out. Eliminated. We are now running an ad-free site. No popups ambushing you when you’re trying to read a paragraph. No auto-play video ads making your phone speaker blast some random product at full volume while you’re in a quiet coffee shop. None of that. This is now a pure, distraction-free zone.

You may have noticed the new title and description: Tech Source — persistent tech curiosity since 2007. I think that captures what this site has always been about. I’ve always been curious about technology, and that curiosity hasn’t faded one bit. If anything, it’s gotten worse. In a good way.


Where Have I Been?

Great question. Let me give you the honest answer without writing an entire autobiography.


Offline Businesses

After I stopped posting, I spent a significant amount of time and energy on offline ventures. Running physical businesses is a whole different beast compared to managing a blog. There’s no “Ctrl+Z” in real life when things go wrong, and things go wrong a lot. But it’s been a rewarding learning experience — one that taught me patience, resilience, and the importance of knowing when to step away from the screen.

Health and Wellness

I made a conscious decision to invest more time in my physical and mental health. I got serious about fitness, cleaned up my diet, and started paying more attention to what my body was actually telling me instead of ignoring every signal like a human version of “dismiss all notifications.” Getting older has a way of reminding you that your body isn’t a machine — well, it is, but it’s the kind that needs regular maintenance, quality fuel, and the occasional software update.

Family Time 

I spent more quality time with my family, which is something I wouldn’t trade for any amount of site traffic or page views. Kids grow up fast. Like, terrifyingly fast. One moment you’re teaching them how to hold a spoon, and the next they’re explaining to you what a meme is.

Traveling

I also did a bit of traveling when I could. There’s something about visiting new places that recharges your creative battery in ways that no amount of coffee or YouTube tutorials can replicate. Seeing how technology is being adopted differently across various places gave me fresh perspectives that I’m excited to share with you.

iOS App Development

For those who’ve been following my journey, I’ve been deep in the trenches of iOS development. Building apps with SwiftUI, experimenting with different concepts for niche markets, and losing sleep over Auto Layout constraints and App Store review guidelines. More on this in future posts — I’ve got stories, tips, and a few cautionary tales to share.


Why Come Back Now?

Because we are living in the most exciting era of technology in human history, and I physically cannot keep all of this to myself anymore.

Think about it. When I last posted regularly, ChatGPT didn’t exist. Generative AI was an academic curiosity. Self-driving cars were a “someday” proposition. Bitcoin was fighting for legitimacy. Now? AI can write code, generate art, compose music, and have eerily intelligent conversations (hello from the other side). Electric vehicles are everywhere. Crypto has survived multiple “deaths” and keeps coming back like a villain in a Marvel movie. Humanoid robots are walking around like it’s the most normal thing in the world. We are living in the future and I want to write about it.


The Road Ahead

Moving forward, my goal is to post at least once a week. No more year-long sabbaticals. No more disappearing acts. I’ve set the bar at weekly because I want to prioritize quality over quantity. Each post should either teach you something, make you think, or at least, not put you to sleep.


Here’s what you can expect from TechSource moving forward:

Artificial Intelligence — This is the big one. AI is reshaping everything from how we work to how we create to how we search the internet. I’ll be covering the latest developments, practical applications, tools worth trying, and the occasional existential crisis about whether our robot overlords are friendly or not.

Electric Vehicles — I’m fascinated by the EV revolution. From Tesla’s latest moves to what’s happening with BYD, Rivian, and the dozens of new players entering the market, there’s no shortage of things to talk about. Range anxiety is soo 2020.

Cryptocurrency and Blockchain — You may remember my posts about Bitcoin from way back. I ran a full Lightning node on a Raspberry Pi, wrote about the Bitcoin revolution, and geeked out about blockchain technology before it was cool. That enthusiasm hasn’t gone anywhere. Expect honest takes on crypto markets, DeFi developments, and blockchain projects that matter (and a few that don’t but are entertaining).

Biohacking and Health Tech — This is a personal passion of mine. The intersection of technology and human biology is producing some incredible breakthroughs. From wearables that track your sleep and HRV to supplements backed by science to longevity research that might help us all live longer and better — I want to explore all of it.

Gadgets and Hardware — Because most of us geeks get unreasonably excited about unboxing a new piece of tech. Smartphones, laptops, Raspberry Pi projects, smart home devices — if it has a chip in it and does something cool, it’s exciting.

Software and Tools — From productivity apps to development tools to open-source gems that deserve more attention. My Linux roots run deep, and my love for good software hasn’t changed.

Tech Startups — The startup world is wild right now, with AI lowering the barrier to entry for building products. I’ll be keeping an eye on interesting companies, innovative products, and founders who are building the future.

Sustainable Energy — Solar, wind, battery storage, nuclear fusion progress, and everything in between. The energy transition is one of the most important stories of our time, and it doesn’t get nearly enough attention in mainstream tech coverage.

Stock Market and Investing — I’m not a financial advisor and I won’t pretend to be one. But I do follow the markets, especially tech stocks, and I think there’s value in sharing observations, analysis, and the occasional “I can’t believe that just happened” moment. As always, do your own research.

My App Development Journey — I’ve been building iOS apps for a while now, and I want to share more about that journey. The wins, the frustrations, the bug that took three days to fix and turned out to be a missing comma. Real talk from the trenches of indie app development.

A Bit of Spirituality — Technology is amazing, but it can’t answer every question. I’ve found that maintaining some form of spiritual practice — whether it’s meditation, reflection, or just stepping away from the noise — is essential for staying grounded in a world that moves at the speed of a fiber optic cable. I’ll sprinkle in some thoughts on this from time to time.

Random Tech Musings — Sometimes I just have thoughts. About technology, about the internet, about why we still can’t get printers to work reliably in 2026. These will be the fun, unstructured posts where I riff on whatever’s on my mind.


A Few Final Thoughts

This site has been through multiple redesigns, topic shifts, contributor changes, and extended hiatuses. But the core has always remained the same — a genuine curiosity about technology and a desire to share that curiosity with others.

I started TechSource as a young tech enthusiast from a small province in the Philippines who wanted to write about Linux and open-source software. Nearly two decades later, I’m that same guy — with a broader set of interests, more life experiences, and much less hair (and ego).

The tech landscape has changed dramatically since 2007. But one thing that hasn’t changed is the excitement I feel when I discover something new, understand how something works, or find a piece of technology that genuinely makes life better. That excitement is what built this site, and it’s what will keep it going.

If you’re still around after all that — thank you. Whether you’ve been following since the Linux distro review days or you found this site five minutes ago, I appreciate you. Let’s make the next chapter of TechSource the best one yet.

Now if you’ll excuse me, I have about a hundred drafts to finish and a weekly posting schedule to keep.

See you next week (or year).


— Jun


How to Easily Install a Full Bitcoin Lightning Node on a Raspberry Pi

I recently installed a full bitcoin node on our home network, and luckily, I got everything up and running quickly without bumping into any issues. Before I show you the steps to install a full bitcoin node, allow me to explain some of the reasons why I ended up doing this.

As some of you may already know, bitcoin is a network composed of thousands of nodes. Each node verifies and maintains a record of every bitcoin transaction. So if you are running one, you will essentially be hosting and sharing a copy of the bitcoin blockchain, and you will help keep the network decentralized.



What are the benefits of running a bitcoin node?

Unlike mining, running a node will not reward you with bitcoin, because you are simply supporting the network rather than solving complex computational math problems. However, one of the main advantages of running your own node is that you can transact on the Bitcoin network without a third-party provider, which saves you money on fees. For added peace of mind, you can connect your wallet and forward all your transactions through your own node, making sure that every transaction is safe and secure.


For me, another reason for running a node is educational: taking a deep dive into blockchain technology. I am very passionate about this emerging tech because it is already shaping up to change the world for the better.


Without further ado, here are the steps I followed to easily install and run a bitcoin node:


Step 1: Prepare the hardware


You don’t need an expensive mining rig to run a bitcoin node. I bought the following items, but you can always use your existing hardware provided that you have all the recommended system specs:


1. Raspberry Pi 4 (specifically, I bought the Model B 4GB RAM starter kit that includes the power adapter, 16GB microSD card, and case)



2. 1 TB SSD (SanDisk SSD Plus 2.5” 1 TB SATA III Internal Solid State Drive)



3. SSD Enclosure (SENDA Transparent USB 3.0 SATA III 2.5 HDD/SDD Enclosure)



Note: I bought all the items on Lazada, and the total cost was around 10,000 Philippine pesos (about 200 USD).


Step 2: Download the software


Download Umbrel OS HERE and extract the file. Download Balena Etcher HERE and install it on your computer.


Note: For downloading the software, obviously you will need a laptop or desktop computer. A microSD card reader is needed for flashing the software to the microSD card. 


Step 3: Flashing Umbrel OS


Insert the microSD card into your card reader, open Balena Etcher, and flash the downloaded Umbrel OS image to your microSD card. After flashing, remove the card and insert it into the Raspberry Pi.


Step 4: Plug it up


Put the SSD into the enclosure and plug it into one of the blue USB 3.0 ports on your Raspberry Pi. Connect the Raspberry Pi to your internet-connected router via an ethernet cable. Connect the power supply and power up your Raspberry Pi.


Step 5: Starting up


Around five minutes after powering up, Umbrel OS can be accessed at http://umbrel.local in the web browser of any device (smartphone, tablet, desktop, or laptop) connected to the same network as the Raspberry Pi.
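While you wait, instead of refreshing the browser over and over, you can poll from a terminal with a small shell function like this one — a quick sketch I put together, where the URL, retry count, and delay are just example values to adjust for your setup:

```shell
# wait_for_url: poll a URL until it responds with success, or give up.
# Prints "up" on success, "timed out" otherwise.
wait_for_url() {
  url="$1"      # e.g. http://umbrel.local
  tries="$2"    # how many attempts before giving up
  delay="$3"    # seconds to wait between attempts
  i=1
  while [ "$i" -le "$tries" ]; do
    # -f: fail on HTTP errors, -s: silent, --max-time: per-attempt timeout
    if curl -fs --max-time 5 "$url" >/dev/null 2>&1; then
      echo "up"
      return 0
    fi
    i=$((i + 1))
    sleep "$delay"
  done
  echo "timed out"
  return 1
}

# Example: check every 10 seconds for up to 5 minutes.
# wait_for_url http://umbrel.local 30 10
```

Once it prints “up”, open http://umbrel.local in your browser and carry on with the setup.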



Follow Umbrel’s initial setup and enjoy running your very own bitcoin node.






I am still exploring some of the features of Umbrel and might write a quick review about it soon, so stay tuned!


The Bitcoin Revolution is Here

Since 2014, I’ve been talking about bitcoin here (read: Is Bitcoin The Next Open-source Software Revolution? and Best Bitcoin Applications for Linux). Back then, bitcoin was still very much in its infancy, and our articles about it were some of the least popular posts we’ve ever had. Even so, I saw its potential and proclaimed that it could become a revolutionary open-source software project, one with the potential to be bigger than Linux.


Today, bitcoin and cryptocurrency in general have already gone mainstream in terms of popularity. Although widespread adoption could still be a few years away, different personalities like social media icons, hip hop moguls, top athletes, famous actors, financial gurus, and several billionaires are already talking about it incessantly. 


Speaking of widespread adoption, different countries have already started recognizing the value of cryptocurrency. In fact, one country has recently passed a law making bitcoin an official currency. I believe more countries will follow once we can all clearly see the positive economic impact of adopting bitcoin as legal tender.


Recently, we have witnessed institutional investors and publicly traded companies start filling their balance sheets with bitcoin. To name a few, there’s Tesla (around 1.5 billion dollars’ worth of bitcoin), MicroStrategy (250 million dollars), Galaxy Digital Holdings (176 million dollars), and Square (50 million dollars).


Although I am not a financial advisor and this site is not about making money, I encourage you to consider investing in bitcoin. Forgive me for not telling you this in 2014, when 1 bitcoin was worth around 500 dollars. At that time, buying and selling cryptocurrency was difficult because there were very few trusted exchanges and wallets, so the possibility of losing your investment was enormous. If you decide to invest in cryptocurrency today, I suggest that you do your own research first because, like all other investments, there are still risks involved, albeit smaller than before.


After promoting Linux and other free and open-source software in the past, I have decided from now on to focus most of my time here on writing about bitcoin, cryptocurrencies, and other interesting blockchain projects. I think it is about time to show people that bitcoin is not purely a speculative asset, but something far more valuable because of its capability to empower people around the world. Like most of you, I find joy in freedom, and for me bitcoin is freedom. Now I can safely say that the cryptocurrency revolution is underway, and we are just getting started.



25 (More) Funny Computer Quotes

I have been reading some of my old posts here and noticed one that is still quite popular, simply because a lot of us love humor. If you are a new site visitor, kindly check out the "My Top 50 Funny Computer Quotes" post to see what I mean. Inspired by that one, and since it’s been a long time since I wrote or posted anything funny here, I decided to collect a few more amusing quotes.


So without further delay, here is a brand new collection of funny computer quotes:
 

25. What if one day Google got deleted and we could not Google what happened to Google?

24. Never trust a computer you can’t throw out a window.

23. The attention span of a computer is only as long as its power cord.

22. Microsoft has a new version out, Windows XP, which according to everybody is the ‘most reliable Windows ever.’ To me, this is like saying that asparagus is ‘the most articulate vegetable ever.’

21. Never trust anything that can think for itself if you can't see where it keeps its brain.

20. "Computers are useless. They can only give you answers." - Pablo Picasso

19. If you think patience is a virtue, try surfing the net without high-speed Internet.

18. The real danger is not that computers will begin to think like men, but that men will begin to think like computers.

17. “The Internet? We are not interested in it.” - Bill Gates, 1993

16. The only truly secure system is one that is powered off, cast in a block of concrete and sealed in a lead-lined room with armed guards.

15. "Being able to break security doesn’t make you a hacker any more than being able to hotwire cars makes you an automotive engineer." - Eric S. Raymond

14. I'm sorry that I'm not updating my Facebook status; my cat ate my mouse.

13. "I am not out to destroy Microsoft, that would be a completely unintended side effect." - Linus Torvalds

12. Dear humans, in case you forgot, I used to be your Internet. Sincerely, The Library.

11. My wife never gives up. She is so insistent that she entered the wrong password over and over again until she managed to convince the computer that she's right!

10. Computer dating is fine if you're a computer.

9. I love my computer because all my friends live inside it!

8. The only relationship I have is with my Wi-Fi. We have a connection.

7. The problem with troubleshooting is that trouble shoots back.

6. Why can't cats work on the computer? They get too distracted chasing the mouse around.

5. My wife loves me so much, she tries her best to attract me to her. The other day she put on a perfume that smells like a computer.

4. I changed my password everywhere to 'incorrect.' That way when I forget it, it always reminds me, 'Your password is incorrect.'

3. A computer lets you make more mistakes faster than any invention in human history--with the possible exceptions of handguns and tequila.

2. Life is too short to remove USB safely.

1. Passwords are like underwear: you don’t let people see it, you should change it very often, and you shouldn’t share it with strangers.


I hope you enjoyed our latest list of amusing computer quotes!


How to Install Raspbian OS on Raspberry Pi 3 Model B+

After my Raspberry Pi 3 Model B+ First Impressions, allow me to share with you how I installed Raspbian OS on this tiny computer as promised. But first, a quick introduction to Raspbian. This lightweight Unix-like operating system is based on Debian Linux and is highly optimized to run on the Raspberry Pi’s ARM CPU. Its desktop environment is called PIXEL (Pi Improved X-Window Environment, Lightweight), which is made up of a modified LXDE desktop environment and the Openbox stacking window manager. It comes pre-loaded with useful applications, including a web browser, an office suite, programming tools, and several games.


Now, let’s get down to business and go over the requirements for installing Raspbian OS. If your Raspberry Pi is not bundled with a microSD card, you should get one with at least 8GB of space. The basic PC accessories required for setup are a USB keyboard, a USB mouse, and a computer or TV monitor (preferably with an HDMI port). The Raspberry Pi 3 Model B+ has an HDMI port for video output, so if your monitor only has a DVI or VGA port, you will need an HDMI-to-DVI or HDMI-to-VGA adapter. You will also need an extra desktop or laptop computer for downloading the OS and then flashing it to the microSD card.


The next thing you should prepare is the installer, which you can download from HERE. The NOOBS version is recommended, but if you are adventurous enough you can go for the full Raspbian image instead. The download is compressed in ZIP format, so you will need to extract the OS image (.img) before you can use it. After extracting, you may proceed to flash the OS image to your microSD card. To do that, download the recommended flashing tool HERE, install it, and then follow its simple step-by-step process for writing the OS image to your microSD card. It is also worth noting that you will need an SD card adapter and a card reader if your laptop or PC doesn't have one built in.
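If you prefer the command line over a GUI flashing tool, a generic alternative on Linux or macOS is `dd`. The sketch below only prints the command instead of running it, because `/dev/sdX` is a placeholder: writing to the wrong device will destroy its data, so always confirm the card's path with `lsblk` (Linux) or `diskutil list` (macOS) first.

```shell
#!/bin/sh
# Sketch: build a dd command for flashing a Raspbian image to a microSD card.
# IMAGE and DEVICE are hypothetical placeholders -- replace them with your
# actual extracted image file and card device (e.g. /dev/sdb) before use.
IMAGE="raspbian.img"
DEVICE="/dev/sdX"

build_flash_cmd() {
  # bs=4M speeds up the copy; conv=fsync flushes writes before dd exits
  echo "sudo dd if=$1 of=$2 bs=4M conv=fsync status=progress"
}

# Print the command rather than executing it, so nothing is overwritten
# by accident while experimenting.
build_flash_cmd "$IMAGE" "$DEVICE"
# prints: sudo dd if=raspbian.img of=/dev/sdX bs=4M conv=fsync status=progress
```

Once you have verified the device path, you can run the printed command yourself; unmount any auto-mounted partitions on the card first.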

With that, Raspbian OS is now installed. All you have to do is eject the microSD card from your computer, plug it into your Raspberry Pi, connect all the needed peripherals, and power up your tiny but very capable Linux desktop machine.
