Subtopic: Software development

Company and industry news, featured projects, open source code, tech tips, and more.

Crash course on emulating the MOS 6510 CPU

David Pipkin, Monday, January 26, 2026

Creating an emulator is a very powerful experience. Not only do you become intimately familiar with the target hardware, you virtually recreate it with simple keystrokes. Emulators aim to recreate every component of a piece of hardware, down to the electrical timing of a single CPU cycle; they're not to be confused with simulators, which simply mimic a platform's behavior. It's not always a simple task, but it's very rewarding when you power it up and are presented with a piece of childhood memory. Sure, there are plenty of emulators out there, but creating your own makes using and tweaking it so much more fun.

The MOS 6510 is a modified version of the popular MOS 6502. The 6502 was used in systems like the Apple IIe, Atari 800 and 2600, Commodore VIC-20, Nintendo Famicom, and others. The biggest difference with the MOS 6510 is the addition of a general purpose 8-bit I/O port.

Why the MOS 6510?

I wanted to emulate a Commodore 64... Why? The Commodore 64 was a staple of my childhood. Before graphical OSes and the internet, there was just imagination and a command prompt. Pair that with some BASIC programming books that my dad left lying around, and I felt like I had the world at my fingertips. I wanted to become more familiar with the first computer I ever used. The C64 is simple and complex at the same time, and its internal workings intrigued me.

The MOS 6510 was the CPU that the C64 used. To emulate a full C64 machine, you would also need to emulate the MOS 6581 SID (sound), MOS VIC-II (display), MOS 6526 CIA (interface adapters), I/O and more, but this article focuses on the heart of it all – the CPU. The memory in a C64 is also outlined, because without memory, the CPU can’t do very much.

Let’s get started

First off, this article is, as mentioned in the title, a crash course. So, I won’t be going into a lot of detail. It’s more of a primer for those of you who are interested in MOS 6510 emulation, and something to send you off in the right direction.

The basic cycle your emulator will perform will be the following:

  • Read next instruction from memory at the PC (program counter)

  • Process instruction

  • Process Timers on CIA1 and CIA2 (not covered in this article)

  • Update screen via VIC-II (not covered in this article)

  • Calculate cycles (for emulators that aren’t cycle-exact)

The last point only applies if you are making an instruction-exact emulator vs. a cycle-exact emulator. Instruction-exact emulation is easier because you simply process an instruction and increment by the number of cycles that instruction is supposed to take, but it is less accurate and may result in some features of the system not working exactly right. Cycle-exact emulation only processes one CPU cycle per loop in your emulator, so one instruction could be performed over multiple loops. That method is very accurate but is more complex to implement as you will need to be more granular in how you process instructions.
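As a concrete (and heavily simplified) sketch of that loop, here is what an instruction-exact step might look like in Python. The `Cpu` class, `OPCODES` table, and `nop` handler are illustrative names of my own, not from any real emulator:

```python
# Instruction-exact main loop sketch. A real 6510 opcode table has 256
# entries; only NOP (0xEA, 2 cycles) is shown here for illustration.

memory = bytearray(65536)  # 64 KB address space


def nop(cpu):
    pass


# opcode -> (handler, base cycle count)
OPCODES = {0xEA: (nop, 2)}


class Cpu:
    def __init__(self):
        self.pc = 0x0000
        self.cycles = 0

    def step(self):
        # Fetch the opcode at the PC, then advance the PC (16-bit wrap).
        opcode = memory[self.pc]
        self.pc = (self.pc + 1) & 0xFFFF
        # Decode and execute, then add all cycles at once
        # (instruction-exact rather than cycle-exact).
        handler, base_cycles = OPCODES[opcode]
        handler(self)
        self.cycles += base_cycles
        return base_cycles


cpu = Cpu()
memory[0x0000] = 0xEA  # place a NOP at the reset location for this demo
spent = cpu.step()
```

A cycle-exact emulator would instead break each handler into per-cycle micro-steps and run exactly one of them per loop iteration.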

MOS 6510 CPU

The CPU is responsible for processing instructions from memory. It’s an 8-bit processor, which means the registers will store 8 bits each, other than the PC (program counter), which is 16 bits (high and low bytes) so that it can store a memory location.

Click to read the MOS 6510 data sheet

To emulate the processor, you will need to implement the following components...

Registers

Registers are small areas of memory located directly in the processor that have extremely fast access. Each register has a purpose and can be used in various ways depending on the context of an instruction.

  • PC (program counter)
    Stores the active address in memory.

  • S (stack pointer)
    Pointer to current location in stack, which starts at 0x01FF in memory and grows downward to 0x0100

  • P (processor status)
    See status flags below

  • A (accumulator)
    Stores arithmetic and logic results

  • X (index register)
    Used for modifying effective addresses

  • Y (index register)
    Used for modifying effective addresses
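One way to model this register file, sketched in Python (the field names, defaults, and the `push` helper are my own; the article doesn't prescribe an implementation):

```python
from dataclasses import dataclass


@dataclass
class Registers:
    pc: int = 0x0000  # 16-bit program counter
    s: int = 0xFF     # stack pointer; the actual address is 0x0100 + s
    p: int = 0x20     # processor status (the unused bit 5 reads as 1)
    a: int = 0x00     # accumulator
    x: int = 0x00     # index register X
    y: int = 0x00     # index register Y


def push(memory, regs, value):
    # The stack lives in page 1 (0x0100-0x01FF) and grows downward:
    # write at 0x0100 + S, then decrement S (with 8-bit wrap).
    memory[0x0100 + regs.s] = value & 0xFF
    regs.s = (regs.s - 1) & 0xFF


memory = bytearray(65536)
regs = Registers()
push(memory, regs, 0xAB)  # lands at 0x01FF; S becomes 0xFE
```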

Status flags for P register

The status flags are the individual bits of the byte that makes up the P register. They can alter the way certain things behave when set or cleared, and they record status outcomes for operations. From bit 7 down to bit 0:

  • N (128 – negative flag)

  • V (64 – overflow flag)

  • – (32 – unused flag, always reads as 1)

  • B (16 – break flag)

  • D (8 – decimal mode flag)

  • I (4 – interrupt disable flag)

  • Z (2 – zero flag)

  • C (1 – carry flag)
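In code, the flags are usually kept as bit masks over the P byte. A minimal Python sketch (the helper names are mine):

```python
# Status flag bit masks for the P register, bit 7 down to bit 0.
FLAG_N = 0x80  # negative
FLAG_V = 0x40  # overflow
FLAG_U = 0x20  # unused; reads as 1
FLAG_B = 0x10  # break
FLAG_D = 0x08  # decimal mode
FLAG_I = 0x04  # interrupt disable
FLAG_Z = 0x02  # zero
FLAG_C = 0x01  # carry


def set_flag(p, mask, on):
    # Return a new P value with the given flag bit set or cleared.
    return (p | mask) if on else (p & ~mask & 0xFF)


def update_nz(p, result):
    # Set N and Z from an 8-bit result, as most load/arithmetic
    # instructions do: Z when the result is zero, N from bit 7.
    p = set_flag(p, FLAG_Z, (result & 0xFF) == 0)
    p = set_flag(p, FLAG_N, (result & 0x80) != 0)
    return p


p = update_nz(0x20, 0x00)   # zero result: Z set, N clear
p2 = update_nz(0x20, 0x80)  # bit 7 set: N set, Z clear
```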

Addressing modes

Addressing modes determine where an instruction finds the value it works with. One instruction can have many variations that use different addressing modes; each variation has its own opcode.

  • Implied
    No operand needs to be fetched; it is implicit in the instruction. This is used for one byte instructions, including those that operate on the accumulator (sometimes listed separately as Accumulator mode)

  • Immediate
    Operand is at byte after instruction, no addressing needed

  • Relative
    Address at PC +/- value of byte after instruction (interpreted as signed byte). This is used for branching. It basically allows the PC to branch from -128 to 127 bytes from its current position

  • Zero Page
    Address at byte after instruction

  • Zero Page X
    Address at byte after instruction + X register

  • Zero Page Y
    Address at byte after instruction + Y register

  • Absolute
    Address at word after instruction

  • Absolute X
    Address at word after instruction + X register

  • Absolute Y
    Address at word after instruction + Y register

  • Indirect
    Address is the word stored at the location given by the word after the instruction (used only by JMP)

  • Indirect X (indexed indirect)
    The X register is added to the zero page byte after the instruction; the word stored at that zero page location is the effective address

  • Indirect Y (indirect indexed)
    The word stored at the zero page location given by the byte after the instruction, plus the Y register, is the effective address

Click for more information on MOS 6510 addressing modes.
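To make the modes concrete, here are effective-address calculations for a few of them, sketched in Python (`memory`, `read_word`, and the mode functions are illustrative names; note how the indexed zero page modes wrap within the zero page):

```python
memory = bytearray(65536)


def read_word(addr):
    # Little endian: low byte first. (Real hardware also wraps the
    # pointer read within the zero page; that detail is omitted here.)
    return memory[addr] | (memory[(addr + 1) & 0xFFFF] << 8)


def zero_page_x(pc, x):
    # Operand byte + X, wrapping within the zero page.
    return (memory[pc] + x) & 0xFF


def absolute_y(pc, y):
    # Word after the instruction + Y.
    return (read_word(pc) + y) & 0xFFFF


def indirect_x(pc, x):
    # Indexed indirect: X is added to the zero page operand first,
    # then the word at that zero page location is the effective address.
    zp = (memory[pc] + x) & 0xFF
    return read_word(zp)


def indirect_y(pc, y):
    # Indirect indexed: the word at the zero page operand is fetched,
    # then Y is added to it.
    return (read_word(memory[pc]) + y) & 0xFFFF


memory[0x1000] = 0x20            # operand byte at the PC
memory[0x0024] = 0x34            # pointer low  (0x20 + X=4)
memory[0x0025] = 0x12            # pointer high
addr = indirect_x(0x1000, 0x04)  # effective address 0x1234
```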

Instruction set

There are too many instructions to list in this crash course, but here is a link to an opcode matrix. It shows the value of each opcode, the associated instruction and addressing mode, as well as the logical function of each.

Timing

One of the most important aspects of emulating the CPU is timing. For my C64 emulator, I used the PAL specification of roughly 0.985 MHz, or about 985,000 cycles/second. If you are implementing the NTSC specification, you would use 1.023 MHz instead. As I said before, if you're not implementing cycle-exact emulation, you need to determine how many cycles each instruction takes and increment the count of cycles that have passed. This is important for determining when certain IRQs should fire, as well as for tracking the progress of the raster line when implementing the VIC-II. The raster line position has to match the CPU cycles (screen refresh is 50 Hz on PAL, 60 Hz on NTSC) so that programs which rely on raster line position to create certain graphical effects will work.

Also, keep in mind that certain things take extra cycles. For instance, if an instruction uses Absolute Y addressing and crosses the page boundary in memory, that takes an extra CPU cycle.
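Detecting that page crossing is a simple high-byte comparison; a Python sketch (the names are mine):

```python
# A page is 256 bytes, so the high byte of an address identifies its
# page. Indexed addressing costs an extra cycle when the indexed
# address lands in a different page than the base address.

def page_crossed(base, indexed):
    return (base & 0xFF00) != (indexed & 0xFF00)


base = 0x10F0
eff = (base + 0x20) & 0xFFFF           # 0x1110: crosses into the next page
penalty = 1 if page_crossed(base, eff) else 0
```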

Endianness

The MOS 6510 is a little endian chip. This means that when you are reading a word from memory (16 bits on the MOS 6510), you will need to read in the second address position first, followed by the first address position. You can then use the result to create a 16 bit variable in your programming language of choice. A simple example of this is as follows:

(peek(address + 1) << 8) | peek(address)

Where peek() grabs a byte from a memory location. The byte from the second address location is bit shifted 8 positions left and is then bitwise OR’ed with the byte from the first address location.
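A runnable version of that expression, with a matching little-endian write, might look like this in Python (`peek`, `peek_word`, and `poke_word` are illustrative names):

```python
memory = bytearray(65536)


def peek(addr):
    # Read one byte from memory (16-bit address wrap).
    return memory[addr & 0xFFFF]


def peek_word(addr):
    # Little endian: the second byte is the high byte.
    return (peek(addr + 1) << 8) | peek(addr)


def poke_word(addr, value):
    # Write the low byte first, then the high byte.
    memory[addr & 0xFFFF] = value & 0xFF
    memory[(addr + 1) & 0xFFFF] = (value >> 8) & 0xFF


poke_word(0x1234, 0xBEEF)
word = peek_word(0x1234)  # memory holds 0xEF then 0xBE
```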

Memory

The C64 has 65,536 bytes of memory, or 64 KB. Bank switching is used to swap the ROM and I/O in and out by changing the latch bits in the first two bytes of memory. A page of memory on the C64 is 256 bytes. The first page is called the Zeropage and is easily addressable with zeropage addressing, which is fast because it only needs a single address byte.

Here is a basic mapping of the C64’s memory:

  • 0x0000-0x00FF: Zeropage – first two bytes contain directional and latch bits that can be set to swap ROM’s and I/O in and out of place.

  • 0x0100-0x01FF: Stack

  • 0x0200-0x03FF: OS

  • 0x0400-0x07FF: Screen

  • 0x0800-0x9FFF: Free RAM for BASIC programs

  • 0xA000-0xBFFF: BASIC ROM or free RAM for machine language programs when ROM switched out

  • 0xC000-0xCFFF: Free RAM for machine language programs

  • 0xD000-0xDFFF: CHAR ROM or I/O or Sprite data, interrupt register, etc. when CHAR ROM and I/O switched out.

  • 0xE000-0xFFFF: KERNAL ROM or free RAM for machine language programs when ROM switched out

When I/O is switched on, the region 0xD000-0xDFFF maps to the following:

  • 0xD000-0xD3FF: VIC-II registers

  • 0xD400-0xD7FF: SID registers

  • 0xD800-0xDBFF: Color memory

  • 0xDC00-0xDCFF: CIA1

  • 0xDD00-0xDDFF: CIA2

  • 0xDE00-0xDEFF: I/O 1

  • 0xDF00-0xDFFF: I/O 2

Click for a more detailed Commodore 64 memory map.

Initialization

There are two important factors when initializing your emulator – the PC and directional/latch bits in the first two bytes of memory.

The first byte of memory, which contains the directional bits, should be initialized to 0xFF. The second byte, which contains the latch bits, should be initialized to 0x07 (00000111). This will enable the KERNAL ROM, BASIC ROM, and I/O. The CHAR ROM and the memory underlying these locations will not be accessible unless the banks are switched.

The PC should be initialized to the word read from memory location 0xFFFC (the reset vector). This will read from the KERNAL ROM due to the latch bits initialization.
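Putting the initialization together, a simplified Python sketch (the `kernal_rom` stand-in and the `read_byte` dispatch are assumptions for illustration; in a real KERNAL image, the reset vector points to 0xFCE2):

```python
# Reset sketch: set the CPU port bytes, then load the PC from the
# reset vector at 0xFFFC/0xFFFD. `kernal_rom` is a stand-in for the
# 8 KB KERNAL image mapped at 0xE000-0xFFFF.

ram = bytearray(65536)
kernal_rom = bytearray(8192)
kernal_rom[0x1FFC] = 0xE2  # reset vector low  (offset 0xFFFC - 0xE000)
kernal_rom[0x1FFD] = 0xFC  # reset vector high -> PC = 0xFCE2


def read_byte(addr, latch):
    # HIRAM (bit 1 of the latch) maps the KERNAL ROM over 0xE000-0xFFFF.
    # A full emulator would also dispatch BASIC ROM, CHAR ROM, and I/O.
    if addr >= 0xE000 and (latch & 0x02):
        return kernal_rom[addr - 0xE000]
    return ram[addr]


def reset():
    ram[0x0000] = 0xFF  # directional bits
    ram[0x0001] = 0x07  # latch bits: BASIC, KERNAL, and I/O enabled
    latch = ram[0x0001]
    # Little-endian word read of the reset vector.
    return read_byte(0xFFFC, latch) | (read_byte(0xFFFD, latch) << 8)


pc = reset()  # 0xFCE2 here, via the stand-in vector above
```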

Summary

That concludes the crash course. Hopefully you're at least a little more informed about the MOS 6510 than before. The only external pieces you will need to obtain in order to create your emulator are the three ROMs mentioned above – BASIC, CHAR, and KERNAL. These can usually be obtained online or from another emulator. It's a lot of work to emulate anything, but it's a fun project and worth it in the end.

Want to know more?

There's usually more to the story so if you have questions or comments about this post let us know!

Do you need a new software development partner for an upcoming project? We would love to work with you! From websites and mobile apps to cloud services and custom software, we can help!

.NET 10 released

Michael Argentini, Wednesday, January 14, 2026

Microsoft announced the general availability of .NET 10, describing it as the most productive, modern, secure, and high-performance version of the platform to date. The release is the result of a year-long effort involving thousands of contributors and includes improvements across the runtime, libraries, languages, tools, frameworks, and workloads. The benefits can be seen even if you only use it as a drop-in replacement for .NET 9.

Some of the key improvements to the framework include:

  • JIT compiler enhancements: Better inlining, method devirtualization, and improved code generation for struct arguments

  • Hardware acceleration: AVX10.2 support for the latest Intel silicon and Arm64 SVE for advanced vectorization, plus Arm64 write-barrier improvements that reduce GC pause times by 8-20%

  • NativeAOT improvements: Smaller, faster ahead-of-time compiled apps

  • Runtime optimizations: Enhanced loop inversion and stack allocation strategies deliver measurable performance gains

  • Post-quantum cryptography: Expanded PQC support helps future-proof your applications against quantum threats while maintaining compatibility with existing systems

  • Enhanced networking: Networking improvements make apps faster and more capable

  • AI frameworks: Building AI-powered apps in .NET 10 is straightforward, from simple integrations to complex multi-agent systems

In addition to library and package features, C# 14 and F# 10 deliver powerful language improvements that make your code more concise and expressive. C# continues to be one of the world’s most popular programming languages, ranking in the top 5 in the 2025 GitHub Octoverse report.

Related platforms are also being updated to run on .NET 10, like Umbraco CMS version 17. So if you're running a prior version now is the time to upgrade!


Sfumato

Michael Argentini, Wednesday, January 7, 2026

Sfumato is a pair of tools that generate CSS for your web-based projects. You can create HTML markup and use pre-defined utility class names to style the rendered markup, without actually writing any CSS code or leaving your HTML editor! The resulting HTML markup is clean, readable, and consistent. And the generated CSS file is tiny, even if it hasn't been minified!

The first tool is the command line interface (CLI), which you can install once and use on any project. It will watch/build CSS as you work.

The second tool is a NuGet package that you can add to a compatible .NET project. After adding a snippet of code to your startup, it will build/watch as you run or debug your app, generating CSS on the fly.

Build complex layouts with simple CSS classes and without writing CSS

Features

Sfumato is compatible with the Tailwind CSS v4 class naming structure so switching between either tool is possible. In addition, Sfumato has the following features:

Cross-platform — Both the Sfumato CLI tool and NuGet package work on Windows, Mac, and Linux on x64 and Arm64/Apple Silicon CPUs. They're native, multi-threaded, and lightning fast! So you can build anywhere.

Low overhead — Unlike Tailwind, Sfumato doesn't rely on NodeJS or any other packages or frameworks. Your project path and source repository will not have extra files and folders to manage, and any configuration is stored in your source CSS file.

Dev platform agnostic — The Sfumato CLI works great in just about any web project:

  • JavaScript frameworks, like React, Angular, etc.

  • Hybrid mobile projects, like Blazor Hybrid, Flutter, etc.

  • Content Management Systems (CMS) like WordPress, Umbraco, etc.

  • Custom web applications built with ASP.NET, PHP, Python, etc.

  • Basic HTML websites

Works great with ASP.NET — Sfumato supports ASP.NET, Blazor, and other Microsoft stack projects by handling "@@" escapes in razor/cshtml markup files. So you can use arbitrary variants and container utilities like "@container" by escaping them in razor syntax (e.g. "@@container").

In addition to using the CLI tool to build and watch your project files, you can instead add the Sfumato Core Nuget package to your ASP.NET-based project to have Sfumato generate CSS as you debug, or when you build or publish.

Imported CSS files work as-is — Sfumato features can be used in imported CSS files without any modifications. It just works. Tailwind's Node.js pipeline requires additional changes to be made in imported CSS files that use Tailwind features and setup is finicky.

Better dark theme support — Unlike Tailwind, Sfumato allows you to provide "system", "light", and "dark" options in your web app without writing any JavaScript code (other than widget UI code).

Adaptive design baked in — In addition to the standard media breakpoint variants (e.g. sm, md, lg, etc.) Sfumato has adaptive breakpoints that use viewport aspect ratio for better device identification (e.g. mobi, tabp, tabl, desk, etc.).

Integrated form element styles — Sfumato includes form field styles that are class name compatible with the Tailwind forms plugin.

More colorful — The Sfumato color library provides 20 shade steps per color (values of 50-1000 in increments of 50).

More compact CSS — Sfumato combines media queries (like dark theme styles), reducing the size of the generated CSS even without minification.

Workflow-friendly — The Sfumato CLI supports redirected input for use in automation workflows.


The .NET DUID

Michael Argentini, Tuesday, December 9, 2025

DUID is a fully-featured replacement for the GUID (Globally Unique Identifier). DUIDs are more compact, web-friendly, and provide more entropy than GUIDs. We created this UUID type as a replacement for GUIDs in our projects, improving on several GUID shortcomings.

We pronounce it doo-id, but it can also be pronounced like dude, which is by design :)

You can use DUIDs as IDs for user accounts and database records, in JWTs, as unique code entity (e.g. variable) names, and more. They're an ideal replacement for GUIDs that need to be used in web scenarios.

Key security features:

  • Uses the latest .NET cryptographic random number generator

  • More entropy than GUID v4 (128 bits vs 122 bits)

  • No embedded timestamp (reduces predictability and improves strength)

  • Self-contained; does not use any packages

Usage features:

  • High performance, with minimal allocations

  • 16 bytes in size; 22 characters as a string

  • Always starts with a letter (can be used as-is for programming language variable names)

  • URL-safe

  • Can be validated, parsed, and compared

  • Can be created from and converted to byte arrays

  • UTF-8 encoding support

  • JSON serialization support

  • TypeConverter support

  • Debug support (displays as string in the debugger)

NuGet

Yes, you can also find DUID on NuGet. Look for the package named fynydd.duid.

Usage

Similar to Guid.NewGuid(), you can generate a new DUID by calling the static NewDuid() method:

var duid = Duid.NewDuid();

This will produce a new DUID, for example: aZ3x9Kf8LmN2QvW1YbXcDe. There are a ton of overloads and extension methods for converting, validating, parsing, and comparing DUIDs.

Here are some examples:

// Represents an empty DUID (all zeros); "AAAAAAAAAAAAAAAAAAAAAA"
var emptyDuid = Duid.Empty;

// Get a string value for a DUID
var duid = Duid.NewDuid();
var duidString = duid.ToString();

if (Duid.TryParse("aZ3x9Kf8LmN2QvW1YbXcDe", out var parsedDuid))
{
    // Successfully parsed DUID
}

if (Duid.IsValidString("aZ3x9Kf8LmN2QvW1YbXcDe"))
{
    // Successfully validated
}

if (duid1 == duid2)
{
    // Comparison works as expected
}

There is also a JSON converter for System.Text.Json that provides seamless serialization and deserialization of DUIDs:

var options = new JsonSerializerOptions();
options.Converters.Add(new DuidJsonConverter());

var user = new User
{
    Id = Duid.NewDuid(),
    FirstName = "Turd",
    LastName = "Ferguson"
};

var json = JsonSerializer.Serialize(user, options);

/*
    json =
    {
        "id": "xw5x7Kf6LmN3QvW1YbXcc0",
        "firstName": "Turd",
        "lastName": "Ferguson"
    }
*/


Should your software project have more than one developer?

Michael Argentini, Monday, July 7, 2025

Picture this: your mission-critical software project is in full swing. Timelines are tight and deliverables are complex. Then, out of nowhere, your lead developer needs extended time off, or perhaps moves on to a new opportunity. On small projects, this is a headache—but on mid to large software projects it can be a full-blown crisis. But you were proactive. You had your software development partner cross-train a backup. Crisis averted.

On larger projects, the complexity of the codebase, the number of integrations, and the coordination required across teams make it essential to have more than one person deeply familiar with its inner workings. A backup developer isn’t just a safety net—they’re a critical part of maintaining project velocity and quality when team members are unavailable. With cross-training there’s always someone who can step up and keep the project moving, ensuring that timelines and business goals are met.

Plus, the benefits extend beyond risk management. Backup developers help foster a culture of collaboration and accountability. When multiple developers understand the system, it encourages better documentation, smarter code reviews, and provides a larger base of technical knowledge. Ultimately, for appropriately sized and mission critical software app and platform projects, investing in a backup developer will protect your investment. It’s peace of mind that your project won’t grind to a halt over a single absence.

But how?

Adding one or more backup developers doesn’t have to double your costs or slow down the team. Just be smart about it. Cross-training can be done efficiently by including backup developers in meetings, writing thorough documentation, and pairing them with leads during onboarding and major feature development. This approach ensures knowledge transfer without disrupting velocity or exceeding the budget.


Flavors of Blazor

Michael Argentini, Friday, June 27, 2025

Blazor is a powerful framework from Microsoft used for building interactive web UIs with C# instead of JavaScript. A key feature of Blazor is its flexibility in how applications are hosted and run. The choice of hosting model—Server, WebAssembly, Interactive Auto, or Hybrid—depends entirely on the specific needs of the application, such as scale/performance requirements, offline capabilities, and access to native device features.

But which flavor of Blazor should you use? Well, that depends...

Blazor Server

The Blazor Server hosting model is the easiest to set up and use. It runs your application on the server, and when a user interacts with the application, UI events are sent to the server over a real-time (SignalR) connection. The server processes these events, calculates the necessary UI changes, and sends only those small changes back to the client to update the display. This results in a very thin client and a fast initial load time, as almost no application code is downloaded to the browser.

Best reasons to use this hosting model:

  • internal business applications or other scenarios where a constant, low-latency connection to the server is guaranteed
  • applications that need direct access to server-side resources, databases, or protected services that shouldn't be exposed to the client
  • applications that don't have enterprise-scale traffic needs, or where the cost of hosting those resources is not prohibitive (e.g. regional instances for best performance)
  • when the development cycle "inner loop" must be as fast and efficient as possible

Blazor WebAssembly (WASM)

In contrast, Blazor WebAssembly runs your entire application directly in the web browser using a WebAssembly-based .NET runtime. The application's C# code, its dependencies, and the .NET runtime itself, are all downloaded to the client. Once downloaded, the application executes entirely on the user's machine, enabling full offline functionality and leveraging the client's processing power for a rich, near-native user experience.

Best reasons to use this hosting model:

  • for public-facing websites, progressive web apps (PWAs), and applications that require complex, desktop-like interactivity without constant server communication
  • when a larger initial download size and longer first load time are not a concern
  • when hosting cost is a concern; WebAssembly apps can be hosted on inexpensive file-based hosting platforms, like Amazon S3
  • if your web application needs to support large amounts of traffic or will service an international audience
  • when development cycle "inner loop" iteration time is not a concern

Blazor Interactive Auto

The Blazor interactive auto mode allows you to use both server and WebAssembly components in a single project, giving you precise control over how your app behaves.

Best reasons to use this hosting model:

  • applications that are ideal for server hosting but also have some user experiences that need higher performance or support larger audiences
  • when the complexity of configuring a WebAssembly project is not a concern
  • when development cycle "inner loop" iteration time is not a concern

Blazor Hybrid

Blazor Hybrid is a bit different: it's not used for building web applications. Instead, it allows web developers to use their skills to build mobile apps that run on devices at close to native speed. .NET MAUI is the core platform, which is native and cross-platform and normally uses XAML for coding user interfaces. When using Blazor Hybrid, however, you can also use Blazor web components alongside XAML or in place of it.

This model provides the best of both worlds: the ability to build a rich, cross-platform UI with web technologies while having full access to the native capabilities of the device, such as the file system, sensors, and notifications.

Blazor hybrid is the perfect solution for developers looking to create desktop and mobile applications that can share UI components and logic with an existing Blazor web application, or for new mobile app projects.


When microservices make sense

RJ Nader, Friday, June 20, 2025

Many organizations are eager to adopt microservices, sometimes before they even know if they need them. Knowing when they fit a need makes all the difference, and sometimes, not using them is the smarter move.

There are some cases where a microservice architecture is your best bet:

1. When you need to combine incompatible technologies

If your project has to support multiple technologies that don’t naturally work together, microservices are a natural fit. Take my experience with the Whitelist Sync Web project:

Originally, this project ran on a 100% .NET backend with a Vue frontend. Later, I migrated to a Node backend with a React frontend. However, I still needed to support SignalR—Microsoft’s real-time communication technology—because client applications in the field were dependent on it. The challenge? SignalR server-side hosting is only supported in C#. Node cannot host a SignalR hub.

Removing SignalR from the project wasn’t an option (unless I was willing to rewrite and redeploy all the client apps—which was out of scope). The solution was to create a separate SignalR microservice: a C# project dedicated to SignalR, communicating with the Node backend through JWT auth and REST endpoints. A reverse proxy routed /hubs/ requests to the SignalR service, while all other traffic hit the React app. The entire setup was managed using Docker Compose.

2. When extending legacy projects

Microservices can be helpful for extending existing applications. If you want to add new functionality using a different tech stack—or isolate new features for a big team—they let you do this without rewriting your monolith.

3. High availability and fault tolerance

Splitting your app into smaller, independently hosted pieces means a failure in one service won’t crash the entire application. Of course, you can build robust error handling into a monolith, but microservices can make fault isolation easier.

4. Better load balancing

Cloud providers offer load balancing for monoliths, but microservices can provide more granular scaling. Just keep in mind, if you don’t have heavy load or growth requirements, this might not be worth the extra complexity and cost.

5. Technology flexibility

Microservices let you mix and match tech: imagine a Node backend, a React frontend, and a Python microservice for AI features. Each part of your app can use the best tool for the job.

The downsides of microservices

While microservices have their place, they also come with significant downsides:

1. High cost

Running multiple services means more infrastructure, more devops, and more cloud spend. If your application has low demand, this cost is often unjustified. Starting new projects with a single stack keeps things cheaper and simpler.

2. Operational overhead

Multiple services means more to manage: logging, monitoring, orchestration (hello, Kubernetes), and maintenance. All of this adds to the operational burden.

3. Vendor lock-in

Using cloud-specific services like Azure Functions ties your app to one provider. Migrating later is possible, but few businesses want to refactor dozens of microservices just to escape rising costs.

4. Deployment complexity

Deploying a monolith is straightforward. Microservices require complex CI/CD pipelines and orchestration. Tools like Fynydd fdeploy can help, but they add yet another layer of infrastructure.

5. Development complexity

With more moving parts, it’s harder to add features, fix bugs, and onboard new team members.

6. Authentication challenges

Microservices make authentication harder. Instead of just handling user auth, you now need to manage service-to-service authentication, which can be complicated and error-prone.

When not to use microservices

Given all these costs, it’s clear: Start simple. For most projects, especially those with low load or a single technology stack, a monolith is the best starting point. Design your application with modularity and future growth in mind, so you can break it into microservices if you ever need to. But don’t jump into microservices unless you’re solving real problems that require them.

Further reading: You Don’t Need Microservices (itnext.io)


Have a chat with your data

Michael Argentini, Thursday, June 5, 2025

Tools like Google NotebookLM and custom generative AI services are fundamentally changing how users interact with information. We're seeing a transition from static reports and interfaces to dynamic chat-based tools that give users exactly what they need, and even things they didn't know they needed.

If you're not familiar with NotebookLM, it's a tool that allows you to provide your own documents (like PDF, text files, audio), and then chat with the data. You can even listen to an AI-generated podcast that explains all the information. For example, I had loaded a project with PDF documents containing the rule book, technical rules, and officials briefing information for USA Swimming, and was then able to get answers to questions like "how is a breaststroke turn judged?"

It was kinda magical.

We've been working with clients on permutations of this scenario for some time. For example, we partnered with a client in the life sciences space to build a chat-based tool that connects various third party API services with disparate information, providing account managers with a single source for helping their customers recommend products and services to ensure better health outcomes.

This is no small feat when the goal is a best-of-breed user experience (UX) like ChatGPT. It can involve multiple service providers like Microsoft Azure and Amazon Web Services, as well as various tools like cloud-based large language models (LLM), vector search, speech services, cloud storage, charting tools, location services, AI telemetry, and more. But when it's done right, the result is amazing. You can ask questions that span disciplines and contexts and see results you may not have ever seen before.
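The core retrieval idea behind these tools can be illustrated in a few lines. This is a toy sketch, not Fynydd's actual implementation: a real system uses an embedding model and a vector database, but here simple bag-of-words vectors and cosine similarity stand in for both.

```python
# Toy illustration of the retrieval step behind "chat with your data"
# tools. Real systems use learned embeddings and a vector store;
# bag-of-words counts and cosine similarity are stand-ins here.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Crude stand-in for an embedding model: word counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, documents: list[str], k: int = 1) -> list[str]:
    """Rank document chunks by similarity to the query."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Breaststroke turns require a two-hand touch.",
    "Freestyle may be swum in any style.",
    "Backstroke starts begin in the water.",
]
# The top-ranked chunk would then be passed to an LLM as context.
print(retrieve("how is a breaststroke turn judged?", docs)[0])
```

The hard part of a production build isn't this loop; it's everything around it, like chunking documents well, keeping the index fresh, and grounding the LLM's answer in the retrieved text.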

Most organizations can really benefit from exploring how generative AI can positively impact their offerings and give them a competitive advantage. Like we always say, it's not about the organizations that use AI, it's about the ones that don't.


Website builder service or custom website?

Michael Argentini · Friday, May 9, 2025

There is a world full of "build your own website" services that allow just about anyone to stand up a new website in a few hours. Even organizations can leverage the simplicity offered by these services to set up an online store, community, and more. Here are a few examples of why people typically choose these services.

  • Quick setup and time to market
  • Reasonable up-front pricing
  • Design templates
  • Integrated services, like shopping carts and email
  • Managed hosting

Sounds great! But as with everything in life, there are tradeoffs.

  • Quick setup and time to market means giving up control over things like your domain name, web app design, email provider, and more
  • Reasonable up-front pricing usually means a tiered pricing model with add-on pricing for essential features like a custom domain name, additional bandwidth, and increased storage
  • Design templates mean your web app will largely look like the many other web apps using the same service and may not match your vision; custom designs often require service-specific web development
  • Integrated services also means no choice over the provider of the service, which could be missing features you need
  • Managed hosting means scaling (growing) is significantly more expensive, network bandwidth caps can apply, and true customer and data ownership are dubious

Regardless, these services can be a great way for individuals and small organizations to bootstrap their web presence, and in many cases, you can happily continue to use the service for years.

But there are also long-term lock-in issues that can be more serious, potentially impeding your growth, for example:

  • You may contractually own your data, but extracting it to migrate to another platform is rarely practical, and sometimes not possible at all; they don't want you to leave
  • When the service changes (features, pricing, etc.) or if the service is purchased by another entity, you usually have no choice other than rolling with it, for better or worse
  • If the service shuts down, you're going to struggle to replace everything they offered to your visitors in a relatively short period of time
  • Most successful businesses will outgrow these services anyway, so you could be missing out on long-term savings

Custom websites

If the tradeoffs are too much to swallow, fear not! You can also go with a custom web app tailored specifically to your needs and budget. It can match your vision without compromises, and scaling can be managed more easily as your business or traffic grows.

So how do you get started? With a builder service, you first have to find one with the price and features you need, then create an account and dig into its control panel to start configuring your website. With a custom website, the first step is to find a web development partner you can rely on for advice and technical expertise, like Fynydd. Your partner can help gather your ideas, come up with a plan, and build your web app, all within your budget and timeline. They're usually experts in both new web app projects and migrations from other platforms and services. Most importantly, they fill the knowledge gap left by the "build your own website" service.

A web development partner will choose technologies that have a proven security track record. One way we do this is by consulting the CVE database, a publicly funded global resource for tracking common vulnerabilities and exposures. For example, a CVE search quickly reveals that WordPress has historically been a security nightmare.

Your development partner will help you with a design that matches your vision, a hosting service that meets your needs and budget, a security review, a backup plan and disaster recovery strategy, and more. When the time comes to grow your platform, they can help with that too. And throughout the journey you maintain full control over your brand, your website, your data, and your customers.


Supercharge offshore development

Michael Argentini · Wednesday, April 30, 2025

When it comes to offshore software development teams, managing quality and risk creates value. But you also need experienced leadership and oversight for long-term success. Simply adding offshore bodies to a project rarely works and has diminishing returns.

Here are some process tips for mitigating risk and getting the most value from an offshore team.

  • Also leverage an onshore development partner for leadership and critical systems design. They can create and direct strategy, ensure developers follow patterns, address compliance and security concerns, and perform code reviews to ensure quality. They can also write better code faster, which best suits critical systems development. This is what we do at Fynydd and it works.
  • If possible, your offshore team should mirror your operating hours. Otherwise communications, troubleshooting, and overall progress will lag. It can be beneficial to have expanded availability for handling off-hour requests, but that means the offshore team needs decision-making authority. Otherwise someone in the organization will also need to be available during those hours.
  • Try to get dedicated resources for the long term. When there are offshore staff changes, require that they fully train the replacement(s) before additional staff are brought in. It takes time! New developers, even when they are superstars, need to learn a platform's ins-and-outs before they can meaningfully contribute.
  • Be explicit about who is running the project, give them the appropriate decision-making authority, and enforce a workflow that puts them between ideas and action. Ideally this would be a lead developer from your onshore partner.
  • If the offshore team has novice developers or otherwise low-performers, make sure they are in a learning role and not expected to work on key infrastructure.
  • Perform code reviews. Bad or inefficient code should not be tolerated and is a learning experience that can make your offshore team better.
  • Rely on your lead development partner to facilitate communication. If you find it difficult to communicate with your offshore team, your lead development partner has experience in picking up the nuance, including technical jargon that's hard to understand in any language.

Avoiding the big problems

Some of the issues you'll encounter can be avoided by engaging with an onshore development partner. Here are some tips for keeping the app or service quality high and the risk low.

  • A proper architecture and coding patterns are critical to long-term success. Without a good evolving architecture and consistent coding patterns, maintenance is difficult, code readability suffers, and security vulnerabilities are harder to avoid.
  • Compliance is a bear, even with experienced developers. This can range from upholding organization brand standards, to complying with regional legal requirements (like GDPR), to avoiding copyright violations. There needs to be a focus on these concerns that yields appropriate strategies and resolutions on a consistent basis.
  • Bad code quality is a risk. It's not just about performance and user experience. Bad code could leak information or have vulnerabilities. It could allow bad actors to misuse your app or service. Worse yet, it could facilitate the abuse of your customers.
  • A focus on security is not optional. Properly securing an app or service requires a development team that not only has a security focus, but also the experience and awareness required to implement and maintain a solid security posture. The team members have to be vetted resources with no geopolitical encumbrances and a level of trust commensurate with the app or service in question. For example, bank or government clients may require background checks.
  • Maintaining intellectual capital is crucial. You invested time and money into building a knowledge base as well as an app or service. You need to ensure that the knowledge gained building your app or service will not vanish into the ether.


© 2026, Fynydd LLC / King of Prussia, Pennsylvania; United States / +1 855-439-6933
