How I Built FeedHive, a Real-Life Tech Stack for SaaS Products

Simon (@SimonHoiberg) is a super developer on Twitter. He inspires many people with his tweets on development, JavaScript/TypeScript, entrepreneurship, and much more. He's also the founder of FeedHive, a platform for growing and managing your social media presence.

As someone with a passion for development and open-source, he is very open to talking about the challenges, the decisions, and the stack behind FeedHive. He has also open-sourced some of the libraries used in their production code, for example, the popular typed Twitter library for Node.js.

That's enough for an intro. Next, Simon will guide you through the FeedHive tech stack and how he built it.

FeedHive Runs Serverless

Serverless means that your cloud provider takes care of provisioning, maintaining, and scaling the infrastructure that executes your backend code, freeing you from dealing with storage systems, databases, containers, clusters, or any of that.

When you’re bootstrapping a SaaS business, this is an attractive solution, given the pay-per-use pricing model associated with Serverless.

What does pay-per-use mean? It means, for example, that you pay per 100 ms of execution time of your cloud functions, per read/write on your database, or per read/put of files on your file storage.

This solution is effective in two ways. First and foremost, it automatically scales as users come in and demand goes up, but it also saves you money because you never pay for idle time.

Is this always the best and cheapest solution? No, but when you're starting out, and depending on your use case, it can be super cheap to run. For example, on AWS, I pay $0.08 per user per month, on average.

In practice, that translates to roughly 1 paying user out of every 62 users covering the entire platform cost (servers, storage, database, etc.).
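The back-of-the-envelope math behind that ratio is straightforward. Assuming a paid plan priced at $5/month (an illustrative number, not FeedHive's actual pricing), one subscription covers the infrastructure cost of about 62 users:

```typescript
// Back-of-the-envelope math for the numbers above.
const costPerUserPerMonth = 0.08; // average AWS cost per user, as stated above
const paidPlanPrice = 5.0;        // hypothetical monthly subscription price

// How many users' infrastructure cost does a single paying user cover?
const usersCoveredByOnePaidUser = paidPlanPrice / costPerUserPerMonth;

console.log(Math.floor(usersCoveredByOnePaidUser)); // 62
```

The exact ratio shifts with your plan price, but the point stands: with pay-per-use billing, a tiny conversion rate can keep the platform cost-neutral.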

But what does it effectively mean to run serverless? Let's break it down into the individual components. Keep in mind that the services we use are all provided by AWS, but if you prefer GCP or Azure, each of them offers equivalent services.


The Database

For our database solution, we wanted to follow the same principle discussed above: a serverless, fully managed solution that can scale to our needs while remaining cost-effective. We decided to move forward with DynamoDB.

DynamoDB is a key-value and document database that is insanely fast at any scale. It's fully managed by Amazon, and as you would expect, it integrates with other services within AWS, like security, backups, caching, etc.

It's schemaless, so beyond the key attributes, it doesn't rely on a predefined schema. In that respect, it combines a key-value store with a document-based database such as MongoDB.
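To make the schemaless idea concrete, here is a sketch of what an item in a single-table design could look like for a post-scheduling product. Only the key attributes are fixed; everything else is free-form. The key names (`pk`/`sk`) and entity shapes are illustrative, not FeedHive's actual table design:

```typescript
// Only the keys are mandatory; any other attribute can vary item by item.
interface ScheduledPostItem {
  pk: string; // partition key, e.g. "USER#<id>"
  sk: string; // sort key, e.g. "POST#<timestamp>"
  [attribute: string]: unknown; // no predefined schema for the rest
}

// Build an item using the common "entity prefix" key pattern, which lets one
// table hold many entity types and query all posts of a user in one request.
function buildScheduledPostItem(
  userId: string,
  publishAt: string,
  content: string
): ScheduledPostItem {
  return {
    pk: `USER#${userId}`,
    sk: `POST#${publishAt}`,
    content,
    status: "scheduled",
  };
}

const item = buildScheduledPostItem("42", "2021-05-01T10:00:00Z", "Hello, Twitter!");
console.log(item.pk); // "USER#42"
```

With keys shaped like this, fetching "all posts for user 42" becomes a single query on `pk = "USER#42"` with an `sk` prefix of `POST#`.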

API Layer

When you think about APIs, there are basically three options: you can build a REST API, a GraphQL API, or a healthy combination of both.

For FeedHive, we use GraphQL, as we want the flexibility and adaptability that comes from being able to define the data and its relations right in the requests (queries).

AWS has a solution called AppSync, which is basically a fully managed GraphQL API that allows developers to securely access, write, delete, and combine data from multiple data sources.

One important data source integration for AppSync is DynamoDB: AppSync can resolve queries against a DynamoDB table automatically, without you writing a custom GraphQL resolver.

Finally, when we say secure: AppSync supports multiple authentication methods, including API keys, Auth0 (via OpenID Connect), and AWS IAM, among others.
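Under the hood, an AppSync GraphQL call is just an HTTPS POST with a JSON body. The sketch below builds such a request for the API-key auth mode (which uses the `x-api-key` header); the endpoint, query, and field names are made up for illustration:

```typescript
// Shape of a plain HTTP request to a GraphQL endpoint.
interface GraphQLRequest {
  url: string;
  headers: Record<string, string>;
  body: string;
}

// Build a POST request for an AppSync endpoint using API-key authentication.
function buildAppSyncRequest(
  endpoint: string,
  apiKey: string,
  query: string,
  variables: Record<string, unknown>
): GraphQLRequest {
  return {
    url: endpoint,
    headers: {
      "Content-Type": "application/json",
      "x-api-key": apiKey, // AppSync's API-key auth header
    },
    body: JSON.stringify({ query, variables }),
  };
}

const request = buildAppSyncRequest(
  "https://example.appsync-api.us-east-1.amazonaws.com/graphql",
  "da2-fakeapikey",
  `query GetPost($id: ID!) { getPost(id: $id) { id content } }`,
  { id: "123" }
);
```

Passing `request.url`, `request.headers`, and `request.body` to any HTTP client is all it takes; client libraries like Apollo (covered below) simply wrap this with caching and state on top.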

Lambda Functions

The FeedHive backend is powered entirely by TypeScript, executed in a Node.js environment running on individual Lambda functions that live in the cloud.

This is an extremely powerful alternative to managing and provisioning servers for your backend.

What do we use Lambda functions for? Building business logic, creating custom resolvers for GraphQL, and so on. Even though AppSync integrates directly with DynamoDB, some operations require custom code, and Lambda is there to help.
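A custom resolver of this kind is just a function that receives the GraphQL field name and its arguments and returns the resolved data. The event shape, field, and argument names below are a hypothetical sketch, not FeedHive's actual code:

```typescript
// Minimal sketch of a Lambda function acting as a custom GraphQL resolver.
// The event shape here is an assumption; the real payload depends on how the
// resolver mapping is configured.
interface ResolverEvent {
  fieldName: string;
  arguments: Record<string, unknown>;
}

export const handler = async (event: ResolverEvent) => {
  switch (event.fieldName) {
    case "schedulePost": {
      const { content, publishAt } = event.arguments as {
        content: string;
        publishAt: string;
      };
      // Business logic that a direct DynamoDB integration can't express
      // (validation, calling other services, etc.) would live here.
      return { id: "generated-id", content, publishAt, status: "scheduled" };
    }
    default:
      throw new Error(`Unknown field: ${event.fieldName}`);
  }
};
```

Because the handler is a plain async function, it is trivially unit-testable by invoking it with a mock event, which fits nicely with the TDD approach described later.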

User Authentication

For FeedHive, you log in using Twitter, and we federate the login through AWS Cognito.

Cognito makes it possible for users to log in through a social identity provider (like Twitter), and it handles user authorization, permissions, and even user data.

Thanks to Cognito, there's no need to maintain user tables, user passwords, or any of that. It handles user data and authentication, and maps users to roles that grant access to the right resources on AWS.

The Front End

So far we've covered everything running behind the scenes, the backend of our application. Let's now focus on the frontend.

The FeedHive UI is built using React + TypeScript + Recoil and hosted in AWS using S3 + CloudFront, a popular solution for static sites. The combination of S3 + CloudFront makes it super easy to deploy our application and to distribute it to users all over the world.
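Recoil models client state as small, composable units. The sketch below shows the two core building blocks, an atom (writable state) and a selector (derived state); the state names are illustrative, not FeedHive's actual state:

```typescript
// Illustrative Recoil state for a post-composer UI.
import { atom, selector } from "recoil";

// An atom is a piece of writable global state; components that read it
// re-render automatically when it changes.
export const draftPostsState = atom<string[]>({
  key: "draftPosts",
  default: [],
});

// A selector derives state from atoms; it is recomputed whenever
// draftPostsState changes.
export const draftCountState = selector<number>({
  key: "draftCount",
  get: ({ get }) => get(draftPostsState).length,
});
```

Inside components, these are consumed with hooks like `useRecoilState(draftPostsState)` and `useRecoilValue(draftCountState)`, which keeps state logic out of the component tree.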

Client-Server Communication

We mentioned that FeedHive runs on GraphQL and AWS, so naturally, it uses the Amplify library from AWS.

Amplify enables us to integrate a lot of the things we have covered so far. Most importantly, it makes it really easy to manage authentication using Cognito.
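In practice, that looks roughly like the sketch below: configure Amplify with the Cognito user pool and hosted UI details, then trigger the federated flow. All the configuration values are placeholders, and whether Twitter is wired up as a social or a custom OIDC provider depends on the user pool setup, so treat the provider name as an assumption:

```typescript
// Hypothetical Amplify + Cognito setup; every value below is a placeholder.
import { Amplify, Auth } from "aws-amplify";

Amplify.configure({
  Auth: {
    region: "us-east-1",
    userPoolId: "us-east-1_example",
    userPoolWebClientId: "exampleclientid",
    oauth: {
      domain: "example.auth.us-east-1.amazoncognito.com",
      scope: ["openid", "email"],
      redirectSignIn: "https://app.example.com/",
      redirectSignOut: "https://app.example.com/",
      responseType: "code",
    },
  },
});

// Redirects the browser to Cognito's hosted UI, which hands the user off to
// the configured identity provider and back with a session.
export const signInWithTwitter = () =>
  Auth.federatedSignIn({ customProvider: "Twitter" });
```

After the redirect completes, `Auth.currentAuthenticatedUser()` gives you the signed-in user, with no password handling anywhere in your own code.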

The only place we felt the need to deviate from Amplify is the library used to make calls to the GraphQL API. Even though Amplify provides support for AppSync, its functionality is rather limited.

Instead, I chose to use the Apollo Client from Apollo GraphQL, which provides a more feature-rich library for working with GraphQL, bringing in features like caching and state management.
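A minimal Apollo Client setup looks like this; the endpoint URL and auth header are placeholders, and the query fields are invented for the example:

```typescript
import { ApolloClient, InMemoryCache, HttpLink, gql } from "@apollo/client";

const client = new ApolloClient({
  link: new HttpLink({
    uri: "https://example.appsync-api.us-east-1.amazonaws.com/graphql",
    headers: { "x-api-key": "da2-fakeapikey" }, // placeholder auth
  }),
  // The normalized in-memory cache is one of the main reasons to pick Apollo.
  cache: new InMemoryCache(),
});

const GET_POSTS = gql`
  query GetPosts {
    posts {
      id
      content
    }
  }
`;

// With the default cache-first fetch policy, repeating this query serves the
// result from the cache instead of hitting the network again.
client.query({ query: GET_POSTS }).then(({ data }) => console.log(data));
```

In React components, the same query is usually consumed through the `useQuery` hook, which gives you loading and error states for free.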

Testing the Application

Juan says it all the time: "Testing, testing, testing…" On my side, I'm also a huge fan of TDD (Test-Driven Development). I use TDD whenever the situation allows, and I used it for FeedHive.

TDD brings resilience to the application. We cover both the frontend and backend with tests written and run with Jest, and we make sure to have both unit and integration tests for each module, plus tests for the end-to-end solution.
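To give a flavor of it, here is the kind of unit test we write with Jest. The function under test, a scheduling-validation helper, is hypothetical and inlined for the example:

```typescript
// Hypothetical helper: posts can only be scheduled for a future time.
function isValidScheduleTime(publishAt: Date, now: Date): boolean {
  return publishAt.getTime() > now.getTime();
}

describe("isValidScheduleTime", () => {
  it("accepts a time in the future", () => {
    const now = new Date("2021-05-01T10:00:00Z");
    expect(isValidScheduleTime(new Date("2021-05-01T11:00:00Z"), now)).toBe(true);
  });

  it("rejects a time in the past", () => {
    const now = new Date("2021-05-01T10:00:00Z");
    expect(isValidScheduleTime(new Date("2021-05-01T09:00:00Z"), now)).toBe(false);
  });
});
```

With TDD, tests like these are written first, the helper is implemented until they pass, and the suite then doubles as living documentation of the business rules.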

Tests are also run automatically, but let’s focus on that when we talk next about deployment.


Deployment

We try to automate deployments as much as possible, so we use GitHub Actions to run the pipelines, allowing us to define the entire application and deployment specs as code using YAML or JSON.

As part of the pipeline, we run tests, including end-to-end tests powered by Cypress .

To make it easy to handle environments, we utilize the Serverless Framework, a tool that enables us to develop, manage, package, and deploy Lambda functions along with all the resources they need to run, with absolute ease!

It allows us to use infrastructure-as-code for the entire cloud infrastructure.
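A stripped-down `serverless.yml` might look like the sketch below. The service and function names are invented, not FeedHive's actual configuration; the `stage` variable is what makes per-environment deploys (`--stage prod` vs. the `dev` default) easy:

```yaml
# Hypothetical minimal Serverless Framework config.
service: feedhive-api # illustrative service name

provider:
  name: aws
  runtime: nodejs14.x
  region: us-east-1
  stage: ${opt:stage, 'dev'} # deploy per environment: serverless deploy --stage prod

functions:
  schedulePost:
    handler: src/handlers/schedulePost.handler # compiled TypeScript handler
    events:
      - http:
          path: posts
          method: post
```

Running `serverless deploy` from the CI pipeline packages the functions and provisions everything they need in a single step, which is exactly the infrastructure-as-code workflow described above.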


Conclusion

When building a SaaS product, you need to consider multiple factors and make many decisions along the way. I hope this post helps you get some ideas for what your ideal tech stack could look like.

The biggest takeaway from the tech stack I've chosen for FeedHive is that it's serverless!

  • No need to manage servers.
  • Pay-per-use instead of paying for uptime.
  • Inherently scalable.
  • Cost-effective and energy-efficient.

Good luck with architecting your next SaaS Product 🚀

If you liked what you saw, please support my work!

Simon Høiberg - Author @ Live Code Stream

My name is Simon, I’m a Full Stack Software Engineer from Copenhagen, Denmark.

During the past 7 years of working with software development, I've gained insight into a wide range of technical domains, from frontend and backend development to system architecture and cloud infrastructure.

I’ve been heavily invested in the JavaScript ecosystem and have in-depth knowledge and experience with Node, React, GraphQL, and Express.

Additionally, I have a great interest in DevOps and build systems. I am well-versed in the universe of Amazon Web Services and have set up countless Webpack configurations, including large-scale setups and tailor-made frameworks.

I’m also an active contributor to the open-source community and am maintaining multiple open-source projects on GitHub.

Likewise, I’m actively writing articles on Medium and conducting workshops.