Quickstart
Hextacy is a library aimed at providing a flexible infrastructure for writing backend applications. It provides out-of-the-box implementations so you can focus on the business aspects of your application without reinventing the wheel. Hextacy is a little opinionated, but it tries to be as unintrusive as possible. If its implementations do not suit your needs, it provides you with a set of traits so you can always roll your own.
This booklet serves to explain why hextacy is built the way it is, as well as provide some context for the traits it exposes.
CLI tool coming soon™
Design
In order to understand why hextacy is built the way it is, we first need to understand how its pieces tie together to provide a flexible project infrastructure.
Hextacy is based on hexagonal architecture, also known as ports and adapters, layered, or onion architecture. Plenty of great articles cover it in depth.
At the core of this kind of architectural design is the business layer. The business layer represents the problems the application is designed to solve and as such it largely depends on the requirements. It contains the entity definitions the application will work with. If you take a look at some of the diagrams that are used to represent these architectures, you will always see the business layer in the middle (the core). Each subsequent layer will depend on the previous one and you will usually see arrows pointing from the outermost layers to the inner ones.
This follows the D of SOLID - dependency inversion. For example, at the outer layers of the application is the UI, which depends on the API of various services the application exposes, which depend on the domain entities/services of the business layer. Since the business layer is in the middle, it contains no dependencies and is standalone. As such, the core layer of the application is self-sufficient and should build successfully on its own, even when no concrete implementations are plugged into it.
When we model the application core, we must provide it access to domain entities without coupling it to any concrete way of obtaining those entities. We do so by defining the core logic through behaviour - in rust, we define this behaviour through traits.
As an example, Repositories define methods through which the application gets access to its Entities, and Adapters implement them.
A repository contains no implementation details about a concrete persistence backend. It is simply an interface which adapters implement with their specific logic for obtaining the underlying model.
When business level services need access to domain entities, they couple themselves to repositories. By coupling the services only to the repositories, we gain the flexibility of swapping various implementations without ever touching the core logic.
Even though we've talked only about repositories, this paradigm will be present in every aspect of our application. To name a few more examples, our application could contain some caching requirements and some kind of notification mechanism for when certain events occur. We also want to design those in a manner where we hide away the implementations of those mechanisms from the core logic.
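For instance, a caching contract can be expressed in exactly the same way - purely as behaviour the core needs, with the concrete cache plugged in later. A purely illustrative sketch (the trait and `CacheError` are made up for this example):

```rust
// Illustrative only - the core describes *what* it needs, never *how* it is provided.
// `CacheError` is a hypothetical error type for this sketch.
#[async_trait]
pub trait Cache<K, V> {
    async fn get(&self, key: &K) -> Result<Option<V>, CacheError>;
    async fn set(&self, key: K, value: V) -> Result<(), CacheError>;
}
```

Whether the values end up in redis, in memory, or nowhere at all is a detail the core never learns about.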
In addition to having the internals decoupled, we are also decoupled from any potential interactors the application could use to access the core. You can think of interactors as the front-end to the application - since we have a standalone core, it becomes irrelevant whether we use HTTP, a desktop program or a CLI to access it.
Keeping all of this in mind, we will next explore how we can implement these patterns in rust.
Core
The next few sections of the booklet will provide some examples of how to model a decoupled application and how hextacy can be utilised to efficiently write application code while hiding away rust's unavoidable boilerplate.
Requirements
Let's imagine we are tasked with creating an authentication service. We choose an auth service because it is simple enough for everyone to understand while still being able to highlight the importance of a layered architecture. For brevity's sake, we will keep the service very simple and we will not provide a `logout` method for user retention. After an intense brainstorming session we have determined the following:
The service must:
- expose 2 methods: `register` and `login`.
- be able to work with 2 models (entities): `User` and `Session`.
- notify any interested third parties via a message broker when a user registers.
Implementation
For brevity, we will not be writing out the application plumbing (imports, errors, etc.) because we want to focus solely on the design. Full examples with plumbing can be viewed in the examples directory.
Models (Entities)
First things first, we have to define the application models:
```rust
pub struct User {
    id: Uuid,
    username: String,
    password: String,
    created_at: NaiveDateTime, // from chrono
}

pub struct Session {
    id: Uuid,
    user_id: Uuid,
    created_at: NaiveDateTime,
    expires_at: NaiveDateTime,
}
```
These models must be kept separate from ORM-specific entities. Any entity obtained from an ORM must be convertible to its respective application model. Here the `From` trait is our friend, but we will omit the implementation as it is straightforward.
ORM entities are distinct from our application entities (and confusingly share the same name), so from now on we will refer to the latter as application models. An entity is a concept from domain driven design representing a data structure with semantic meaning to our application. Since we are dealing with authentication, the `User` and `Session` structs are the application entities, as they represent core concepts from the real world. Each entity (application model) must be uniquely identifiable - as such, the ID generation for those entities must be in the hands of our app rather than the underlying persistence implementation.
Repository
We now define a set of interactions with a persistence layer. You can think of repositories as contracts an adapter must fulfill for it to be injected into a service.
```rust
#[async_trait]
pub trait UserRepository<C> {
    async fn get_by_username(
        &self,
        conn: &mut C,
        username: &str,
    ) -> Result<Option<User>, AdapterError>;

    async fn create(
        &self,
        conn: &mut C,
        username: &str,
        password: &str,
    ) -> Result<User, AdapterError>;
}

#[async_trait]
pub trait SessionRepository<C> {
    async fn get_valid_by_id(
        &self,
        conn: &mut C,
        id: Uuid,
    ) -> Result<Option<Session>, AdapterError>;

    async fn create(
        &self,
        conn: &mut C,
        user: &User,
        expires: bool,
    ) -> Result<Session, AdapterError>;
}
```
The service will now be able to utilise these definitions and in doing so won't be coupled to any particular implementation. If you're wondering why the `C` is there - we could theoretically design a repository with no generics, but that would introduce problems later down the line when we stray off the happy path.
Service
We now define the core authentication service struct. For the time being we will disregard the message broker requirement and focus solely on the first 2.
```rust
pub struct Authentication<D, UR, SR> {
    driver: D,
    user_repo: UR,
    session_repo: SR,
}
```
Since we do not know which adapters the service will be instantiated with, we must define it in terms of generics. Another option would be to define the `*_repo` fields using trait objects, i.e. `Box<dyn UserRepository<C>>`, but then we would have to introduce another generic for the connection, namely `C`, which arguably does not help us when we enter generics hell in the next step when defining the core functionality.
We now define the `login` method.
```rust
use hextacy::Driver;

#[async_trait]
impl<D, UR, SR> Authentication<D, UR, SR>
where
    D: Driver + Send + Sync,
    D::Connection: Send,
    UR: UserRepository<D::Connection> + Send + Sync,
    SR: SessionRepository<D::Connection> + Send + Sync,
{
    async fn login(
        &self,
        username: &str,
        password: &str,
        remember: bool,
    ) -> AppResult<Session> {
        let mut conn = self.driver.connect().await?;

        let user = match self.user_repo.get_by_username(&mut conn, username).await {
            Ok(Some(user)) => user,
            Ok(None) => return Err(AuthenticationError::InvalidCredentials.into()),
            Err(e) => return Err(e.into()),
        };

        let valid = hextacy::crypto::bcrypt_verify(password, &user.password)?;
        if !valid {
            return Err(AuthenticationError::InvalidCredentials.into());
        }

        let session = self
            .session_repo
            .create(&mut conn, &user, !remember)
            .await?;

        Ok(session)
    }
}
```
In the first circle of generics hell we can observe the famous Send and Sync bounds from the async rust habitat...
In the impl block's definition, we introduced the necessary generics for the service and we've bound those generics to the traits we want the service to use. We are essentially saying to the compiler: "The `Authentication` struct can use the `login` method if and only if its `driver` field implements `Driver` and its `*_repo` fields can work on the connection obtained from that driver".
The `Driver` trait is a completely generic trait that exposes one method - `connect`. It is literally just:
```rust
#[async_trait]
pub trait Driver {
    type Connection;

    async fn connect(&self) -> Result<Self::Connection, DriverError>;
}
```
We need this trait because we've defined our repository to take in a generic `C`, and now we can obtain that `C` from the driver. We still don't know which connection that will be - this is the whole point of the `Driver` trait and is how our service still remains oblivious to the adapter it will use.
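To make this a bit more tangible, here's what a trivial implementation could look like - an imaginary in-memory driver whose connection is just a unit struct. This is purely illustrative (the names are made up), but it shows how little a driver actually has to do:

```rust
// Purely illustrative - an in-memory "database" whose connection carries no state.
pub struct InMemoryDriver;

pub struct InMemoryConnection;

#[async_trait]
impl Driver for InMemoryDriver {
    type Connection = InMemoryConnection;

    async fn connect(&self) -> Result<Self::Connection, DriverError> {
        // Nothing to actually connect to - just hand out a connection handle.
        Ok(InMemoryConnection)
    }
}
```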
Because the generics are bound to repositories we get access to the necessary repository methods and can get a hold of our application models. So far, no implementation details are exposed to the service. The only thing the service is aware of is that it can create some connection and use that connection for its repositories.
The real beauty of using a driver is in the next step, when we define our `register` method.
```rust
// Same impl block as for the `login` method

async fn register(&self, username: &str, password: &str) -> AppResult<Session> {
    let mut conn = self.driver.connect().await?;

    match self.user_repo.get_by_username(&mut conn, username).await {
        Ok(None) => {}
        Ok(Some(_)) => return Err(AuthenticationError::UsernameTaken.into()),
        Err(e) => return Err(e.into()),
    };

    let hashed = hextacy::crypto::bcrypt_hash(password, 10)?;

    let user = self.user_repo.create(&mut conn, username, &hashed).await?;
    let session = self.session_repo.create(&mut conn, &user, true).await?;

    Ok(session)
}
```
...but this just looks like the login method, what's up?
We now stray from the happy path.
Transactions
Imagine the above `session_repo.create` call failed and the function returned an error. A user would still be created, but they would receive no session and they wouldn't be granted application access.
This might not be a big deal for our simple auth service since the user could just login and continue on with their life, but imagine things are not so simple.
Imagine we have to execute multiple state changes to multiple repositories. When there are multiple pending state changes, we want to persist those changes only if all of them succeed, and conversely we want to revert all changes if any of them fail. For this we need transactions. In order to use transactions, we must devise a way for our driver, specifically its connection, to allow us to perform atomic queries with it. Most connections/db clients provide this out of the box with 3 simple methods:
- `start_transaction`
- `commit_transaction`
- `rollback_transaction`
For this purpose, hextacy exposes the `Atomic` trait, which provides this functionality on any generic connection.
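Judging by how it's used in the rest of this chapter, the trait has roughly the following shape - a sketch inferred from usage, so the exact receiver and error types may differ from hextacy's actual definition:

```rust
// Rough sketch of the shape implied by the usage below - the error type and
// exact signatures are assumptions.
#[async_trait]
pub trait Atomic {
    type TransactionResult;

    async fn start_transaction(self) -> Result<Self::TransactionResult, DriverError>;
    async fn commit_transaction(tx: Self::TransactionResult) -> Result<(), DriverError>;
    async fn abort_transaction(tx: Self::TransactionResult) -> Result<(), DriverError>;
}
```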
Because transactions usually operate on the same connection - i.e. queries on a connection that started a transaction will all be executed within that transaction's context - we get the answer to the age-old question of "Why put the `C` in the repository?". If our repository methods did not take in a `C`, we would not be able to pass a transaction through multiple repository calls.
We now update the register method to support transactions and isolate the creation of users and sessions to a neat little function. `//` marks lines added/changed.
```rust
use hextacy::{Atomic, Driver};

#[async_trait]
impl<D, UR, SR> Authentication<D, UR, SR>
where
    D: Driver + Send + Sync,
    D::Connection: Atomic + Send, //
    UR: UserRepository<D::Connection>
        + UserRepository<<D::Connection as Atomic>::TransactionResult> //
        + Send
        + Sync,
    SR: SessionRepository<D::Connection>
        + SessionRepository<<D::Connection as Atomic>::TransactionResult> //
        + Send
        + Sync,
{
    pub async fn register(&self, username: &str, password: &str) -> AppResult<Session> {
        let mut conn = self.driver.connect().await?;

        match self.user_repo.get_by_username(&mut conn, username).await {
            Ok(None) => {}
            Ok(Some(_)) => return Err(AuthenticationError::UsernameTaken.into()),
            Err(e) => return Err(e.into()),
        };

        let hashed = hextacy::crypto::bcrypt_hash(password, 10)?;

        let mut tx = conn.start_transaction().await?;

        match self //
            .create_user_and_session(&mut tx, username, &hashed)
            .await
        {
            Ok(session) => {
                <D::Connection as Atomic>::commit_transaction(tx).await?;
                Ok(session)
            }
            Err(e) => {
                <D::Connection as Atomic>::abort_transaction(tx).await?;
                Err(e)
            }
        }
    }

    pub async fn create_user_and_session( //
        &self,
        tx: &mut <D::Connection as Atomic>::TransactionResult,
        username: &str,
        password: &str,
    ) -> AppResult<Session> {
        let user = self.user_repo.create(tx, username, password).await?;
        let session = self.session_repo.create(tx, &user, true).await?;

        Ok(session)
    }
}
```
...and in the 9th circle of generics hell we can observe the impenetrable wall of ultimate bounds
I know, I know - who in their right mind would want to write all of this out? Our service has only 2 repositories and already half of our file is noisy generics. While we are reaping the benefit of having atomic queries we've stumbled upon another problem - boilerplate. We'll figure that one out in the next section, but first let's focus on how the code differs from our original implementation.
Now, before we start with the state changes in our database, we start a transaction. This is possible because we've bound the driver's connection to `Atomic`. When we get the result of `create_user_and_session`, we make sure to perform the necessary action on the transaction, ensuring the changes are only committed if everything was successful. This is where rust absolutely shines, because we have total control over each of our interactions.
One other thing to note about this approach is encapsulation. Since the service is now responsible for obtaining connections, one could argue that the driver does not belong in the service implementation logic, since it is doing what is supposedly the repository's job. Repositories can be designed with no generics, as stated previously, and this would allow the service to completely remove the driver from its definition. This is a completely valid decision if one does not need atomicity in their queries, and it makes defining services with `Box<dyn Repository>` a great option. On the other hand, when we need transactions, the service always has the necessary context to reason about whether or not a transaction should succeed, so that decision is best left to the service - in which case the `C` is unavoidable.
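For illustration, such a connection-less setup could look roughly like this - a sketch where each adapter owns its own driver internally, so the service never sees a connection type:

```rust
// Sketch of the connection-less alternative. The adapter obtains and manages
// its own connections internally; the service only sees behaviour.
#[async_trait]
pub trait UserRepository {
    async fn get_by_username(&self, username: &str) -> Result<Option<User>, AdapterError>;
    async fn create(&self, username: &str, password: &str) -> Result<User, AdapterError>;
}

pub struct Authentication {
    // With no `C` to be generic over, trait objects work just fine.
    user_repo: Box<dyn UserRepository + Send + Sync>,
}
```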
In the next section we'll tear down the wall of generics and streamline the process of writing services using hextacy.
Core ⬡
If we take a look at the last impl block in the previous section, we can notice a pattern. We see the 2 repositories pretty much have the same driver bounds and everything has our beloved `Send` bound. If we were to add more, the pattern would repeat. Fortunately, rust provides us with excellent tooling to eliminate hand-written repetition - macros! You know, those things you use to annotate your structs to print them to the terminal and stuff.
```rust
use hextacy::{component, transaction};

#[component(
    use D as driver,
    use UserRepo, SessionRepo
)]
#[derive(Debug, Clone)]
pub struct Authentication {}

#[component(
    use D:Atomic for
    UR: UserRepository,
    SR: SessionRepository,
)]
impl Authentication {
    pub async fn register(&self, username: &str, password: &str) -> AppResult<Session> {
        let mut conn = self.driver.connect().await?;

        match self.user_repo.get_by_username(&mut conn, username).await {
            Ok(None) => {}
            Ok(Some(_)) => return Err(AuthenticationError::UsernameTaken.into()),
            Err(e) => return Err(e.into()),
        };

        let hashed = hextacy::crypto::bcrypt_hash(password, 10)?;

        let session: Session = transaction!(
            conn: D => {
                let user = self.user_repo.create(&mut conn, username, &hashed).await?;
                let session = self.session_repo.create(&mut conn, &user, true).await?;
                Ok(session)
            }
        )?;

        Ok(session)
    }
}
```
Ain't it neat?
Now that we've seen what a decoupled service looks like in 'vanilla' rust, we can dive into the `component` and `transaction` macros. The macros create the exact same code we had to write by hand in the last part of the previous section.
The first invocation of the `component` macro on the struct definition creates a completely generic struct whose fields are exactly the same as in the hand-written implementation (PascalCase gets transformed into snake_case). For convenience, it also receives an associated `new` function.
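In other words, the first invocation generates roughly the following - the same struct we wrote by hand, plus a constructor (a sketch of the resulting shape, not the literal expansion):

```rust
// Roughly what the struct-level `component` invocation produces (sketch).
pub struct Authentication<D, UserRepo, SessionRepo> {
    driver: D,
    user_repo: UserRepo,
    session_repo: SessionRepo,
}

impl<D, UserRepo, SessionRepo> Authentication<D, UserRepo, SessionRepo> {
    pub fn new(driver: D, user_repo: UserRepo, session_repo: SessionRepo) -> Self {
        Self {
            driver,
            user_repo,
            session_repo,
        }
    }
}
```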
The second invocation takes the annotated impl block and 'injects' all the necessary generics and binds them to their respective types. This macro gives us a simple and concise way of specifying the repository components this service will use.
The `transaction` macro allows us to easily write atomic queries without having to match the result every time. It takes in a connection (the variable `conn` in our case) and uses it to start a transaction before running whatever is inside the block. The block must return a `Result<T>` to be usable in the macro. Because we have the `AppResult` type, which is just `Result<T, AppError>`, we can use `thiserror` to easily create the `From` implementations for our global `AppError` and question everything where applicable. If any operation fails, the block returns an error and the transaction is aborted.
Another cool thing about the `component` macro is that it can be used on structs with existing fields and impl blocks. To demonstrate, we'll add our final requirement - the message broker.
```rust
use hextacy::{component, transaction, queue::Publisher}; //

#[derive(Debug, Serialize)] //
pub struct UserRegisteredEvent {
    id: Uuid,
    username: String,
}

#[component(
    use D as driver,
    use UserRepo, SessionRepo, Publisher //
)]
#[derive(Debug, Clone)]
pub struct Authentication<Existing> { //
    e: Existing, // Just to demonstrate
    foo: usize, //
}

#[component(
    use D:Atomic for
    UR: UserRepository,
    SR: SessionRepository,
)]
impl<P, E> Authentication<P, E> //
where
    P: Producer, //
    E: Debug // Ordering matters here, existing stuff goes after the macro stuff
{
    pub async fn register(&self, username: &str, password: &str) -> AppResult<Session> {
        let mut conn = self.driver.connect().await?;

        match self.user_repo.get_by_username(&mut conn, username).await {
            Ok(None) => {}
            Ok(Some(_)) => return Err(AuthenticationError::UsernameTaken.into()),
            Err(e) => return Err(e.into()),
        };

        let hashed = hextacy::crypto::bcrypt_hash(password, 10)?;

        let session: Session = transaction!(
            conn: D => {
                let user = self.user_repo.create(&mut conn, username, &hashed).await?;
                let session = self.session_repo.create(&mut conn, &user, true).await?;

                self.publisher //
                    .publish(UserRegisteredEvent {
                        id: user.id,
                        username: user.username,
                    })
                    .await?;

                Ok(session)
            }
        )?;

        Ok(session)
    }
}
```
We've added some existing generics to the struct and it still works! The ordering is important though - we have to keep in mind that existing generics, i.e. generics outside the `component` macro, are always the last items in the struct's generic list.
We've also added a publisher through the macro (we could've added it explicitly, but it's more concise with `component`) and in the impl block we've bound it to `Producer`, which enables us to publish any struct that can be serialized. Do note that if the publishing fails, neither of the preceding 2 state changes is applied. The service doesn't know where it'll be publishing, but that is not its concern and is up to the implementation.
And that would be the end of our core logic - we've met the extreme requirements posed on us and designed a service containing only business logic, albeit not a very complex one. Most importantly, we haven't leaked any implementation details into the service. Instead, we've bound its generic parameters to contracts which concrete instances must fulfil in order for the service to be constructed. Traits rule!
Now we actually need to get the thing running, which is what we'll be exploring in the next section.
Infrastructure
So far we have only been dealing with behaviour; now it's time to implement that behaviour on concrete units. There are 2 main pieces of infrastructure our application is missing: the interaction and the plumbing.
Since we all know what an HTTP controller is, we'll be creating one with `axum` for the interaction. We choose HTTP because most people are familiar with it and it's the simplest to set up, though we could've chosen anything because nothing in the service specifies how it should be interacted with. Honourable mentions include a desktop or CLI app.
For the plumbing, i.e. the database and queue implementations, we'll be using postgres and redis with pubsub. We'll use them because, again, they are familiar to most people, but we could've chosen anything so long as it can be plugged in as a `Driver` with its `Atomic` connection, and the publisher satisfies the `Producer` trait.
Adapters
Since we'll be using postgres, generally we need to do the following:
- Create migrations that will define our `users` and `sessions` tables, and run them
- Scan our schema with an ORM, in our case sea-orm (optional)
- Create ORM entities that correspond to our SQL data
We won't go over these steps in detail because they depend on the implementation - you may or may not use an ORM depending on preference. In any case, the first step is always performed. Since we'll be using sea-orm, we perform step 2, and sea-orm will subsequently generate the necessary ORM entities, completing step 3. All we need to do now is write the `From` implementations for our application models. The ORM entities allow us to perform queries on their respective tables.
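As an illustration, the read-direction conversion for the user could look roughly like this, assuming the generated sea-orm model for the `users` table is aliased as `UserModel` and mirrors our columns (the write direction, `User` into the ORM type, is analogous):

```rust
// Sketch - `UserModel` is assumed to be the sea-orm generated entity for the
// `users` table, with columns mirroring our application model.
impl From<UserModel> for User {
    fn from(entity: UserModel) -> Self {
        Self {
            id: entity.id,
            username: entity.username,
            password: entity.password,
            created_at: entity.created_at,
        }
    }
}
```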
For more detail see migr, a very simple tool for generating migrations, how to generate entities with sea-orm, and the examples directory.
Now that we have the necessary entities to perform database queries, we can create our adapter. For brevity, we'll showcase only the `UserAdapter` here; the session adapter can be viewed in the examples.
```rust
#[derive(Debug, Clone)]
pub struct UserAdapter;

#[async_trait]
impl<C> UserRepository<C> for UserAdapter
where
    C: ConnectionTrait + Send + Sync,
{
    async fn get_by_username(
        &self,
        conn: &mut C,
        username: &str,
    ) -> Result<Option<User>, AdapterError> {
        UserEntity::find()
            .filter(Column::Username.eq(username))
            .one(conn)
            .await
            .map_err(AdapterError::SeaORM)
            .map(|user| user.map(User::from))
    }

    async fn create(
        &self,
        conn: &mut C,
        username: &str,
        password: &str,
    ) -> Result<User, AdapterError> {
        let user: UserModel = User::new(username.to_string(), password.to_string()).into();
        UserEntity::insert(user)
            .exec_with_returning(conn)
            .await
            .map(User::from)
            .map_err(AdapterError::SeaORM)
    }
}
```
Ah, finally we see some action! The code is pretty self-explanatory so we won't go over it in too much detail.
`ConnectionTrait` is the sea-orm specific trait which can be passed into the `exec` calls on entities. This trait is implemented directly on a `sea_orm::DatabaseConnection` and a `sea_orm::DatabaseTransaction`. Fortunately, most ORMs provide a connection trait, so we don't have to implement the adapters for both their connection and transaction - that would be painful. We can obtain a `C: ConnectionTrait` via the sea-orm driver - a thin wrapper around a sea-orm connection pool that implements `Driver`, making it suitable for our service.
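Hextacy ships this driver, so we don't have to write it ourselves, but to demystify it a little, a wrapper like that could plausibly be as small as the following sketch (illustrative only - not hextacy's actual code, and it assumes `DatabaseConnection` can be cloned cheaply):

```rust
// Illustrative sketch of a driver wrapping sea-orm's pooled connection.
use sea_orm::DatabaseConnection;

#[derive(Debug, Clone)]
pub struct PgDriver {
    pool: DatabaseConnection,
}

#[async_trait]
impl Driver for PgDriver {
    type Connection = DatabaseConnection;

    async fn connect(&self) -> Result<Self::Connection, DriverError> {
        // The pool lives inside `DatabaseConnection`, so handing out a clone
        // is enough - no actual connecting happens here.
        Ok(self.pool.clone())
    }
}
```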
Quick sidenote:
There are ORMs that start transactions in place on connections. These implement `Atomic` by starting the transaction and then simply returning the connection. The reason `Atomic` exists is precisely because of these differing implementations - we need a way to abstract away the specific manner in which a transaction is started, and we do so with `Atomic::TransactionResult`.
One small thing to note is that we want to keep UUID generation within our control. Giving that control to the database would mean the most critical part of our model is outside the application's control, which would introduce problems later down the line if we ever need to switch adapters. Here we're handling the ID generation in the user's `new` function.
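A minimal sketch of such a constructor, assuming the `uuid` and `chrono` crates (setting `created_at` here is our own assumption - it could just as well be left to the database):

```rust
// Sketch: the application, not the database, decides the ID.
impl User {
    pub fn new(username: String, password: String) -> Self {
        Self {
            id: Uuid::new_v4(),
            username,
            password,
            created_at: chrono::Utc::now().naive_utc(),
        }
    }
}
```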
For the publisher, we can use hextacy's `RedisPublisher`. It has the ability to create a producer for any given message as long as it implements `Serialize`. It implements the `Producer` trait, which is just what we need.
Since we will at some point have to make a concrete instance of our service, to reduce the boilerplate of specifying every one of its components wherever we use it, we create a type alias:
```rust
pub type AuthenticationService = Authentication<
    SeaormDriver,
    UserAdapter,
    SessionAdapter,
    RedisPublisher,
>;
```
Now instead of specifying (and inevitably changing) the adapters everywhere we want to use the service, we have a single centralised location where we define its configuration and use this type wherever we want to use the service.
We'll figure out how we manage the necessary state for it in a bit because first we'll define the controllers.
Controllers
In this part we'll hook up a single handler function to the service, since handlers all look more or less the same, give or take a cookie/header.
We now define the HTTP handler for the service's `login` function.
```rust
#[derive(Debug, Deserialize, Validify)]
pub struct Login {
    #[validate(length(min = 1))]
    pub username: String,
    #[validate(length(min = 1))]
    pub password: String,
    pub remember: bool,
}

pub async fn login(
    State(service): State<AuthenticationService>,
    Json(data): Json<LoginPayload>,
) -> Result<Response<String>, Error> {
    let Login {
        username,
        password,
        remember,
    } = Login::validify(data).map_err(Error::new)?;

    let session = service.login(&username, &password, remember).await?;
    let session_id = session.id.to_string();
    let cookie = session_cookie("S_ID", &session_id, false);

    MessageResponse::new("Successfully logged in")
        .into_response(StatusCode::OK)
        .with_cookies(&[cookie])?
        .json()
        .map_err(Error::new)
}

// Helper for creating a cookie
pub fn session_cookie<'a>(key: &'a str, value: &'a str, expire: bool) -> Cookie<'a> {
    CookieBuilder::new(key, value)
        .path("/")
        .domain("mysupercoolsite.com")
        .max_age(if expire {
            Duration::ZERO
        } else {
            Duration::days(1)
        })
        .same_site(SameSite::Lax)
        .http_only(true)
        .secure(true)
        .finish()
}
```
The first thing we do is define the data object we intend to accept from the client. The `Validify` derive macro exposes a `validify` method for the struct. It also creates a payload struct which we use in the `login` handler. The first argument to this function is the service type we defined earlier, wrapped in `axum::extract::State`. Through it we obtain a reference to the concrete service.
In the next section, we'll see how we can manage state and hook everything up so we have a working application.
State
Now that we have the application core, a way to talk to it, and a way for it to obtain its data, we can tie everything together.
We declare a `State` struct in which we keep the concrete drivers, define the concrete constructor for our service, and move the concrete type alias here as well.
```rust
use hextacy::adapters::db::sql::seaorm::SeaormDriver;
use hextacy::adapters::queue::redis::RedisMessageQueue;
use hextacy::adapters::queue::redis::RedisPublisher;

pub type AuthenticationService = Authentication<
    SeaormDriver,
    UserAdapter,
    SessionAdapter,
    RedisPublisher,
>;

#[derive(Debug, Clone, State)]
pub struct AppState {
    #[env("DATABASE_URL")]
    #[load_async]
    pub repository: SeaormDriver,

    #[env(
        "RD_HOST",
        "RD_PORT" as u16,
        "RD_USER" as Option,
        "RD_PASSWORD" as Option,
    )]
    pub redis_q: RedisMessageQueue,
}

impl AuthenticationService {
    pub async fn init(state: &AppState) -> AuthenticationService {
        AuthenticationService::new(
            state.repository.clone(),
            UserAdapter,
            SessionAdapter,
            state
                .redis_q
                .publisher("my-channel")
                .await
                .expect("Could not create publisher"),
        )
    }
}
```
Neat!
For each field annotated with `env`, the `State` derive macro will attempt to call the type's associated `new` function, loading the variables from `std::env` beforehand and passing them to the call. Luckily, both of these structs have one, so we get an `AppState::load_repository_env` function and the same for `redis_q`. The `as` will attempt to parse the value of the env variable before passing it to `new`.
In the impl block for the service we set it up by calling its `new` function, created by the `component` macro. All the components being passed satisfy the service's bounds. It's worth mentioning here that the adapters are zero-sized, meaning they do not actually allocate any memory and exist simply to satisfy the bound restrictions of the service - a sort of behaviour struct. The repository is cloned, which clones only the underlying reference to the connection pool, and a publisher is created.
Finally, the main function.
```rust
#[tokio::main]
async fn main() {
    hextacy::env::load_from_file("path/to/.env").unwrap();

    let state = AppState::configure().await.unwrap();

    let (host, port) = (
        env::get_or_default("HOST", "127.0.0.1"),
        env::get_or_default("PORT", "3000"),
    );
    let addr = format!("{host}:{port}");

    info!("Starting server on {addr}");

    let router = router(&state).await;

    axum::Server::bind(&addr.parse().unwrap())
        .serve(router.into_make_service())
        .await
        .expect("couldn't start server");
}

pub async fn router(state: &AppState) -> Router {
    use crate::controllers::http::auth::*;

    let auth_service = AuthenticationService::init(state).await;

    let router = Router::new()
        .route("/register", post(register))
        .route("/login", post(login));

    Router::new()
        .nest("/auth", router)
        .with_state(auth_service)
}
```
And we have a working app! We haven't talked about how the files are set up, because this largely depends on preference and is ultimately arbitrary.
Next up, we'll ensure our app works by writing some tests.
Testing
Driver
TODO
Atomic
TODO