Ash Framework: Combining Ash Writes with Electric Reads
Exploring how Ash builds on Ecto.
Ecto expresses domain logic through schemas and changesets, handling inserts, updates, and deletes via Repo. It is powerful and flexible but can involve a lot of boilerplate. Because it is largely unopinionated, Ecto relies on developer discipline to keep operations safe and consistent. For example, it is easy to fall into the habit of reusing overly broad, permissive changesets across different actions.
Ash, by contrast, is opinionated by design. The Ash Framework book explains this philosophy as “Model your domain, and derive the rest”. Instead of writing boilerplate, you declare your business logic once and let Ash handle the implementation. Resources define schemas and actions for you, with sensible defaults out of the box. I revisited my usual Superhero Dispatch domain to explore Ash’s ergonomics, particularly in comparison with Ecto. Ash resources also include after_action hooks, which we can use to broadcast changes and keep LiveViews in sync (Chapter 10, Delivering Real-Time Updates with PubSub). PubSub is the typical approach to implementing real-time updates, but I wanted to try out Electric.
Electric works with Phoenix.Sync to handle real-time updates in a different way. Instead of sending and receiving messages, Electric watches the Postgres write-ahead log and streams changes directly to clients. No subscriptions, no manual broadcasting, no message ordering to manage. Just synchronization that flows from the database itself.
View the code for this blog post
Or, clone the branch to run locally
git clone --branch Ash-Electric-1 --single-branch https://github.com/JKWA/electric-ash.git
Hero Dispatch
Superhero Dispatch is a simple workflow: incidents are reported and available heroes get assigned.
Multiple dispatchers need to see the same information in real time.
Let’s take a look at the domain model.
Superhero
Superheroes have a name, alias, powers, status, and location.
attributes do
  uuid_primary_key :id

  attribute :name, :string do
    allow_nil? false
    public? true
  end

  attribute :hero_alias, :string do
    allow_nil? false
    public? true
  end

  attribute :powers, {:array, :string} do
    default []
    allow_nil? false
    public? true
  end

  attribute :status, :atom do
    constraints one_of: [:available, :unavailable, :dispatched]
    default :available
    allow_nil? false
    public? true
  end

  attribute :current_location, :string do
    public? true
  end

  create_timestamp :inserted_at
  update_timestamp :updated_at
end
They may lose their powers, represented by an empty list, and we might not know their location. Their status can be :available, :unavailable, or :dispatched.
So far this feels a lot like Ecto, except for one thing: public? true. In Ash, every attribute has visibility. By default, they are private, so we have to explicitly mark them public? true to make them available through actions. It splits what users can read from what the system needs to track. This visibility system is introduced in Chapter 1 (Building Our First Resource).
We can also take actions on superheroes.
actions do
  defaults [:read, :destroy]

  create :create do
    primary? true
    accept [:name, :hero_alias, :powers, :current_location]
  end

  update :update do
    primary? true
    accept [:name, :hero_alias, :powers, :current_location]
  end

  update :mark_dispatched do
    accept []

    validate attribute_equals(:status, :available),
      message: "Can only dispatch available heroes"

    change set_attribute(:status, :dispatched)
  end

  update :mark_available do
    accept []

    validate attribute_equals(:status, :dispatched),
      message: "Can only mark dispatched hero as available"

    change set_attribute(:status, :available)
  end

  update :mark_unavailable do
    accept []

    validate attribute_equals(:status, :available),
      message: "Can only mark available heroes as unavailable"

    change set_attribute(:status, :unavailable)
  end

  update :return_to_duty do
    accept []

    validate attribute_equals(:status, :unavailable),
      message: "Only unavailable heroes can return to duty"

    change set_attribute(:status, :available)
  end
end
The defaults [:read, :destroy] line gives us Ash’s built-in read and delete actions.
Notice that :status isn’t in the accept list for :create or :update. Our domain has specific rules for status transitions, so we don’t want users setting it arbitrarily.
Ash also ships built-in validations such as attribute_equals, applied declaratively with validate, for common checks.
Where Ash really stands out is that each action defines its own changeset logic. We don’t have to hunt for the “right” changeset or risk using a generic one that ignores domain rules.
And perhaps most importantly, look how easy it is to read.
This is Ash’s opinion showing through: explicit visibility with public?, actions as first-class operations, and no changeset reuse (Chapter 2 Extending Resources with Business Logic).
Incident
Incidents are the emergencies that need superhero attention, capturing the lifecycle from initial report through resolution.
Each incident has a type, description, and location. Priority ranges from :low to :critical, defaulting to :medium.
The status field follows an incident lifecycle: :reported → :dispatched → :in_progress → :resolved. From any status, an incident can be closed or reopened. Each transition has a corresponding timestamp (:reported_at, :dispatched_at, etc.), leaving an audit trail.
The hero_count attribute shows how many superheroes are currently assigned. This is a computed value that gets recalculated whenever assignments change (more on this later).
Most attributes are required (allow_nil? false), except for the timestamp fields which start as nil and populate as the incident moves through states.
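I won’t reproduce the whole resource here, but pieced together from that description, the attribute block looks roughly like this. The names and constraints below are my assumptions; the real definitions live in the repo.

attributes do
  uuid_primary_key :id

  attribute :incident_type, :string, allow_nil?: false, public?: true
  attribute :description, :string, allow_nil?: false, public?: true
  attribute :location, :string, allow_nil?: false, public?: true

  attribute :priority, :atom do
    constraints one_of: [:low, :medium, :high, :critical]
    default :medium
    allow_nil? false
    public? true
  end

  attribute :status, :atom do
    constraints one_of: [:reported, :dispatched, :in_progress, :resolved, :closed]
    default :reported
    allow_nil? false
    public? true
  end

  # Denormalized count of currently assigned heroes, recalculated by actions.
  attribute :hero_count, :integer, default: 0, allow_nil?: false, public?: true

  # Lifecycle timestamps start as nil and fill in as the incident progresses;
  # the remaining transitions follow the same pattern.
  attribute :reported_at, :utc_datetime_usec, public?: true
  attribute :dispatched_at, :utc_datetime_usec, public?: true

  create_timestamp :inserted_at
  update_timestamp :updated_at
end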
The actions follow the same pattern as superheroes: specific actions for status transitions, with :status protected from the generic :update action.
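For example, a single transition might look like this; a sketch mirroring the superhero pattern, with the validation message and timestamp handling assumed:

update :mark_dispatched do
  accept []

  validate attribute_equals(:status, :reported),
    message: "Can only dispatch reported incidents"

  change set_attribute(:status, :dispatched)
  change set_attribute(:dispatched_at, &DateTime.utc_now/0)
end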
The interesting one is :mark_closed. When we close an incident, there’s business logic to handle:
update :mark_closed do
  require_atomic? false
  accept []

  change ArchiveAllAssignmentsOnClose
  change set_attribute(:status, :closed)
  change set_attribute(:closed_at, &DateTime.utc_now/0)
  change ResetHeroCountOnClose
end
Archive all active assignments, set the status and closed timestamp, and reset the hero count to zero. The superhero actions got by with Ash’s built-in changes like set_attribute; here, the heavier logic is extracted into reusable change modules (ArchiveAllAssignmentsOnClose, ResetHeroCountOnClose). The book encourages this approach, keeping complex logic testable and reusable across actions.
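A change module is just a module implementing the Ash.Resource.Change behaviour. A minimal sketch, with the module path and internals assumed (the real version lives in the repo):

defmodule SuperheroDispatch.Dispatch.Changes.ResetHeroCountOnClose do
  # Sketch only: assumes a closed incident should report zero assigned heroes.
  use Ash.Resource.Change

  @impl true
  def change(changeset, _opts, _context) do
    # Closing an incident archives every assignment, so the denormalized
    # count goes back to zero.
    Ash.Changeset.change_attribute(changeset, :hero_count, 0)
  end
end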
Adding Real-Time Reads with Electric
The usual pattern is to query the data, then use PubSub to push updates. This carries known risks: forgetting to subscribe or publish, assuming every client received every message, and believing they arrived in the correct order.
Electric sidesteps those risks.
To test out Electric, I’ll use Ash for writes and Electric with Phoenix.Sync for reads.
User Action → LiveView → Ash (write) → Postgres → WAL → Electric → All Clients
Writes go through Ash. All business logic, validation, and state transitions happen via Ash actions. Reads come from Electric, which consumes the Postgres WAL and streams changes to LiveView.
Ash maintains domain integrity. Electric handles the complexity of fanning out changes.
The Integration
Electric can run standalone or integrated with a Phoenix app. I’m using the integrated mode, which embeds Electric directly into the Phoenix supervision tree.
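For context, the embedded setup amounts to a small amount of configuration. This is roughly what mine looks like, following the Phoenix.Sync docs; the exact option names may differ between beta releases:

# config/config.exs -- run Electric inside this app's supervision tree
config :phoenix_sync,
  env: config_env(),
  mode: :embedded,
  repo: SuperheroDispatch.Repo

# lib/superhero_dispatch/application.ex -- hand the sync options to the endpoint
children = [
  SuperheroDispatch.Repo,
  {SuperheroDispatchWeb.Endpoint, phoenix_sync: Phoenix.Sync.plug_opts()}
]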
In LiveView, set up streams for the data we want to sync:
def mount(_params, _session, socket) do
  {:ok,
   socket
   |> assign(:page_title, "Superhero Dispatch")
   |> sync_stream(:incidents, Incident, id_key: :id)
   |> sync_stream(:superheroes, Superhero, id_key: :id)}
end
Then handle incoming sync events:
def handle_info({:sync, event}, socket) do
  {:noreply, sync_stream_update(socket, event)}
end
Note that we’re passing Ecto schemas directly (Incident, Superhero), not Ash queries. There’s no first-party Ash integration yet, so Electric works at the Ecto/Repo level. This means we lose Ash’s authorization and policy features on reads.
What Happens Behind the Scenes
Electric uses a “shape” to define what data to sync. A shape is a subset of a table, optionally filtered by a WHERE clause. If a shape matching your query already exists, Electric reuses it. Otherwise, it creates a new shape.
defp available_superheroes_query do
  from s in Superhero,
    where: s.status == :available
end
Now, whenever a hero’s status changes to :available, this shape emits a create event; whenever the status changes away from :available, it emits a delete event; and whenever information for an :available hero changes, it emits an update event.
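To sync that filtered shape into the LiveView, the query is passed to sync_stream in place of the bare schema; the stream name here is my own:

# in mount/3 -- only available superheroes flow into this stream
socket
|> sync_stream(:available_superheroes, available_superheroes_query(), id_key: :id)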
Behind the scenes, Electric:
- Consumes Postgres’s logical replication stream (from the WAL)
- Matches changes against active shapes
- Appends matching changes to each shape’s log
- Streams log entries to clients as :sync messages
- sync_stream_update applies them to LiveView streams
Because changes come from Postgres’s WAL (a sequential log), ordering is guaranteed. What you see is what actually happened in the database, in the order it happened.
Limitations
Electric’s shapes don’t support joins. It can sync entire tables, filter rows with WHERE clauses, and select specific columns, but it can’t join across tables.
Without joins, we need to denormalize data.
For instance, the Incidents table has a :hero_count attribute to track how many superheroes are currently assigned. Normally you’d join to the assignments table and count rows. Instead, we maintain the count explicitly (denormalize):
update :hero_count do
  require_atomic? false
  accept []

  change RecalculateHeroCount
end
This action gets called whenever an assignment is created or destroyed, keeping the count in sync.
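RecalculateHeroCount can be sketched as a change that counts the active assignments and writes the result onto the incident; the module path and query details below are assumptions:

defmodule SuperheroDispatch.Dispatch.Changes.RecalculateHeroCount do
  # Sketch only: recalculates the denormalized hero_count from assignments.
  use Ash.Resource.Change

  require Ash.Query

  @impl true
  def change(changeset, _opts, _context) do
    Ash.Changeset.before_action(changeset, fn changeset ->
      # Count this incident's active (non-archived) assignments.
      count =
        SuperheroDispatch.Dispatch.Assignment
        |> Ash.Query.filter(incident_id == ^changeset.data.id and is_nil(archived_at))
        |> Ash.count!()

      Ash.Changeset.change_attribute(changeset, :hero_count, count)
    end)
  end
end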
Assignment
Denormalization comes up again when we link superheroes to incidents.
Assignments create this link. The database relationship prevents double-booking, so a superhero cannot be assigned to multiple incidents at once.
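One way to express that guard at the database level is a partial unique index in the migration, along these lines (index name assumed):

# Only one active (non-archived) assignment per superhero at a time.
create unique_index(:assignments, [:superhero_id],
         where: "archived_at IS NULL",
         name: :assignments_one_active_per_superhero_index
       )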
Normally, to display an incident with its assigned heroes, you would join incident → assignments → superheroes. Again, without joins, we need to denormalize.
When creating an assignment, we copy part of the hero’s data onto the record. This change module takes name and hero_alias from the cached hero context and stores them directly on the assignment, so the table includes superhero_name and superhero_alias alongside superhero_id.
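A sketch of that change module, with the context key and module name assumed:

defmodule SuperheroDispatch.Dispatch.Changes.CopyHeroDataToAssignment do
  # Sketch only: reads the hero that an earlier change placed in the
  # changeset context and denormalizes its name and alias onto the assignment.
  use Ash.Resource.Change

  @impl true
  def change(changeset, _opts, _context) do
    case changeset.context[:superhero] do
      nil ->
        changeset

      hero ->
        changeset
        |> Ash.Changeset.change_attribute(:superhero_name, hero.name)
        |> Ash.Changeset.change_attribute(:superhero_alias, hero.hero_alias)
    end
  end
end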
If a superhero later changes their alias, existing assignments still show the old one. Assignments are historical records, so preserving the hero’s name at the time of assignment probably makes sense.
The challenge with denormalization is not the extra columns but keeping data consistent.
When destroying an assignment, we mark the hero as available again, recalculate the incident’s hero count, and log any unexpected states. With Ash, these are expressed as declarative changes inside the action:
destroy :destroy do
  require_atomic? false
  soft? true

  change fn changeset, _ctx ->
    original_status = changeset.data.status
    Ash.Changeset.put_context(changeset, :original_status, original_status)
  end

  change set_attribute(:archived_at, &DateTime.utc_now/0)
  change set_attribute(:status, :completed)
  change UpdateHeroAndIncidentOnUnassignment
end
Creating an assignment works the same way. We fetch the hero and incident, validate their states, copy the hero data, update the hero’s status to dispatched, and increment the incident’s count, all declared in the :create action through change modules.
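Sketched out, the :create action reads as a pipeline of change modules; the module names and accepted attributes below are my guesses at the shape, not the exact code:

create :create do
  primary? true
  accept [:superhero_id, :incident_id]

  change LoadAndValidateHeroAndIncident
  change CopyHeroDataToAssignment
  change MarkHeroDispatched
  change IncrementIncidentHeroCount
end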
This is where Ash shines. Each action defines how data should change and when, keeping domain rules close to the data they govern.
Does it work?
For me, not quite yet. Here’s a specific issue I’m seeing with embedded Electric and Phoenix.Sync (both still in beta).
After assigning and removing Wonder Woman from an incident twice, the live view shows the correct state:

Wonder Woman is no longer assigned and shows as available. Perfect.
But refreshing the page reveals a problem. The initial render flickers stale data:

Both archived assignments appear as if they’re still active. Then Electric catches up by sending delete events:
[debug] Sync event: {:"$electric_event", :assignments, :delete,
 %SuperheroDispatch.Dispatch.Assignment{
   id: "065ae08c-4bd8-4680-bb59-719b59a32407",
   status: :assigned,
   superhero_name: "Diana Prince",
   superhero_alias: "Wonder Woman",
   ...
 }, []}

[debug] Sync event: {:"$electric_event", :assignments, :delete,
 %SuperheroDispatch.Dispatch.Assignment{
   id: "1a585966-b7d9-4e21-8cd3-eaf45fef796a",
   status: :assigned,
   superhero_name: "Diana Prince",
   superhero_alias: "Wonder Woman",
 }, []}
Electric is generating a shape from this query:
defp assignments_query(incident_id) do
  from a in Assignment,
    where: a.incident_id == ^incident_id and is_nil(a.archived_at)
end
The issue: I’m receiving an incorrect snapshot. It includes both Wonder Woman assignments simultaneously, a state that never existed in the database. Then it streams delete events to catch up. The snapshot should reflect the current state as it exists in Postgres right now, not a state that requires catch-up events to become valid.
In my testing, repeatedly adding, removing, and updating values causes the state to drift from the database. Superheroes show as available when they’re actually assigned, or vice versa. Clicking the Force Refresh button applies a benign update to all heroes, which triggers new events and resyncs the state:
SuperheroDispatch.Repo.update_all(
  Superhero,
  set: [updated_at: DateTime.utc_now()]
)
But here’s the good news: open multiple views and they always agree. The state might flicker, and it might be wrong, but every client shows the same thing.
Worth noting: this has nothing to do with Ash. Electric watches the Postgres WAL, so it sees INSERT/UPDATE/DELETE events regardless of whether they came from Ash, raw Ecto, or manual SQL. Ash is completely transparent to Electric’s sync layer.
Closing Thoughts
This was an experiment, not a verdict. Ash is great for domain modeling and Electric’s architecture is compelling. The issues I’m seeing could be specific to embedded mode, the Phoenix.Sync integration, or perhaps something I’ve misconfigured.
That said, solving the problem of keeping clients in sync without handwritten event code is powerful. It removes an entire layer of orchestration and makes real-time behavior a property of the system rather than something the application has to manage.
What gives me a bit of pause is the shift in where truth lives. Electric’s architecture relies on caching, so the question is whether we have enough visibility and control at that level to detect drift and force a resync when needed. Also, we would need reliable ways for users to correct that drift.
This is part of a series on the Ash Framework book. Previous: Why Authorization gets Messy.
Resources
Ash Framework
Learn how to use Ash Framework to build robust, maintainable Elixir applications. This book covers the principles and practices of domain-driven design with Ash's declarative approach.
Electric SQL
Sync Postgres data to clients in real-time using logical replication. Electric provides local-first sync for building responsive, collaborative applications.
Phoenix.Sync
Elixir library for integrating Electric SQL with Phoenix LiveView. Provides helpers for syncing Postgres data directly into LiveView streams.