Refactoring Our Work
In the previous section, we fully implemented a feature using test-driven development. We tested from the outside-in, starting with a feature test, and we called the implementation finished when the feature test was passing. But there's one more thing we should always remember to do if we truly want to consider the work finished. We should refactor our work.
Defining refactoring
Before we dive in, it's crucial to define refactoring. An excellent place to start is the Refactoring book. Martin Fowler provides two related definitions for refactoring, one when the word is used as a noun and one when it is used as a verb:
Refactoring (noun): a change made to the internal structure of software to make it easier to understand and cheaper to modify without changing its observable behavior.
Refactor (verb): to restructure software by applying a series of refactorings without changing its observable behavior.
So if we perform a refactoring (noun) — such as extract method, extract class, etc. — we are making a change to the internal structure of software without changing the observable behavior. When we refactor (verb), we apply several refactorings, but we still do not change the observable behavior.
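To make that concrete, here is a small, hypothetical Elixir example of the extract function refactoring. The module and function names (RoomLabel, room_label/1, slugify/1) are made up for illustration; the point is only that the result of calling the public function does not change:

# Hypothetical example: before the refactoring, room_label/1 builds the
# label inline.
defmodule Chatter.RoomLabel do
  def room_label(room) do
    "#" <> String.downcase(String.replace(room.name, " ", "-"))
  end
end

# After applying "extract function", the formatting moves into a private
# helper. Callers of room_label/1 get exactly the same results as before.
defmodule Chatter.RoomLabelRefactored do
  def room_label(room) do
    "#" <> slugify(room.name)
  end

  defp slugify(name) do
    name
    |> String.replace(" ", "-")
    |> String.downcase()
  end
end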
In both cases, it is vital that we do not change the observable behavior. And that's why refactoring fits perfectly as the third step in the red-green-refactor cycle. By keeping our test passing (green), we guarantee that we can modify the structure of our code without changing its observable behavior. If our test fails (red) during the process, we have changed behavior.
Of course, this presupposes that our tests are testing behavior and not implementation — well-written tests test behavior and not the implementation for this very reason. We do not want to have to change tests solely because the implementation changes. It's also for this reason that we do not test private functions. Private functions are an implementation detail. We want to test that which a module exposes publicly.
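As a sketch of what that looks like in practice, consider a hypothetical module with a public function that delegates to private helpers. The test below exercises only the public function, so the private helpers can be rewritten, renamed, or inlined without touching the test (all names here are invented for illustration):

# A hypothetical module: parse_name/1 is public; normalize/1 and
# validate/1 are implementation details.
defmodule Chatter.RoomName do
  def parse_name(raw) do
    raw |> normalize() |> validate()
  end

  defp normalize(raw), do: raw |> String.trim() |> String.downcase()

  defp validate(""), do: {:error, :empty}
  defp validate(name), do: {:ok, name}
end

# The test only cares about the observable behavior of parse_name/1.
defmodule Chatter.RoomNameTest do
  use ExUnit.Case, async: true

  test "parses a raw room name" do
    assert {:ok, "elixir"} = Chatter.RoomName.parse_name("  Elixir ")
    assert {:error, :empty} = Chatter.RoomName.parse_name("   ")
  end
end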
When we refactor, it's good to ask: which test do we keep green? The answer depends on which part of the code we are refactoring and what behavior we want to keep unchanged. For example, if we are changing the implementation of a function for improved performance, then we just need to make sure the unit tests for that function are the ones that keep passing. But sometimes we change bigger pieces of code, perhaps how two modules interact with one another, and we might even change the name of a public function. What then? Changing the name of a module's public function will break the unit tests, and we'll have to update them. In cases like these, we are often interested in keeping the behavior unchanged from a user's point of view. So, our feature test is the one we must keep passing while we refactor.
Since we always write tests before the implementation with TDD, you can perhaps see why refactoring becomes such a joy with TDD. We always have a test that fails if we change behavior. We always have a test that can notify us if we go astray in our refactor.
Thus, refactoring is making changes — usually a series of small changes — without changing the external behavior. And that is key.
Refactoring our feature test
Taking the time to refactor at the end of the red-green-refactor cycle can often mean looking through the work we've done and considering it good enough. We do not always have to change something. But we always take the time to see if we need to. For our current feature, I don't think we'll change much.
Let's look at our changes by starting with our templates. We added the following:
<ul>
  <%= for room <- @chat_rooms do %>
    <li class="room"><%= room.name %></li>
  <% end %>
</ul>
The code is simple, but there is something we can improve.
The li tags have a "room" CSS class, but we do not use that for styling. We're using it for targeting the element in our tests.
Over time, I've learned that using CSS selectors in feature tests can make those tests brittle when working in a team. Designers (or other developers) will often change a CSS selector because of a change in how they express or organize CSS. And that's perfectly normal. So it comes as a shock when tests start failing simply because an element's styling changed.
Thus, I tend to prefer using data attributes to target elements in tests. So we'll go ahead and change that. But first, run the feature test to make sure it's passing. We always want to keep the test passing while refactoring.
$ mix test test/chatter_web/features/user_visits_rooms_page_test.exs
.
Finished in 0.4 seconds
1 test, 0 failures
Good. Now we'll use the Wallaby.Query.data/3 function instead of css/2:
# test/chatter_web/features/user_visits_rooms_page_test.exs
test "user visits rooms page to see a list of rooms", %{session: session} do
room = insert(:chat_room)
session
|> visit(Routes.chat_room_path(@endpoint, :index))
- |> assert_has(Query.css(".room", text: room.name))
+ |> assert_has(Query.data("role", "room", text: room.name))
end
In our index template, let's remove the class and add a data-role attribute:
# lib/chatter_web/templates/room/index.html.eex
<ul>
  <%= for room <- @chat_rooms do %>
-   <li class="room"><%= room.name %></li>
+   <li data-role="room"><%= room.name %></li>
  <% end %>
</ul>
Now if we run our test again, it should still pass:
$ mix test test/chatter_web/features/user_visits_rooms_page_test.exs
.
Finished in 0.3 seconds
1 test, 0 failures
Excellent. Now that we're in this test, let's clean it up a bit more. If possible, we should write feature tests so that non-technical stakeholders can read them. Wallaby's domain-specific language is already very legible, but we are still expressing the test in technical terms (e.g. finding a data-role). We can improve the language to reflect our domain better.
Extract Routes.chat_room_path(@endpoint, :index) to a private function with a better name. This is like the extract private function refactoring, but in the test itself.
# test/chatter_web/features/user_visits_rooms_page_test.exs
  session
- |> visit(Routes.chat_room_path(@endpoint, :index))
+ |> visit(rooms_index())
  |> assert_has(Query.data("role", "room", text: room.name))
end
+
+ defp rooms_index, do: Routes.chat_room_path(@endpoint, :index)
Run the test. We're still green:
$ mix test test/chatter_web/features/user_visits_rooms_page_test.exs
.
Finished in 0.3 seconds
1 test, 0 failures
Let's perform a similar function extraction for the Query.data/3 portion of the test. Here we have a good opportunity to improve the language of our test. A stakeholder might not understand that the chat rooms page needs to have data-roles, but they will understand that we need to see the names of the rooms on the page:
# test/chatter_web/features/user_visits_rooms_page_test.exs
  session
  |> visit(rooms_index())
- |> assert_has(Query.data("role", "room", text: room.name))
+ |> assert_has(room_name(room))
end

defp rooms_index, do: Routes.chat_room_path(@endpoint, :index)
+
+ defp room_name(room) do
+   Query.data("role", "room", text: room.name)
+ end
Let's run our test again to confirm it passes:
$ mix test test/chatter_web/features/user_visits_rooms_page_test.exs
.
Finished in 0.3 seconds
1 test, 0 failures
Excellent. There's one last thing about this feature test that seems odd to me. It has a discrepancy between its description and what it's actually doing in the body of the test. The description says "user visits rooms page to see a list of rooms", but our test is only testing that a single room is available. That is something I should have caught when first writing the test, but since I didn't, now is a great time to fix it.
Since the description better reflects what we want, let's modify the test itself. Go ahead and insert a pair of chat rooms instead of a single one, and assert that we see both of them on the page:
# test/chatter_web/features/user_visits_rooms_page_test.exs
test "user visits rooms page to see a list of rooms", %{session: session} do
- room = insert(:chat_room)
+ [room1, room2] = insert_pair(:chat_room)
session
|> visit(rooms_index())
- |> assert_has(room_name(room))
+ |> assert_has(room_name(room1))
+ |> assert_has(room_name(room2))
end
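For reference, here is roughly how the whole test module might read with these refactorings in place. The module name, the ChatterWeb.FeatureCase template, and the insert_pair/1 factory helper are assumptions about the surrounding setup (a typical Wallaby and ExMachina configuration), not code shown earlier in this chapter:

# test/chatter_web/features/user_visits_rooms_page_test.exs
defmodule ChatterWeb.UserVisitsRoomsPageTest do
  # Assumed case template: expected to bring in Wallaby's browser functions
  # (visit/2, assert_has/2), the router helpers, @endpoint, and the
  # ExMachina factory that provides insert_pair/1.
  use ChatterWeb.FeatureCase

  alias Wallaby.Query

  test "user visits rooms page to see a list of rooms", %{session: session} do
    [room1, room2] = insert_pair(:chat_room)

    session
    |> visit(rooms_index())
    |> assert_has(room_name(room1))
    |> assert_has(room_name(room2))
  end

  defp rooms_index, do: Routes.chat_room_path(@endpoint, :index)

  defp room_name(room) do
    Query.data("role", "room", text: room.name)
  end
end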
Run that feature test one more time:
$ mix test test/chatter_web/features/user_visits_rooms_page_test.exs
.
Finished in 0.5 seconds
1 test, 0 failures
Our feature test finally reads like a user story that a stakeholder can read and might have even asked for: "As a user, I want to visit the rooms page and see the list of available rooms". Excellent.
Refactoring the schema
The rest of our code seems straightforward, and I don't see much need for refactoring. But we did use a code generator for our schema, and it is always good to double-check what a code generator introduced into our codebase.
Recall that we generated our Chat.Room schema module with Phoenix's mix phx.gen.schema generator. If we open our schema module in lib/chatter/chat/room.ex, we'll see that Phoenix automatically defined a changeset/2 function for us:
@doc false
def changeset(room, attrs) do
  room
  |> cast(attrs, [:name])
  |> validate_required([:name])
  |> unique_constraint(:name)
end
Since we don't use this function anywhere, let's remove it for now (along with import Ecto.Changeset). If we need this function in the future, we can always reintroduce it. We might even want a slightly different version of it, which our tests will help define (a sketch of what such a test might look like follows the diff below).
# lib/chatter/chat/room.ex
defmodule Chatter.Chat.Room do
  use Ecto.Schema
- import Ecto.Changeset

  schema "chat_rooms" do
    field :name, :string

    timestamps()
  end
-
- @doc false
- def changeset(room, attrs) do
-   room
-   |> cast(attrs, [:name])
-   |> validate_required([:name])
-   |> unique_constraint(:name)
- end
end
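As promised above, here is a purely hypothetical sketch of the kind of test that could drive out a reintroduced changeset/2. It assumes the Chatter.DataCase helper (with its errors_on/1 function) that mix phx.new generates, and it assumes we would still want name to be required; none of this exists in our codebase yet:

# test/chatter/chat/room_test.exs (hypothetical future test)
defmodule Chatter.Chat.RoomTest do
  # Assumed helper generated by mix phx.new; provides errors_on/1.
  use Chatter.DataCase, async: true

  alias Chatter.Chat.Room

  # This assumes changeset/2 has been reintroduced on the Room schema.
  test "changeset requires a name" do
    changeset = Room.changeset(%Room{}, %{})

    refute changeset.valid?
    assert %{name: ["can't be blank"]} = errors_on(changeset)
  end
end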
Having removed that function, let's run all of our tests to make sure we haven't accidentally broken anything.
$ mix test
.....
Finished in 0.4 seconds
5 tests, 0 failures
Perfect! Let's commit our work and call this feature done. And congratulations. We have just finished our first full BDD cycle!