I always knew there could come a point where something in the software development industry would change so drastically that I would either be in danger of becoming obsolete, or need to relearn how to build software.
I pushed back on AI for a good while, but it won. It is so tied into how we are building software as an industry that I had to take a deeper look.
The realization came from a conversation with some friends. I was behind. It felt as if everyone except me was using AI in some way. Some used it way too much, in my opinion; others were more selective.
I decided that I was already behind and needed to spend the time relearning how to make software in 2026.
My day job does have a Claude subscription, but it’s usually out of tokens by the time I get to it. So I decided to really dig into Codex, partly because it’s free for the time being. I wanted to be able to use it as much as I wished and worry about the cost later.
I like that Codex isn’t really an editor. At first I didn’t like that it just made changes to my repo. But if your branch is clean, git diff is a great way to see what it’s doing, and you can just discard or refactor what you don’t like. Years of code review turned out to be super helpful.
AI Process For Now
Turns out my standard flow for building software is a great fit for AI. I like to sit and think about the problem and sketch out some sort of design: classes, functions, I/O, etc. Then I implement that design. Sometimes I test first; sometimes I just start hacking if I’m unsure how things might work out.
So now, I do that same thing. But once I get to the implementation stage, I plug what I am trying to do into Codex. Since it has context of the project, and a smaller scope to think about, it usually has good results.
I’ve gotten better at telling it to build a plan so I can see what it’s choosing to do. Then alter and implement once I’m happy.
So for now, I keep a text file open all day just for prompts. I like to write them up there and make sure each one is super specific before going to Codex.
I still can’t get anything to create an entire app just by telling it what to build. But I’m going to a hack night Thursday, and maybe I can learn that there. (I haven’t been to a hack night in years.)
I’m not really a typical software developer when it comes to AI agents and writing software with them. I don’t really subscribe to the exercise where you just state what you want and it spits it out for you. My results are always substandard.
I know AI is the future of software development. And the art of writing code is going away. I just haven’t been able to find a lane that I truly enjoy.
But…
I do enjoy plugging in small things that could be helpful and that, for me to build by hand, would require some docs reading and experimenting. I find that handing that to an agent solves two things.
- I get the tool I’m after (after some slight revisions)
- I learn a bit about the thing I would have had to look up
I work in a large codebase, and I often end up in a portion I don’t have experience with. I can grep through the codebase for things I need well enough. But I thought it might be more helpful to search through git history, to get a better view of whatever that thing is via commits.
I wasn’t exactly sure what that syntax was, so I threw it into ChatGPT as:
I need a bash alias to search git history for a string
The thing I settled on after some back and forth about colors and such is:
```bash
gsearch() {
  git -c color.ui=always log \
    --all \
    -S"$1" \
    --pretty=format:'%C(yellow)%h%Creset %C(cyan)%ad%Creset %C(green)%an%Creset %C(auto)%s%Creset' \
    --date=short \
    --name-only
}
```
Use it with `gsearch "some_string"`. The `-S` flag is git’s “pickaxe”: it finds commits that changed the number of occurrences of the string.
I could have gotten there, but not as fast as ChatGPT did. The agent thing is growing on me little by little.
In the past, hiring developers has had some terrible practices. A few that come to mind are whiteboard tests, complex take home exercises, and more recently, timed programming exercises in an unfamiliar environment.
Now that AI is here and finally doing some of the things we feared or anticipated it might, we can start to see what this might do to the hiring process.
I have to admit: when I set out to write this, I thought it would all be positive. But I’m not sure that’s the case as I start to think it through.
First the good.
AI will all but eliminate take-home programming exercises. It’s way too easy to tell ChatGPT to build the app you describe to it, then just update parts to make it unique. I wouldn’t do this, but many developers will. I guess it depends on how desperate you are. The work outlook isn’t great, and if you need a gig, you do what you have to do to find one.
So any exercise of this type, the kind that should take a couple of hours but in reality takes two to three times that, will probably die.
This opens the door for my preferred type of interview: an actual conversation, where the interviewer and the candidate just talk about software. The candidate gets hypothetical (or real) problems and has the opportunity to talk through how they would solve them, describe similar things they have built in the past, and ask some clarifying questions.
The most recent gigs that I enjoyed the most contained zero coding exercises. With one, the director of engineering simply said that he’d seen what he needed to see. He could tell by talking to me that I could do the work. And that’s true. I’ve also done my share of hiring, and I know from the conversation whether someone knows what they are talking about.
An example: ask a candidate about some project on their resume. Ask what issues they ran into and how they solved them. When they state the problem they had, start digging into how they approached a solution and the different things they tried.
If that candidate owned a core part of the project and was actively making decisions to push it forward, they will be able to talk through it with little effort.
On the other hand, if they weren’t deeply involved, they will have a really hard time talking through the solutions, because they were not a core part of that project. It’s on their resume, but it’s not real experience.
When a developer leans too heavily on AI, they don’t understand the problem or the solution, and therefore can’t speak about it in depth.
This all assumes you are looking for a senior. If not, other questions, like what they learned from the more senior developers on the project, would get you what you need. If a less experienced developer can’t tell you what they learned on a project, pass.
I have this project that I’ve been chipping away at for years. I forget about it for long periods of time, and then decide to see where I left off.
It’s a markdown library in Elixir.
I’m at the point now where most things work and I’m throwing chunks of text at it to see if it converts it correctly.
Naturally that exposes some bugs around things I missed. The trick is that because I have these long stretches where I don’t work on the project, I forget things.
I had a bug in my logic that built hyperlinks. I hadn’t considered a hyphen in the link text, so the regex didn’t match and the conversion was ignored.
The regex before the bug fix:

```elixir
~r/\[(?<text>[\w\.?\s]+)\]\((?<url>http[s]?\:\/\/\w+[\.\w+]+\w+)\)/i
```
I didn’t remember what this did, and I didn’t really want to walk through it step by step, so I decided to see what ChatGPT had to say. I asked ChatGPT to explain:

```elixir
~r/\[(?<text>[\w\.?\s]+)\]\((?<url>http[s]?\:\/\/\w+[\.\w+]+\w+)\)/i
```
This was the response
This is a regular expression written in Ruby (or a similar language) that captures Markdown-style links in a string. Let’s break it down:

Pattern Breakdown

- `~r/.../i` → A regex literal with the `i` flag, making the match case-insensitive.
- `\[(?<text>[\w\.?\s]+)\]`
  - `\[` → Matches a literal opening square bracket (`[`).
  - `(?<text>[\w\.?\s]+)` → Named capture group (`text`): one or more word characters, periods, question marks, or whitespace.
  - `\]` → Matches a literal closing square bracket (`]`).
- `\((?<url>http[s]?\:\/\/\w+[\.\w+]+\w+)\)`
  - `\(` → Matches a literal opening parenthesis (`(`).
  - `(?<url>http[s]?\:\/\/\w+[\.\w+]+\w+)` → Named capture group (`url`):
    - `http[s]?` → Matches “http” or “https”.
    - `\:\/\/` → Matches `://` (the standard separator in URLs).
    - `\w+` → Matches one or more word characters (e.g., `example`).
    - `[\.\w+]+` → Matches a run of dots and word characters (e.g., `.goo`).
    - `\w+` → Ensures the URL ends in a valid TLD (e.g., `.com`, `.net`).
  - `\)` → Matches a literal closing parenthesis (`)`).

Example Matches

| Input String | `text` Capture | `url` Capture |
| --- | --- | --- |
| `[Google](https://google.com)` | "Google" | "https://google.com" |
| `[My Site](http://example.org)` | "My Site" | "http://example.org" |

Summary

This regex identifies Markdown-style links and extracts the link text (`text`) and the URL (`url`).

Let me know if you need modifications or further explanations! 🚀
The biggest hurdle was that I don’t use named capture groups that often, and frankly, I was lazy and didn’t want to dig into how that worked.
In a matter of seconds, I had the info I needed and knew what the fix was: simply add `\-` to the first capture group’s character class.
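To sanity-check the fix, here’s a quick IEx-style sketch. The hyphenated example string is my own made-up input:

```elixir
# Before: the text class [\w\.?\s] has no hyphen, so "[my-link]" can't match
old = ~r/\[(?<text>[\w\.?\s]+)\]\((?<url>http[s]?\:\/\/\w+[\.\w+]+\w+)\)/i

# After: \- added to the text capture group's character class
new = ~r/\[(?<text>[\w\.?\s\-]+)\]\((?<url>http[s]?\:\/\/\w+[\.\w+]+\w+)\)/i

Regex.match?(old, "[my-link](http://example.com)") # => false
Regex.match?(new, "[my-link](http://example.com)") # => true
```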
In my opinion, this is the perfect use of AI for a developer. I knew the information; I was just looking up what I needed to be reminded of. It’s sort of the new way to look up docs. In fact, I use it for that all the time.
If I didn’t understand how to build a regex for extracting hyperlinks from markdown and used ChatGPT anyway, that’s a problem. That level of copy and paste is as bad as it’s always been. Maybe worse now, because ChatGPT can return pretty complex solutions with very little work.
I find myself in a Rails project again. This particular Rails project uses MongoDB, and the logging from MongoDB is excessive. Maybe it’s Mongoid, I don’t know, but it’s extremely excessive.
For every one event in the app, I might get 5 lines of Rails output (the router, some controller stuff, and views rendered) and 47 lines of Mongo logs. I counted.
I think we can agree that’s insane. I tried everything I could find: I updated the Mongo initializer and the Mongo config, tried updating the development config, and even looked in the Dockerfile. I also tried setting it to report only errors, which is kinda crazy. Nothing worked.
In the end, regular old unix tools did the job. My solution is just to filter with tail and grep:

```shell
tail -f log/development.log | grep -v "MONGO"
```
Much saner, and I can get back to the full-insanity log level if I choose.
As 2021 starts to come to a close, and I take the week of Christmas off, I have plenty of time to reflect. All things considered, 2021 was a good year. I celebrated my 25th wedding anniversary (I could probably stop there. That’s a feat that can’t be topped), I watched my kids continue to grow and succeed, and I had a ton of fun making and playing music.
2022 is already shaping up to be awesome. There is some stuff in the works as far as my job is concerned (I can’t talk about that yet, but hopefully soon). And I think I can take my music hobby up to the next level, and that’s super exciting for me.
This year one of my main goals was that I wanted to play guitar on someone’s record. I didn’t get that done. But I did grow in the area of recording and composing. I record somewhat regularly. I’ve learned how to use a DAW fairly well. My mixing skills need work, but I can record something and mix it so it sounds decent. I also write music a fair amount, but I don’t finish much. The flip side of that is that I’ve become pretty good at creating parts and layering parts for music.
If you know me, you know I love analyzing things. After years of studying the theory behind music, I’m now able to analyze a piece of music, determine its purpose, and hopefully add something that helps it communicate its message a little better. And I LOVE it.
This year I also considered starting something in music. I don’t know what that is. A YouTube channel, a music site, or maybe something else. I did create a not-so-secret Instagram account for guitar stuff (scottradcliffguitar). It’s fun when I put stuff there, and I’ve slowly become better, but it’s temporary. I imagine 2022 will be more of that. But I think there is a sweet spot between the skills of a software engineer and a musician that could be really interesting.
But overall what I really want is more recording. My goal for 2022 remains the same. Playing on someone’s record. But it also includes releasing some stuff. I don’t sing, so it’ll be instrumental, but still fun for me.
I’m grateful and looking forward to what is next. Here is to a great 2022!
This is the fifth post in a series about building an Elixir library. You can see the other posts here:
And again, I haven’t touched this project in a long time. Revisiting it and checking the README, it looks like blockquotes are up next. Should be pretty simple. I opted for just one level of blockquote for now.
Initially I thought that I would need another module like Hyperlink, Bold, or Italics. Turns out I didn’t need a new module. It’s really simple. One function does everything.
```elixir
def create_blockquote(text) do
  replaced =
    String.replace_prefix(text, "> ", "<blockquote>")
    |> String.replace_suffix("", "</blockquote>")

  "<p>#{replaced}</p>"
end
```
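The `replace_suffix("", ...)` call looks odd, but it’s what does the appending: every string ends with the empty string, so “replacing” that empty suffix tacks the closing tag on. A quick check with a made-up input:

```elixir
# replace the "> " prefix, then "replace" the empty suffix to append the close tag
"> stay hungry"
|> String.replace_prefix("> ", "<blockquote>")
|> String.replace_suffix("", "</blockquote>")
# => "<blockquote>stay hungry</blockquote>"
```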
Rather than doing some regex capture and then replace, I opted to just replace the beginning and end of the string if it starts with “> “. Looking at this now, I may need to update to also accept a “>” without the additional space. I’m not sure. I’ll look up some markdown docs to see if the space is typically required.
This did require one more change. I needed to update the parse method to react to a line that starts with “>”.
```elixir
defp parse(text) do
  cond do
    String.match?(text, ~r/^\#/) -> create_heading(text)
    String.match?(text, ~r/^[[:alpha:]]/) -> create_paragaph(text)
    String.match?(text, ~r/^>/) -> create_blockquote(text)
  end
end
```
That last line directs to the blockquote code.
While throwing data at the module to check the results, I noticed a bug: if a string has multiple consecutive newlines, it doesn’t split properly and breaks the whole thing. So I updated the split to use a regex that matches any run of newlines.
```elixir
def generate(text) do
  parsed =
    Enum.map(String.split(text, ~r/(\n)+/), fn x ->
      String.trim(x)
      |> parse
    end)

  {:ok, Enum.join(parsed)}
end
```
Next up is code formatting. I expect that to take a while; I see a good amount of manipulation there.
This is the fourth post in a series about building an Elixir library. You can see the other posts here:
Just as I expected, with most of the work already done when link parsing was built, italics was pretty simple. I have some duplication that I would like to remove, but it’s not that important yet, and I’m cautious about adding abstractions without a solid reason.
What I ended up doing is adding a pipe to create_paragraph and returning the last part of that tuple. Maybe I could do something better here, but I don’t hate it.
```elixir
defp create_paragaph(text) do
  "<p>#{elem(Hyperlink.convert(text), 1)}</p>"
  |> Italics.convert
  |> elem(1)
end
```
And for the italics work, I created a new module and defined methods very similar to those in the Hyperlink module.
```elixir
defmodule Italics do
  def convert(string) do
    text =
      string
      |> capture_sections
      |> build_emphasis
      |> replace_emphasis(string)

    {:ok, text}
  end

  defp replace_emphasis([head | tail], text) do
    replace_emphasis(tail, String.replace(text, matcher(), head, global: false))
  end

  defp replace_emphasis([], string) do
    string
  end

  defp capture_sections(string) do
    Regex.scan(matcher(), string, global: true)
  end

  defp build_emphasis(captures) do
    Enum.map(captures, fn x ->
      "<em>#{Enum.at(x, 1)}</em>"
    end)
  end

  defp matcher do
    ~r/_(?<text>[a-zA-Z0-9\s]+)_/
  end
end
```
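To make the pipeline concrete, here’s what the two key pieces return for a sample string (the input is my own example):

```elixir
matcher = ~r/_(?<text>[a-zA-Z0-9\s]+)_/

# capture_sections: Regex.scan returns [full_match, capture] for every hit
Regex.scan(matcher, "some _text_ here and _here too_")
# => [["_text_", "text"], ["_here too_", "here too"]]

# replace_emphasis replaces one match per recursive pass (global: false),
# which keeps each replacement aligned with the order of the captures
String.replace("some _text_ here", matcher, "<em>text</em>", global: false)
# => "some <em>text</em> here"
```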
If you’ve read the parsing links article, this will look really familiar. The methods have almost the same names, and they share similar responsibilities. The one addition here is the matcher function to hold that regex for me; I got tired of forgetting to update the second place that used the same regex.
Looking at this, I can see where I could extract it. But right now it just feels like premature extraction. I like opening this file and seeing everything Italics does right in front of me.
And lastly, the tests are really simple.
```elixir
test "italicizes text" do
  assert Bargain.generate("there is some _text_ here") == {:ok, "<p>there is some <em>text</em> here</p>"}
  assert Bargain.generate("there is some _text_ here and _here too_") == {:ok, "<p>there is some <em>text</em> here and <em>here too</em></p>"}
  assert Bargain.generate("there _is some text here and here too_") == {:ok, "<p>there <em>is some text here and here too</em></p>"}
end
```
I just realized that the Italics module doesn’t have unit tests, but they aren’t really necessary. That logic is tested well enough.
Next up, bold text. I expect the same sort of path. Pretty simple.
This is the third post in a series about building an Elixir library. You can see the other posts here:
It’s been a minute since I’ve posted one of these updates, due to a mixture of other things getting in the way and struggles with recursion. I thought I understood recursion in Elixir, but apparently not. I learned recursion so long ago that I’ve forgotten most of it. At any rate, I got it, and I think the lesson I learned is that `[head | tail]` is the best approach when you need to iterate over the same collection multiple times.
The problem is pretty simple in theory: look for any part of a string shaped like `[text](url)`, extract the contents, build a hyperlink with the contents of `()` as the url and the contents of `[]` as the link text, then replace the whole `[]()` chunk with the actual hyperlink. Do this globally.
The trick here is immutability. The string that you are updating must be new every time. I’ll fix that later too.
I’ll just present my working solution.
In my main Bargain module, I updated create_paragraph to call out to a new module I made called Hyperlink.
```elixir
defp create_paragaph(text) do
  "<p>#{Hyperlink.convert(text)}</p>"
end
```
And here’s the entire Hyperlink module
```elixir
defmodule Hyperlink do
  def convert(string) do
    links =
      capture_link_segments(string)
      |> build_link

    replace_link(links, string)
  end

  defp replace_link([head | tail], text) do
    replace_link(tail, String.replace(text, ~r/\[\w+\]\(http:\/\/\w+\.com\)/, head, global: false))
  end

  defp replace_link([], string) do
    string
  end

  def capture_link_segments(markdown) do
    Regex.scan(~r/\[(?<text>\w+)\]\((?<url>http\:\/\/\w+\.\w+)\)/, markdown)
  end

  defp build_link(captures) do
    Enum.map(captures, fn x ->
      "<a href='#{Enum.at(x, 2)}'>#{Enum.at(x, 1)}</a>"
    end)
  end
end
```
There is some interesting stuff, so I’ll unpack this a bit.
I will start with convert. That will take our markdown string, pull out the segments (the parts in [] and ()), build links for those into a list, and then replace them all before returning the string. Looking at this now, it should return a tuple {:ok, string}. I’m a believer that all public functions should return tuples, but I’ll do that later.
The capture_link_segments function returns a list of those captures. More precisely, a list of lists. Given the string “This is a [link](http://google.com) and [another](http://google.com)”, it would return:
```elixir
[
  ["[link](http://google.com)", "link", "http://google.com"],
  ["[another](http://google.com)", "another", "http://google.com"]
]
```
And I just found a bug. That regex doesn’t handle multiple words in the link text.
Then on to building the link with build_link. Pretty simple here: map over the list and create an actual HTML link.
Now the interesting part, and the part that gave me the most trouble. Recursion.
The tricky part is the string of text. We need to manipulate this, but Elixir likes immutability. I went through lots of very messy iterations of this. I won’t list them here, but they are in the commits on GitHub. What I finally had to do is drop back and review recursion in Elixir and write recursion code outside of this project, so I could focus on the solution. Dave Thomas explains this really well in Programming Elixir, a great resource. I went back to that book for review.
Turns out the solution is pretty simple. Elixir has a way of making complicated things simple, but you first need to understand the complicated thing.
By using `[head | tail]` I was able to repeatedly iterate over the collection until it was empty. And by passing a string as the second parameter, I was able to build a new string on each pass until I was done. The meat of the recursion is in the first replace_link([head | tail], text) clause. The match that just returns happens in replace_link([], string).
As long as the first parameter has something in the list, it fires the string replacement operation. If it’s empty, it just returns the string. It took me days to get here. Those days were just a couple of hours here and there, but still, a very long time. Hopefully, I’ll remember this pattern going forward.
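Stripped of the link details, the pattern is just this. A minimal sketch with a made-up placeholder token `"X"` instead of the link regex:

```elixir
defmodule Sketch do
  # list has items: replace the first placeholder, recurse with the tail
  def apply_all([head | tail], text) do
    apply_all(tail, String.replace(text, "X", head, global: false))
  end

  # list is empty: nothing left to substitute, return the built-up string
  def apply_all([], text), do: text
end

Sketch.apply_all(["first", "second"], "X then X")
# => "first then second"
```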
Next up is fixing the bugs I found, then italics, bold, etc… I should be able to reuse some of this recursion logic to complete that.
Update: The fix for multiple words was simple. I needed to update the regex to accept word characters or whitespace: `~r/\[(?<text>[\w\s]+)\]\((?<url>http\:\/\/\w+\.\w+)\)/`. I still need to address the url part of the regex.
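With the updated regex, the multi-word case scans correctly (the sample string is my own):

```elixir
fixed = ~r/\[(?<text>[\w\s]+)\]\((?<url>http\:\/\/\w+\.\w+)\)/

# the [\w\s] class now lets spaces through in the link text
Regex.scan(fixed, "This is a [multi word link](http://google.com)")
# => [["[multi word link](http://google.com)", "multi word link", "http://google.com"]]
```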
I work on an app that logs a bunch of data. Typically, I wouldn’t test logging, but we have some issues where bad pattern matching is causing some 500 errors because it can’t handle the responses we are getting.
After some poking around, I landed on a cool way in ExUnit to test for specific log messages. But there is one major issue that tripped me up.
The module is ExUnit.CaptureLog, which allows you to pass in a function and check that the log has the message you expect.
On the surface, it’s pretty simple:

```elixir
assert capture_log(fn -> Logger.error(msg) end) =~ msg
```

That asserts that Logger.error(msg) actually logs msg. Just replace Logger.error(msg) with the function that performs the logging and you’re all set. Mine looks like this, because I know that if it got as far as logging something containing webhook_received, the code executed properly:

```elixir
assert capture_log(fn -> Persistence.MessageQueue.log_message(event_data) end) =~ "webhook_received"
```
Unfortunately, I was getting a success when it clearly wasn’t passing. I really don’t know why. What I do know is that the log level setting in config directly affects this, and it defaults to :warn. By setting it to :info, messages start to flow through when testing, and the assertions start to actually pass and fail accurately.
You may have already noticed that setting the log messages to info makes for a really noisy test suite. But, there is a way around that also.
I haven’t gotten to it yet, but @tag :capture_log will allow you to define which tests should report logs and which shouldn’t.
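Putting the pieces together, a test sketch looks something like this. The module and message names are made up, and it assumes the :logger level in config is :info or lower:

```elixir
ExUnit.start()

defmodule WebhookLoggingTest do
  use ExUnit.Case
  import ExUnit.CaptureLog
  require Logger

  # :capture_log keeps this test's log output from spamming the suite
  @tag :capture_log
  test "logs webhook_received when a message is handled" do
    # stand-in for something like Persistence.MessageQueue.log_message(event_data)
    log = capture_log(fn -> Logger.info("webhook_received id=42") end)
    assert log =~ "webhook_received"
  end
end
```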
This forum thread was super helpful.