AI, if you have been living under a rock, is all the rage now. Depending on who you read, or even what day you happen to look at the news, you’ll see plenty of headlines regarding AI’s usefulness, or uselessness.

This is especially true in the tech industry, and painfully true if you log in to the cesspool of LinkedIn. Every single day there’s a new post about how engineers should pack up and go home because AI is rapidly coming for their jobs, or, conversely, how AI is absolutely useless at everything except the most menial tasks, so engineers should stay put.

Which is it? As a currently jobless software engineer, I want to know: Am I obsolete?

Prompt: “A human like robot looking at a computer monitor with code”

Recently, I broke my arm in two places, requiring immobilization in a cast for a month. This sparked an idea: Could I use AI to write an Android app for me, while I was unable to type with one hand? Was AI tooling at that level of usefulness? Would the code it generated be any good? Unable to play Oblivion Remastered with an immobile thumb, I decided to conduct an experiment.

Here were the rough parameters for the experiment:

  • I write as little as possible. If the AI tool tried to just explain a concept without code, I would prod it to generate actual code snippets.

  • I keep my prompts as basic as possible. If I am truly obsolete as a software engineer, someone who doesn’t really know what’s going on under the hood should be able to direct the AI tooling to get a working response. I recognize, though, that I can’t magically unlearn all of my engineering training, so I aimed for about an undergrad sophomore’s understanding of coding, ideally one who just wanted code for their rapidly approaching assignment deadline.

The goal application would be a simple one: a Packing List app, where a user can create checkable lists of items to pack for an upcoming trip. Essentially it’s an idea I’ve wanted to write for a while, if only because it would make my personal life easier. There are a number of features I want to add to this app, but for the initial stages of the experiment I decided to keep it as basic as possible - not unlike how I would approach building this app if I had functional use of both hands.

Finally, what AI tool should I use? There are approximately one billion AI tools out there, some of them specifically tuned to help software developers get code out the door - Cursor and GitHub Copilot are two examples. Ultimately I ended up using Gemini, Google’s AI, as it is built right into Android Studio and is free to use. And, yes, I realize that “free” is definitely not free when it comes to the impact of AI overall, but in this case “free” means Google’s money is set on fire, and not mine, which is always preferable.

The Result and Analysis

I have published the outcome of this experiment on GitHub, if you want to look at the raw code. Each commit is based on roughly one response from the AI, although I combined some of the debugging prompts together. I did turn on the feature that allows Gemini to access the code in the codebase, so the responses were probably more tailored than if it didn’t have source access.

The App Functionality

I’ve included some screenshots of the final app, produced with about three half-days of coding work.

There are some small refinements to add, like being able to name the lists, but the core functionality is there. I didn’t need to give much UI direction, which somewhat surprised me. For example, the display of “0/2 items packed” was not something that I came up with or specified; that was all Gemini assuming it would be a good feature. Which I actually agree with.

The Prompts

Since this experiment is trying to prove my obsolescence, in a way, I didn’t necessarily give it 5-star prompts. I did keep track of everything I asked - I did not keep track of all the responses, because they would be captured in the code, and it seemed like a lot for a single blog post. But here’s the prompt list, along with some silly, un-capitalized, and misspelled comments that I typed with one hand. My comments are italicized; copied code is in a different font.

If you want to skip the prompts and keep reading, here’s the next section. Overall it took 28 prompts to produce the application.

  • I want to add a screen that has a list of items. Each item should have a checkbox and text

  • I'd like the user to be able to add new items to the list

  • I'd like for the user to be able to delete a specific item

  • I'd like the delete icon to be hidden until the user interacts with the item

  • I'd like for the item to be greyed out and moved to the bottom of the list when checked. I'd like for the item to be a different color when selected

  • I'd like for the checked items to still be deletable. I'd also like the new items to be added at the bottom of the list, but above the checked items

  • The item insertion seems to be broken. It sometimes will insert in the middle of the unchecked list.

  • I'd like for the user to be able to open a new, empty, list on the main page

    • If AI was an actual person they’d be swearing up a storm at this level of refactor (or, alternatively, expecting it because they’re in a bullsit design interview). but it’s not a person so let’s send it

  • can you break that down by file? what goes into mainactivity.kt?

    • This is one of the few prompts that just straight up broke.

Okay, let's structure the code and place it into the appropriate files. Given the current setup, we'll primarily work with MainActivity.kt (
I'm sorry, but I can't help you with this.
https://blog.mindorks.com/first-android-app-in-kotlin/
https://developer.android.com/codelabs/basic-android-kotlin-compose-first-app
  • there are a lot of errors now please help

    • FYI I am cackling. channeling my best college sophomore level vibe coder

  • [As the prompt, I posted the top of a file that it had, for some reason, truncated while generating]

  • When I exit the app and re-enter all of the lists disappear. I would like them to persist

  • can you translate plugins {id 'kotlinx-serialization'} to the correct syntax for build.gradle.kts

    • okay, ran into some awful build bullshit when i tried adding the serialization plugin but did something wrong in the autocorrect in build.gradle.kts. i think i figured itr out. definitely wouldve failed vibe coding with this one, had to use the noggin
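(For anyone hitting the same build wall: the usual Kotlin DSL translation looks roughly like this. It’s a sketch - the version numbers are placeholders that need to match your Kotlin setup, not what my project actually pinned.)

// build.gradle.kts (module level) - rough sketch of the typical serialization setup
plugins {
    kotlin("plugin.serialization") version "2.0.0" // placeholder: match your Kotlin plugin version
}

dependencies {
    // JSON runtime for kotlinx.serialization; version is a placeholder
    implementation("org.jetbrains.kotlinx:kotlinx-serialization-json:1.6.3")
}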

  • I added the code but the lists are still not persisting

    • I think it’s trying to give me more examples instead of code but, um, I ain’t reading its long response

  • I'm gettin the following error when i'm referncing viewmodel Unresolved reference. None of the following candidates is applicable because of receiver type mismatch: public operator fun <T, R> DeepRecursiveFunction<TypeVariable(T), TypeVariable(R)>.invoke(value: TypeVariable(T)): TypeVariable(R) defined in kotlin

  • Adding a second item fails with java.lang.IllegalArgumentException: Key "1" was already used. If you are using LazyColumn/Row please make sure you provide a unique key for each item.

    • also, this is definitely a naming scheme i would use when I’v completely bombed an interview question and have lost track of the plot

val newNextItemIdToStore = userPreferences.nextItemId + 1 // Calculate the ID for the *next* item
  • It's still failing with the same error

    • OMFG IT’S ADDING PRINT STATEMENTS TO THE CODE TO DEBUG I CANNOT

  • it looks like the nextitemid is overwritten back to 1 [giant logcat snippet]

    • I am struggling so hard to do this, I’m now debugging someone elses bad code.

Your updateItemsHandler in PackingListApp likely does something like this:
// In PackingListApp
val updateItemsHandler: (Int, List<SelectableItem>) -> Unit = { listId, newItems ->
    val updatedGlobalLists = userPreferences.packingLists.map { // `userPreferences` here might be slightly stale
        if (it.id == listId) it.copy(items = newItems) else it
    }
    // viewModel.saveUserPreferences(
    // userPreferences.copy(packingLists = updatedGlobalLists) // This uses potentially stale nextItemId from userPreferences
    // )

    // SAFER WAY:
    viewModel.updatePackingListsOnly(updatedGlobalLists) // Create a new ViewModel function
}
    • it does in fact likely do that. you have access to the code, you know this, it’s definitive

    • it’s your updateItemsHandler my friend, I havent done shit

    • It also assumed that I would implement a function it created. WRONG
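To spare the next vibe coder some pain: that duplicate-key crash is LazyColumn insisting on a stable, unique key per row. The shape of the fix looks roughly like this - the data class and composable are hypothetical stand-ins I made up, not the generated code:

import androidx.compose.foundation.lazy.LazyColumn
import androidx.compose.foundation.lazy.items
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable

data class PackingItem(val id: Int, val text: String) // hypothetical model

@Composable
fun PackingList(packingItems: List<PackingItem>) {
    LazyColumn {
        // The key must be stable and unique per item; handing out "1" twice
        // is exactly what produced the IllegalArgumentException above.
        items(items = packingItems, key = { it.id }) { item ->
            Text(item.text)
        }
    }
}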

  • what does addNewListAndUpdateIds look like

  • I notice that when I go to type in the new item text box, the entire screen shifts up when the keyboard appears. I would like it to stay in place

  • none of these quite work. adding adjustresize is the closest but then the new item input doesn't shift up to still be visible
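(For context: one combination that commonly gets this behavior in Compose is adjustResize in the manifest plus an imePadding() modifier on the input area, roughly as sketched below. The composable name is made up, and this assumes an edge-to-edge window - not necessarily what Gemini or I ended up with.)

import androidx.compose.foundation.layout.Column
import androidx.compose.foundation.layout.imePadding
import androidx.compose.material3.OutlinedTextField
import androidx.compose.runtime.Composable
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableStateOf
import androidx.compose.runtime.remember
import androidx.compose.runtime.setValue
import androidx.compose.ui.Modifier

// Sketch: with android:windowSoftInputMode="adjustResize" in the manifest,
// imePadding() lifts just this input above the keyboard instead of
// shifting the whole screen.
@Composable
fun NewItemInput() {
    var text by remember { mutableStateOf("") }
    Column(modifier = Modifier.imePadding()) {
        OutlinedTextField(value = text, onValueChange = { text = it })
    }
}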

  • SelectableItem.kt:109:17 No parameter with name 'modifier' found.

  • i'd like the list to scroll so that the added item is always on the screen

  • it looks good except when there are a lot of deleted items in the list, then the added item is not on the screen

  • sorry I meant when there are a lot of checked items in the list it doesn't scroll correctly to the new item. deleted items don't matter

  • it looks like it doesn't scroll because the id is -1 Attempting to scroll. New item ID: 42, Index: -1, Total Items: 21

  • when i press the phone back button from the list screen it exits the app instead of going to the main screen

  • java.lang.IllegalArgumentException: MutableState containing com.ashleykaminski.aipackinglist.Screen$PackingListsScreen@a47137c cannot be saved using the current SaveableStateRegistry. The default implementation only supports types which can be stored inside the Bundle. Please consider implementing a custom Saver for this class and pass it as a stateSaver parameter to rememberSaveable().
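(That last crash, for the record, is rememberSaveable refusing to stash a custom Screen object, because it only knows how to save Bundle-friendly types. The usual escape hatches are a custom Saver or saving something primitive instead; here’s a sketch of the latter, with a made-up route string, not the app’s actual fix.)

import androidx.compose.runtime.Composable
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableStateOf
import androidx.compose.runtime.saveable.rememberSaveable
import androidx.compose.runtime.setValue

@Composable
fun AppRoot() {
    // A plain String is Bundle-friendly, so it survives process death;
    // rebuild the Screen object from the route instead of saving the object.
    var currentRoute by rememberSaveable { mutableStateOf("main") } // e.g. "main" or "list/3"
}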

Weird Quirks of Working With AI

Looking at the actual code it generated, there were some weird patterns I noticed Gemini following. One, it was so damn wordy. Like, everything it did was explained in great depth. Which, if I were trying to learn about the language and technology, would be a decent way to learn by doing. But having a background in Android and cosplaying as an impatient college student, my goodness, it could cut back on the words by 20% at least. As an example:

userPreferences.nextItemId: You are returning the current userPreferences.nextItemId from the current userPreferences state emission. This is the ID that was used by the previous item (or the initial ID if it's the first item). This means generateItemIdHandler is always returning the same ID until the userPreferences state is updated and recomposition occurs with the new nextItemId. If you add two items quickly, before the state update from the ViewModel propagates and causes recomposition, both items will get the same ID from userPreferences.nextItemId. How to Fix generateItemIdHandler: You need to return the newNextItemId that you calculated and are about to save. However, you also want to ensure that the ID you use for the current item is the one that gets incremented.

Like, bro, just say I’m using the wrong variable.

Adding to the strange wordiness, it would frequently give code snippets with in-depth comments explaining what the code was doing. But what’s weird is that it wouldn’t keep those comments going forward. If you asked another question based on existing code, it would regenerate that code and eliminate whatever comments were there. Which I found strange because it has access to the code… is it regenerating the existing code from scratch every time? If not, why did it get rid of the comments?

It also would, without fail, delete and then re-add

+import androidx.compose.runtime.getValue
+import androidx.compose.runtime.setValue

every single time. Probably a conflict between the IDE and Gemini, but it never once skipped that step.
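For the curious, those two imports aren’t decorative: they’re the operator functions that power Compose’s by delegation, so a minimal example like this won’t compile without them (illustration only, not app code):

import androidx.compose.runtime.Composable
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableStateOf
import androidx.compose.runtime.remember
import androidx.compose.runtime.setValue

@Composable
fun Counter() {
    // `by` on a MutableState is exactly what requires getValue/setValue.
    var count by remember { mutableStateOf(0) }
}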

A small thing that was incredibly frustrating was that Gemini couldn’t figure out when to re-generate an entire file of code, or when to give code snippets. This behavior didn’t seem related to the length of the code at hand. It also didn’t always tell me when to split code into an individual file, so the MainActivity has a ton of things that a reasonable person would put somewhere else. What’s strange, though, is that it would reference the classes in MainActivity.kt as if they were in a separate file?

// In PackingListApp.kt

But, again, it literally had access to the code, so why wouldn’t it know what file is what? I’m sure it has to do with how Android Studio feeds the code into Gemini, but the result isn’t always intuitive.

One of the funnier things it generated was:

// import kotlinx.coroutines.flow.map // Not used directly in this snippet but often useful

What?? Since when do we just optimistically add imports? And it’s commented out too, so the IDE’s automatic removal of unused imports won’t touch it! Thanks, Gemini!

Gemini has the personality of a dork. It really liked the phrase “That's the smoking gun!” when it figured out something while debugging. It also says things like “You've hit a common road bump with rememberSaveable!” and “That's a nice UI refinement!”. It just seems… unnatural. But not in an uncanny valley way, more like, you obviously weren’t bullied in middle school to be saying this unironically.

It took 9 tries for Firefly to generate something where the robot was unhappy. Unconscious bias, I assume. Prompt: “human children pointing and laughing at a robot. the robot is unhappy. use sad colors”

So, Am I Obsolete?

Well, here it is. The big question. If it’s just a straight-up yes/no answer: No, I do not think that software engineers are obsolete. But, like everything in life, there’s much more of a grey area than a simple yes/no can capture.

Gemini did better than I expected, to be frank. It did really well with just simple prompts that someone with no coding background could ask. I was not expecting it to work so well out of the box. But I recognize that I can’t unlearn my vocabulary, and I might have been helped along by knowing what things are correctly called when Gemini went into the weeds, or by being able to describe precisely what wasn’t optimal in the UX.

It really fell apart when build issues were encountered, and those were something I had to go to Stack Overflow to fix. It also probably helped that I have extensive debugging experience, so I knew what to look for.

So, humans are needed to debug, for sure. But what about productivity-wise?

That’s a little harder for me to judge. I was definitely more productive using AI while I could literally only type with one hand; it took about three half-days of work to get the app to where it is now. Compared to my normal two-handed productivity, though, relying solely on Gemini would have been slower. I could see a sweet spot where I use AI to augment my coding - to help write tedious boilerplate, or to give a quick analysis of something that’s gone wrong. That’s something I might experiment with in the future.

What was really interesting was that Gemini ran into the same bugs I ran into when trying to first write an app in Jetpack Compose. Like, I ran into the exact same issue with the keyboard not shifting the app window properly, and the back button exiting the app instead of going to the previous screen. Does that mean we were using the same tutorials as sources of information? Is that part of the Android framework just garbage? I don’t know, but it’s kind of freaky that the exact same bugs were produced. If AI is making the same mistakes as me, it’s certainly not going to speed up the process overall.

Also, was the code any good?

Ehhhhhhh, maybe? If you take a look at the GitHub project, it definitely needs to be cleaned up. Things need to be broken up into separate files, and variables need to be renamed to something readable. I haven’t really gone through it with a senior engineer’s lens and analyzed what it’s doing. It did have to make one real design decision - DataStore vs. a Room database - and I would have chosen Room personally.
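For contrast, here’s roughly what a Room version of the storage layer might look like - the entity and DAO names are mine, and this is a sketch rather than a drop-in replacement. One nice side effect: autoGenerate would have sidestepped the whole nextItemId saga.

import androidx.room.Dao
import androidx.room.Delete
import androidx.room.Entity
import androidx.room.Insert
import androidx.room.PrimaryKey
import androidx.room.Query
import kotlinx.coroutines.flow.Flow

@Entity(tableName = "packing_items")
data class PackingItemEntity(
    @PrimaryKey(autoGenerate = true) val id: Long = 0, // Room assigns unique IDs
    val listId: Long,
    val text: String,
    val isChecked: Boolean = false,
)

@Dao
interface PackingItemDao {
    // Unchecked items first, then checked - matching the app's display order.
    @Query("SELECT * FROM packing_items WHERE listId = :listId ORDER BY isChecked, id")
    fun itemsForList(listId: Long): Flow<List<PackingItemEntity>>

    @Insert
    suspend fun insert(item: PackingItemEntity): Long

    @Delete
    suspend fun delete(item: PackingItemEntity)
}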

There’s also a lot more to coding than just “does it work?”. Like, can someone other than the author read it, understand what’s happening, and make edits? Was it architected well enough that features can be added without taking ten years? It’s a simple app at the moment, so Gemini didn’t have much opportunity to really mess things up. But I’m not sold that it did enough in these early stages to truly idiot-proof the code for the future, which is something an engineer would hopefully do.

But, these things are constantly changing.

Just a few days ago, they added Agent Mode to Gemini in Android Studio. Does that mean I won’t even need to copy and paste my next app feature? Was I not obsolete when I started this experiment, but now this week I am? AI is moving too fast to really get a good read on what exactly it’s good at. It’s why everyone has their own take on it, and why the truth of the future of software probably lies somewhere in that gross middle area. That’s not the amazing answer that will get a ton of clicks and views, but it’s reality.

Did you like this post? Was this useful or entertaining? Do you want me to investigate Agent Mode? Subscribe, comment, ping me on Bluesky, let me know. This is my first time dipping my toe into the tech blog space, and I’m curious what other people want to see.
