AI for the rest of us

Last week, I gave a talk to a group of 20 non-software professionals, from real estate to law to forestry, about AI. Big thanks to Kyle Yoshida for hosting.

Rather than a traditional talk, I wanted it to feel like a real discussion: how folks are already using AI today, and what current capabilities are worth knowing to improve workflows. Since nobody in the room writes code for a living, I kept things jargon-free and focused on what’s practical. I didn’t bother splitting hairs between AI, LLMs, and machine learning - “AI” works well enough for most people.

In this post, I’m sharing a few notes from that conversation and the talk I gave on where AI is today.

(I’m sitting next to the TV presenting)

Discussions

Here were some of the things I learned from the group discussion:

  • There is effectively a 100% adoption rate of AI, mostly ChatGPT, among those in high school and university.
  • The younger generation is having a tough time - not only has addictive social media shortened attention spans, especially among young men, but AI is now eroding critical thinking as well.
  • Most people do not pay for premium AI features, understandably sticking with the free plan of ChatGPT. For those in the tech bubble, it’s important to understand that the free tier is the baseline for most people.
  • There isn’t yet a lot of concern that AI will take anybody’s jobs. However, there is an understanding that things will be harder for young professionals just entering the market.
  • Most people have used ChatGPT to write emails or rewrite long-form text, but their use of AI has not gone much beyond that.

AI capabilities

After our initial discussions, I covered some capabilities of AI that are often not obvious to those outside the technology ecosystem. The following sections act as a simple guide to getting started with each, along with examples.

  1. Prompting can get you further than you think
  2. AI can browse the internet for you
  3. AI can build tools for you
  4. AI can do data analysis for you
  5. Quick models vs thinking models

Capability 1: Prompting can get you further than you think

At this point, most folks have used AI to write an email.

Prompt:

Write me an email to my team member following up on our meeting from last Friday

Output:

Subject: Follow-Up on Our Meeting from Last Friday

Dear [Recipient's Name],

I hope you're doing well. I wanted to follow up on our meeting from last Friday and thank you again for taking the time to meet with me. It was great discussing [briefly mention key topic(s) discussed, e.g., the upcoming project, potential partnership, etc.].

As we talked about, [mention any specific action items or next steps you agreed upon during the meeting, if applicable]. I’m looking forward to moving forward with [next steps] and wanted to check in to see if you had any additional thoughts or updates on the matter.

Please let me know if there's anything further you need from my side, or if you'd like to set up another meeting to discuss anything in more detail.

Thank you again, and I look forward to hearing from you.

Best regards,
[Your Full Name]
[Your Position]
[Your Contact Information]

It’s okay, but a lot of folks have a sixth sense about what AI content looks like. And this email definitely triggers all of the warnings.

  • Overly verbose
  • Generic
  • Too nice

But most folks stop here. Providing context in your prompt can go a long way:

Prompt:

Email example 1:
hey I need this your status report pronto, where the hell is it. call me

Email example 2:
sup jon how are you doing. it was great meeting you last week, we need to meet again next week. let's do it. here's my number: xxx-xxx-xxxx

Email example 3:
Hello mr ceo, thanks for taking the time to meet our team last week. We're thrilled to work with your team on the next government contract. let's stay in touch.

First, analyze and explain my own email style.

Then, using that email style, generate an email for the following, staying as true to my example as possible:

Write me an email to my team member following up on our meeting from last Friday

Output:

hey, following up on our meeting last friday. where are we on that? need an update. let me know.

Now it’s hard to tell that this email was AI generated, and it keeps your own personal tone of voice.

It will be worth your time to build your own personal “prompt context” library. It should include information about you (as much as you are comfortable providing) along with writing samples the AI can use to learn your writing style. Even with a simple single-file text library, copy/pasting into your chat window becomes quick and easy, and the outputs come out tailored exactly to your needs.
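If you or a teammate is comfortable with a little code, the same idea can be automated so you don’t have to paste your context library in by hand. Below is a minimal sketch, assuming the OpenAI Python SDK and an API key are already set up; the file name my_style.txt and the model name are placeholders for whatever you actually use:

from openai import OpenAI

client = OpenAI()  # reads your API key from the OPENAI_API_KEY environment variable

# Your personal context library: a short bio plus a few writing samples
with open("my_style.txt") as f:
    my_context = f.read()

task = "Write me an email to my team member following up on our meeting from last Friday"

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whichever model you have access to
    messages=[
        {"role": "system", "content": "Match the user's writing style as closely as possible.\n\n" + my_context},
        {"role": "user", "content": task},
    ],
)

print(response.choices[0].message.content)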

Capability 2: AI can browse the internet for you

If you’ve used older AI models before, you may recall that they often gave outdated information because of something called the “knowledge cutoff”. While newer models have pushed that cutoff date forward, it isn’t practical to retrain them constantly.

But having the latest knowledge is important! Nowadays, AI can browse the internet for you, similar to how humans do it.

One of the most popular platforms for internet-browsing AI is Perplexity. It is free to use, extremely capable, and supports additional “thinking” modes that we’ll cover in a later section.

Here is an example of Perplexity browsing for the latest concrete construction costs in Metro Vancouver:

Prompt:

What are the latest concrete structure construction cost statistics in Metro Vancouver? Broken down by hard costs and soft costs.

But it’s not just Perplexity. Even OpenAI’s ChatGPT can browse for you. In fact, for most large AI platforms, if they don’t automatically perform an internet search, there is often a “Search” button somewhere under the text input to force the behavior.

The great thing about these models is that they do their best to attach citations to each fact. We’ve all heard of “hallucinations”, where AI makes things up based on its training data; grounding answers in cited sources greatly reduces the likelihood of that happening.

Give it a try! They are more capable than you think, and can save hours of time on internet-research-heavy tasks. At minimum, they will browse and synthesize information faster than any human you know.

Capability 3: AI can build tools for you

We’ve all run into times where we need a quick tool on the browser to get things done. It might be a stopwatch (I remember our university professors back in the day using free online timers with ads during exams). It might be a word counter. A unit converter. A password generator. You get the idea.

Let’s try out Claude and see what it can do for us:

Prompt:

Build me a stopwatch with a big button in the middle of the screen to start it, where the screen increasingly turns orange as it approaches 10 seconds, finally flashing red when 10 seconds is hit.

Not only are these fun to build, but they’re infinitely customizable via natural language (“make it blue”, “move the button to the left”, “set the timer to 30 seconds”, “ask the user for how much time they want first”, etc.) and are sometimes shareable as public URLs.

You don’t need to be a developer to have code work for you. Of course, you will need some technical capabilities to move this functionality out of the LLM UI and into something more production-ready, but don’t let that deter you from getting the value you or your teammates might need.

Capability 4: AI can do data analysis for you

Probably one of the most overlooked aspects of modern AI is that it can do data analysis for you.

Let’s analyze the following .csv of names: sample-names.csv

There are many different types of names here:

John Smith
Jane
Larry-Joe Thompson
Susan P.

Let’s say I want to split things out into first names and last names. Let’s put it into ChatGPT 4o with the following prompt:

Analyze the following csv file, and create a new sheet with first name and last name separated.

And what do we get? After a few seconds of “analyzing”, ChatGPT spits out a perfect CSV with exactly what we asked for.

It’s important to understand what ChatGPT did here - rather than relying solely on its LLM capabilities to generate the final output, it analyzed the data in each column first, generated Python code, then ran that code in an isolated environment to produce the output.

In fact, if you double-click the “analyzed” section, you can see the code that was run. It may look something like this:

import pandas as pd

# Clean and process the data

# Path to the uploaded file inside ChatGPT's sandboxed environment
file_path = "/mnt/data/sample-names.csv"

# Read CSV with flexible parsing (skip malformed rows and stray whitespace)
df = pd.read_csv(file_path, on_bad_lines="skip", skipinitialspace=True)

# Ensure 'name' column exists
if 'name' in df.columns:
    # Split 'name' into 'first_name' and 'last_name'
    # (single-word names like "Jane" get an empty last_name)
    df[['first_name', 'last_name']] = df['name'].str.extract(r'(\S+)\s*(.*)', expand=True)

    # Save a new sheet with the separated names
    output_path = "/mnt/data/cleaned_names.csv"
    df.to_csv(output_path, index=False)

    # Display cleaned data (ace_tools is specific to ChatGPT's sandbox)
    import ace_tools as tools
    tools.display_dataframe_to_user(name="Cleaned Names Data", dataframe=df[['first_name', 'last_name']])
else:
    output_path = None

output_path

Most people don’t know that some LLMs can do this, and it can be a huge productivity boost for anyone working with large datasets. Using LLMs to generate the final output data directly is not reliable (especially for big datasets), but using LLMs to generate code and then running that code is often reliable, and the full code output helps you debug further if needed.

We can ask far more complex questions, such as:

  • Break down the number of people living in each city/state
  • Produce a graph of the number of names with last names vs. those without last names (see the code sketch after this list)
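To demystify what happens behind the scenes for that second question, here is roughly the kind of pandas code the AI might generate and run. This is a sketch, not ChatGPT’s actual output; it assumes the same “name” column from the earlier example and that matplotlib is available:

import pandas as pd
import matplotlib.pyplot as plt

# Load the same sample-names.csv from earlier
df = pd.read_csv("sample-names.csv", skipinitialspace=True)

# A name "has a last name" if it contains more than one word
has_last_name = df["name"].str.strip().str.contains(r"\s")

# Count each group and give it a readable label
counts = has_last_name.map({True: "Has last name", False: "No last name"}).value_counts()

# Simple bar chart of the two groups
counts.plot(kind="bar", title="Names with vs without last names")
plt.ylabel("Count")
plt.tight_layout()
plt.show()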

If you’d like to try more complicated datasets, try this sample dataset of home sales in Richmond, BC:

  • How many home sales are there in this dataset?
  • What is the average price per sqft of homes sold?
  • Break down the number of sales by the first 3 digits of the 6-digit postal code, and present it as a graph
  • Group the sale prices of homes into buckets of $100,000, count them, and present the result as a graph (see the sketch after this list)
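As with the names example, the AI answers these by writing and running code. Here is a rough sketch of what the last question might turn into; the file name richmond-home-sales.csv and the sale_price column are assumptions, since the actual headers in your dataset may differ:

import pandas as pd
import matplotlib.pyplot as plt

# Column name "sale_price" and the file name are assumptions; check your dataset's headers
df = pd.read_csv("richmond-home-sales.csv")

# Group sale prices into $100,000 buckets and count the sales in each
bucket_size = 100_000
buckets = (df["sale_price"] // bucket_size) * bucket_size
counts = buckets.value_counts().sort_index()

# Label each bucket as a price range, e.g. "$500k-$600k"
labels = [f"${int(b / 1000)}k-${int((b + bucket_size) / 1000)}k" for b in counts.index]

plt.bar(labels, counts.values)
plt.xticks(rotation=45, ha="right")
plt.ylabel("Number of sales")
plt.title("Home sales by $100,000 price bucket")
plt.tight_layout()
plt.show()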

Capability 5: “Quick” models vs “Thinking” models

AI is best seen as an augmentation to your intelligence and cognitive abilities. When you go through your day, there are tasks that require different levels of thinking:

  • Low thinking: What should I cook for dinner today given what’s in my fridge?
  • Medium thinking: What are the key takeaways from this PDF report?
  • High thinking: What is the state of concrete construction costs in North America given current macro-economic factors?

One of the best ways to see thinking models in action is to check out DeepSeek (you’ll need to enable its R1 thinking mode). I encourage you to keep your prompts simple and see what it can do for you.

It is important to pick the right tool for the job. For practical purposes, however, it’s safe to simply pick the smartest, most accessible AI you have access to at any given time.

Conclusion

My primary takeaway from all this was that we are still really early. AI adoption will be fast and slow at the same time:

  • Fast in spaces like technology, especially software development, where the benefits are immediately noticeable and adoption is already widespread
  • Slow in most spaces outside technology, starting with low-hanging fruit like writing text and summarizing PDFs.

I also suspect that many traditionally run businesses will never adopt AI at a level that produces tangible results. More than likely, it will take a change in leadership, or acquisition by tech-enabled private equity/roll-ups, to truly realize that value.

I’m no expert in AI, just an appreciator and an effective user of it. I’m glad I got the chance to open people’s minds about what it can do for them right now.