How AI Can Help with Your Online Job Search


In a previous blog post, we discussed fundamental steps for a successful online job search. To take your online job search to the next level, artificial intelligence (AI) may be the key. In this post, we will explore AI tools to help with your job search, interview preparation, career planning, and more.

AI can help match you with jobs that fit your skills, experience, and preferences for location, salary, and other factors. Two such tools are Jobright and Sonara. Jobright scans resumes and job descriptions to calculate matches and suggest insider connections for referrals. Sonara automates the application process, submitting applications on your behalf while continuously searching for new openings.

AI tools can also help you get noticed by crafting resumes and cover letters that are applicant tracking system (ATS) friendly. Tools like Enhancv, KickResume, and Resume.io offer real-time feedback and improvement suggestions to help tailor your resume and cover letter so your materials stand out.

AI tools can also help with interview preparation. FinalRound and Boterview simulate interviews by asking industry or profession-specific questions and providing feedback on your responses. Interview preparation AI tools can help you refine your communication and help build confidence for your interview.

Some of these tools go beyond helping you find your next job by helping you develop a personalized career path, identify long-term goals, and pin down the steps needed to achieve them. Tools like Wobo and JobCopilot help identify skill gaps, recommend learning resources, and provide ongoing support through AI-powered coaching.

It is important to recognize that most of these services have a fee associated with them based on the level of service you wish to receive. To get a full breakdown of AI tools to help with job searching, resume and cover letter creation, interview preparation, and career coaching, check out JobCopilot’s blog post covering 12 AI tools.

Remember, these AI tools are meant to assist you in crafting your application and landing the job that is right for you. You still need to use these tools wisely in your search.

Here are a few things from the ChiefJobs blog post on the 7 Best Practices for Using AI When Applying for Jobs Online to keep in mind while using AI tools, whether you are seeking your first job or the next step in your career journey:

  1. Tailor your resume with AI but keep it real. It’s always important to review and edit anything created by AI. Make sure the tool doesn’t embellish your resume or overstate your career goals and accomplishments. Ensure the resume sounds like you.
  2. Craft personalized cover letters. Let AI help you structure your letter, but make sure you inject your own motivations and stories into it. Be sure to mention specific achievements. Use the tools to give you the first draft, but don’t rely on them for the final letter.
  3. Use the AI tools for job matching and make sure to set up alerts to notify you when jobs are found. Use filters and searches for job classifications, locations, and other preferences within the tools.
  4. Make use of the AI tools for interview preparation and practice. Use feedback on tone, filler words, and delivery to improve your clarity and confidence. Make sure the AI tool is using industry-specific questions to help refine your responses.
  5. Stay authentic and ethical. AI is a tool, not a substitute for your personality and experience. Use AI to augment your voice, not replace it. Recruiters value authenticity.

For more information about using AI in your job search:

Roles, Goals, and Limitations – Prompting Your Large Language Model Leads to Better Responses


By Marc McCarty (assisted by several LLMs)

Ever since OpenAI burst on the scene in late 2022 with its free version of ChatGPT, many of us have had a love-hate relationship with large language models (LLMs). On the one hand, we have been blown away by the capabilities of these chatty, helpful bots; on the other, we find they sometimes wildly misinterpret what we are asking, or produce results that, while they appear to be accurate and useful, bear no relationship to reality!

In an earlier post I explained how LLMs differ from traditional search engines like Google or Bing, and why this happens.

In this post, I’ll discuss an important tool you can use to improve your interactions with LLMs, and increase the likelihood they will provide relevant, accurate responses. You can do this by developing thoughtful habits that govern how you ask the LLM for assistance. This is called “prompting.”  

What Do We Mean by Prompting?

“Prompting” is just the term used to describe the instructions we provide to an LLM when we initiate an interaction. Understanding the anatomy of a good prompt can appreciably increase your chance of getting accurate and relevant information from the LLM.

I believe the core characteristics of a good prompt can be summarized in three words: Roles, Goals, and Limitations.

Roles: Defining Who’s Who in the Conversation

Assigning clear roles in your prompt guides the LLM to adopt the right tone, expertise, and style when responding, and it can also help the model search out the best training data to use when formulating its response. Defining the roles involves two perspectives:

  • The LLM’s Role: Specify what persona or expertise you want the model to embody. For example, “Act as a financial advisor,” or “You are a high school science teacher.” This helps the model tailor its language, depth, and focus, producing responses that are contextually appropriate and that are aligned with your expectations.
  • The User’s Role: Describe your own background or needs. For example, if you are looking for ideas to plan a vacation, what are the ages and interests of the folks who will be making the trip? This information can help guide the LLM to areas of training more closely aligned to your objectives.

Goals: Clarifying the Desired Outcome

Every effective prompt is anchored in a clear goal. Before you ask, consider:

  • What do you want to achieve? Are you seeking a summary, a detailed explanation, a list of pros and cons, a creative story, or something else? Articulate your objectives explicitly.
  • What does success look like? Describe the ideal output format—should it be a bulleted list, a step-by-step guide, or a concise paragraph? Do you want the response to include specific hyperlinks to websites supporting the response? – Ask for it! Remember, the more specific you are, the more likely the LLM will align its response to fit your needs.

Limitations: Setting the Rules and Boundaries

Outlining the acceptable limits you want to impose on the LLM is important as well, and it can help control the quality and reliability of the output. Think in terms of how you would advise a “human” assistant who was doing research to provide your desired response (your “goals”).

For example, maybe it is very important that the LLM seek out the most current information available. If the model (like most do today) can perform an internet search to update its response, be sure to reinforce the need for it to do that when crafting an answer for you. If you want each assertion provided in the response to contain links to reliable internet-based source material, you can instruct the model to provide those.

It’s also a good idea to ask your LLM not to hallucinate responses. That might sound a bit odd; after all, if you are asking for advice on the best restaurant for Indian food in Little Rock, why would the LLM ever think you wanted it to make up a dining location?

The answer is that sometimes users want a creative response, not necessarily an accurate “real world” example. Sometimes users want the LLM to hallucinate, to be “creative” and take existing data, extrapolate, and arrange it in a new format. Asking the LLM not to do this (assuming you need factual responses) does appear to reduce the model’s tendency to guess or to make up an information source. Positive requests such as “provide sources for any conclusion that does not involve common knowledge” or “if you have inadequate data to state a conclusion, say so; do not speculate” can help.

You might also insist that the LLM take on your task one step at a time, proceeding to the next step only after it has displayed results and received your permission to continue. This can be particularly useful in directing the model so it doesn’t waste time and energy going down a path that isn’t relevant or useful.

A Practical Example: Putting It All Together

Here’s an example of how you might structure a prompt using these three characteristics outlined above:

“You are an AI language model acting as a career counselor. I am a college student with no prior work experience, seeking advice on entering the tech industry. Please provide a step-by-step guide detailing how I can determine what options are available and whether they would be a good ‘fit’ for me. Consider the most relevant interests, character traits, course selections, and life goals. Your response should include major bullet points and subcategories. It should be written in a conversational tone. You should include reputable sources for each step and avoid speculation; stick to verifiable information only. Limit your response to approximately 750 words.”

  • Roles: AI as career counselor; user as college student.
  • Goals: Step-by-step guide, bullet points, reputable sources.
  • Limitations: No speculation, only verifiable information, citations required, length limit.
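For readers who interact with LLMs through code rather than a chat window, the same structure can be assembled programmatically. This is a minimal sketch; the `build_prompt` helper and its parameter names are my own illustrative inventions, not part of any particular LLM provider’s API:

```python
def build_prompt(ai_role, user_role, goals, limitations):
    """Assemble a prompt from the three characteristics: roles, goals, limitations."""
    lines = [
        f"You are {ai_role}.",   # the LLM's role
        f"I am {user_role}.",    # the user's role
        "Goals:",
    ]
    lines += [f"- {goal}" for goal in goals]
    lines.append("Limitations:")
    lines += [f"- {limit}" for limit in limitations]
    return "\n".join(lines)

prompt = build_prompt(
    ai_role="an AI language model acting as a career counselor",
    user_role="a college student with no prior work experience",
    goals=[
        "Provide a step-by-step guide with major bullet points and subcategories",
        "Include reputable sources for each step",
    ],
    limitations=[
        "No speculation; verifiable information only",
        "Limit the response to approximately 750 words",
    ],
)
print(prompt)
```

Writing the pieces out separately like this makes it easy to reuse the same goals and limitations across many prompts while swapping the roles.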

Experiment, Practice, and Don’t Expect Perfection

We tend to forget that we are at the very earliest stages of learning how best to interact with LLMs. There are many experts out there with ideas about how to work with these cheerful bots. At the same time, programmers at each of the companies that created LLMs are constantly working to improve them, making them more intuitive, more responsive, and less prone to hallucination.

For now, the best strategy seems to be to err on the side of giving LLMs too much, rather than too little, guidance. At least in my experience, focusing on the three characteristics of good prompting (roles, goals, and limitations) is a simple yet effective way to produce better, more accurate results from your interactions with LLMs.

However, “better” is not “perfect.” Even if you prompt an LLM not to hallucinate a response, it will still do so from time to time. Even if you ask it to provide authoritative references, you will occasionally find it provides broken website links, or a citation to a source that does not exist. For now, this is an issue we will have to accept. When accuracy is important, as it often is, you simply must check, and in some cases double-check, an LLM’s response to make sure it isn’t “making stuff up.”

Yet, even with these trade-offs, I find that I use LLMs more each day. While no LLM wrote this post for me, one did shorten the time I needed to spend working on it, and it offered several suggestions to improve what I wrote. The net result, I believe, is a better product, produced in less time.

For me, that is not a bad trade-off.

Happy Prompting!

Trust But Verify: How LLMs Differ from Search Engines, and Why They Sometimes Hallucinate


Large Language Models (LLMs) like ChatGPT and traditional search engines such as Google may both help you find information, but they work in fundamentally different ways. Understanding these differences is key to using LLMs effectively—and to recognizing why LLMs sometimes “hallucinate,” or make up information.

How a Search Engine Works

A traditional search engine such as Google or Bing is essentially a giant, constantly updated database of web pages. When you enter a query, the search engine:

  • Scans its index of the web for pages containing your keywords.
  • Ranks those pages by relevance, popularity, and other factors.
  • Shows you a list of links to real, existing web pages for you to explore.

Search engines don’t generate new information; they retrieve what already exists, helping you find the most relevant sources for your query. It’s possible, of course, that the information retrieved may be inaccurate, or that your search request was incomplete and missed some important information, but a “search engine” is just that: an engine designed to seek out and retrieve information contained somewhere on the internet.
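The scan-rank-retrieve steps above can be illustrated with a toy sketch. The page names and the keyword-counting score below are purely illustrative; real search engines index billions of pages and use far more sophisticated ranking signals:

```python
# Toy "index": page -> text. A real engine indexes billions of crawled pages.
INDEX = {
    "curry-house.example": "best indian food restaurant little rock curry",
    "pizza-place.example": "best pizza restaurant little rock",
    "travel-blog.example": "little rock travel guide food",
}

def search(query):
    """Rank existing pages by how many query keywords they contain (a crude relevance score)."""
    keywords = query.lower().split()
    scored = []
    for page, text in INDEX.items():
        score = sum(1 for k in keywords if k in text.split())
        if score:
            scored.append((score, page))
    # Highest score first; ties broken alphabetically for stable output
    scored.sort(key=lambda s: (-s[0], s[1]))
    return [page for _, page in scored]

print(search("indian food little rock"))
```

Note what the function returns: pointers to pages that already exist, ranked by relevance. Nothing in the results is generated, which is the key contrast with an LLM.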

How an LLM Works

An LLM, by contrast, is trained on huge amounts of text data (books, articles, websites) to learn the patterns and relationships between words and ideas. When you ask it a question or request (this is called a “prompt”), an LLM:

  • Analyzes your prompt to understand context and intent.
  • Predicts and generates a sequence of words that best fits your request, based on its training data and learned patterns.

Initially, LLMs did not look up answers live on the web (although many now use your prompt to run an internet search and add that information to their context). But even in this case, the response is generated by “guessing” the most likely next word in a sentence, drawing on what the model was exposed to during training.
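The “guess the most likely next word” idea can be illustrated with a toy bigram model that simply counts which word follows which in a tiny training text. Real LLMs use neural networks trained on vastly more data, but the underlying statistical principle is similar:

```python
from collections import Counter, defaultdict

# Tiny "training data"; real models train on trillions of words.
training_text = "the cat sat on the mat the cat ate the fish"
words = training_text.split()

# Count how often each word follows each other word (a bigram model)
follows = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word seen in training."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" -- it followed "the" more often than "mat" or "fish"
```

Notice that the model has no idea what a cat *is*; it only knows which word tended to come next. That is exactly why a fluent-sounding continuation can be factually wrong.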

Why LLMs Hallucinate

Because LLMs generate text based on statistical patterns—not by checking facts—they can sometimes produce information that sounds plausible, but isn’t true. This is called “hallucination.” For example, if asked about a recent event that happened after its last training update, the LLM might invent details, unless it is able to access real-time data.

Hallucinations can also happen if:

  • The model’s training data was incomplete or contained errors.
  • The prompt is unclear or ambiguous.
  • The model tries to fill in gaps with its best “guess,” even if there’s no factual basis.

Remember, LLMs have no concept of facts or knowledge; they just learn statistical patterns of how likely one word is to follow another in a given context.

LLMs with Web Search: A Step Forward (But Not Perfect)

Newer LLMs can now perform real-time web searches to supplement their responses. When you ask a question, these LLMs:

  • Run an internet search to find relevant, up-to-date information.
  • Incorporate this information into their answer, often providing citations or links.

This hybrid approach—sometimes called Retrieval-Augmented Generation (RAG)—helps the LLM provide more accurate and current answers, especially for recent events or facts outside its training data.
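A highly simplified sketch of that RAG flow, assuming a toy in-memory document store in place of real web search and a crude word-overlap score in place of real retrieval:

```python
# Toy document store; a real RAG system would query the web or a vector database.
DOCUMENTS = [
    "The 2024 solar eclipse crossed North America on April 8, 2024.",
    "Little Rock is the capital of Arkansas.",
    "Python 3.12 was released in October 2023.",
]

def retrieve(question, top_k=1):
    """Crude retrieval: rank documents by how many words they share with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(
        DOCUMENTS,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:top_k]

def build_augmented_prompt(question):
    """The step that makes RAG 'retrieval-augmented': retrieved text is placed in the
    prompt so the model generates an answer grounded in that context."""
    context = "\n".join(retrieve(question))
    return (
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer using only the context above."
    )

print(build_augmented_prompt("what is the capital of arkansas"))
```

The final instruction ("answer using only the context above") is the part meant to curb hallucination; as the next paragraphs note, the model can still misread or misuse the retrieved text.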

However, even with web search capabilities, LLMs can still hallucinate:

  • They might misinterpret search results or combine information incorrectly.
  • They can still generate plausible-sounding but false statements if the search doesn’t return clear or accurate data.
  • The process of merging search results with generated text can introduce new errors or fabrications.

Key Takeaways for Users

  • Search engines retrieve: They show you real, existing web pages that match your keywords.
  • LLMs generate: They create new text based on patterns in their training data, not by looking up facts in real time.
  • LLMs can hallucinate: Because they generate responses, they sometimes make up information that sounds real but isn’t.
  • Web search integration helps: LLMs that use live web search can provide more current, accurate information—but they’re still not perfect and can hallucinate.

Understanding these differences will help you use both tools wisely—and always double-check important facts, especially when using LLMs for critical information.