Quinn McKenna on LinkedIn: How are you using AI now? Notice the question is NOT “are you using AI?”… (2024)

This post is unavailable.


More Relevant Posts

  • Moritz Kremb

    Helping you leverage AI to grow your business


    Any business sleeping on AI will fall massively behind. A new study shows that AI-leveraged workers complete tasks 25.1% quicker and with 40% higher quality. If you're a CEO, business leader or manager, start acting now.

    Here are 8 simple strategies for implementing AI in your business:

    1/ Create a prompt library
    → Create a prompt library that everyone in the company has access to
    → Create a process for how your teams can edit and iterate on prompts
    → Create reusable snippets that can be copied into your prompts to feed in context (such as a product or persona description)

    2/ Appoint an AI officer
    → Find the most AI-enthusiastic or knowledgeable team member and appoint them as your AI officer
    → Their job will be to raise awareness of AI within the company

    3/ Set up an AI Slack channel
    → Give your team the chance to share AI developments, tools and advice with other colleagues
    → Create an environment for your team to discuss ideas they have around AI

    4/ Drive an MVP project
    → It can be something as simple as implementing ChatGPT into your marketing team's blog-writing process
    → Another idea is to embed an AI chatbot on your website. There are plenty of tools that let you do this in seconds.
    → It's OK to start small!

    5/ Workshops
    → Conduct a ChatGPT workshop
    → Find a team member to do it or hire someone externally

    6/ Buy all team members a ChatGPT or Claude Plus membership
    → Probably the highest ROI you will ever get, since your AI outputs will be higher quality
    → Several companies have done this and are already seeing results

    7/ Set ChatGPT or Claude as everyone's home page in Chrome
    → This increases the likelihood your team members will use AI and helps them build a habit
    → In Chrome, go to Settings > Appearance, enable Show Home Button, and set it to ChatGPT's or Claude's URL

    8/ Turn on privacy mode in ChatGPT
    → Many companies are concerned about privacy issues but don't know that ChatGPT has privacy control features
    → In Settings, toggle off "Chat history & training" to prevent OpenAI from using your data for training

    You can find more details about the study mentioned above here:
    https://lnkd.in/eJB_fqnW

    ---

    Thanks for reading. If you enjoyed this, you'll love my weekly newsletter. I help you grow your business by leveraging the latest AI tools. Join 13,000+ subscribers here:
    https://lnkd.in/e78nZ7Wj
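The prompt-library idea in strategy 1 can be as simple as a shared file of named templates plus reusable context snippets. Below is a minimal sketch of what that could look like in code; the library layout, snippet names, and example text are all assumptions for illustration, not anything from the post.

```python
from string import Template

# Hypothetical shared library: named prompt templates plus reusable
# context snippets (e.g. a product or persona description) that anyone
# on the team can copy into their prompts.
LIBRARY = {
    "snippets": {
        "product": "Acme Notes is a note-taking app for small teams.",
        "persona": "Our typical buyer is an operations manager at a 20-50 person company.",
    },
    "prompts": {
        "blog_outline": "You are a content marketer.\n$product\n$persona\nDraft a blog outline about: $topic",
    },
}

def build_prompt(name: str, **fields: str) -> str:
    """Fill a library template with the shared snippets plus call-specific fields."""
    template = Template(LIBRARY["prompts"][name])
    return template.substitute(**LIBRARY["snippets"], **fields)

if __name__ == "__main__":
    # Everyone reuses the same vetted context instead of rewriting it each time.
    print(build_prompt("blog_outline", topic="onboarding checklists"))
```

In practice the library would live in a shared document or repository so teams can edit and iterate on the prompts together, as the post suggests.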


    206

    38 Comments


  • Dr. Ronald Wichern

    Chief Quality Officer at IBA SE


    Hi Georg Digel, great contribution! Thanks for taking the effort. You are guiding the whole CAPA community with it. What I learned from it:
    - The prompt is much more detailed than I would have expected. However, once established, most of it could be reused.
    - The problem description might require some reformulation to get the best results.
    - While the results are quickly available and bring up new perspectives, they have to be evaluated cautiously for every single CAPA.
    - The effort addresses „only“ one step of the CAPA process yet, but extension is only a question of time. Georg Digel, do you dare to try the next step as well?
    - Thinking forward, the prompts might be provided as a GPT chatbot. What you have demonstrated here for CAPA could also be useful for analysis steps for NCs or complaints. Audit reports might be checked for completeness. Maybe other readers will be encouraged to try it for other fields. (If this work has already been started, share links accordingly.)

    Best regards,
    Ronald

    4

    2 Comments



    Why AI won’t replace me …… but someone who knows how to use it will. ;)

    My FOMO grows as my LinkedIn feed fills up with AI content. FOMO, or Fear of Missing Out, is that feeling of being left out when something exciting is happening without you. Everyone is:
    1. posting the same cheat sheet,
    2. explaining “prompt frameworks” and
    3. doing EVERYTHING in ChatGPT, Canva or Midjourney nowadays.

    So, I gave in to peer pressure and started exploring AI myself. As most of my content is about Corrective and Preventive Action Management (CAPA), I obviously try to tailor my information to this specific use case. A while ago, I posted about how proper problem statements can prevent headaches later in the CAPA process. I also tried to get some CAPA-specific information out of ChatGPT. Now, I used ChatGPT again to create audit-proof problem statements, and I think I did a better job than last time!

    To be honest, I don’t believe AI is quite there yet to use without questioning, especially in a highly regulated industry like medical devices. Patient safety and device efficacy are crucial, requiring our highest attention. So let's be cautious with the tools we use for decision making. However, I am certain that one day someone will figure out exactly how to use it. By doing so, that person will maybe 10x or 100x their output. Skills like data analysis, risk evaluation, or quality documentation might become outdated as AI could perform them much more effectively. That's the reason I think it's important for us to learn new skills … to move from creating content to reviewing it. This means that, instead of writing things like problem statements, we should focus on checking whether they accurately describe the issue. Here is where the added value might happen … and not in the actual “writing” part.

    To give a different example: as a proud manual-shift driver in Germany, I wonder about the relevance of this skill in 10 or 20 years when cars may drive themselves.

    Now let's jump into AI and CAPA! Let's review what I did differently (hint: I used a muuuch longer prompt) and share my learnings. You can try it yourself – just copy & paste the prompt below.

    DISCLAIMER:
    - Do not use sensitive information or share any confidential data when using open AI models like ChatGPT.
    - Take the output with a grain of salt – refine and update it to match your specific needs.
    - Have a seasoned quality expert in your company review the generated problem statement if you want to use it.

    ____________________________

    If you liked this post, follow for weekly CAPA content. Georg Digel

    Here is the prompt (I used ChatGPT 4): It is too long to post here, so please look in the comment section.
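The author's actual prompt lives in the post's comment section and is not reproduced here. Purely as a rough illustration of the same idea (not Georg Digel's prompt), a problem-statement request could be sent through the OpenAI API like this; the role text, rules, model name, and placeholder issue are all assumptions.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Hypothetical structure: a role, explicit rules for an audit-proof problem
# statement, and a placeholder issue. Never paste confidential data here.
system = (
    "Act as a medical-device quality engineer. Rewrite the issue below as a "
    "CAPA problem statement that states what happened, where, when, and the "
    "observed impact, without speculating about root cause."
)
issue = "Three units of product X failed final inspection last week."  # placeholder only

response = client.chat.completions.create(
    model="gpt-4o",  # assumption; the post used ChatGPT 4 via the web UI
    messages=[
        {"role": "system", "content": system},
        {"role": "user", "content": issue},
    ],
)
print(response.choices[0].message.content)  # a draft for a quality expert to review
```

As the post's disclaimer says, the output is a starting point: a seasoned quality expert still has to review whether the statement accurately describes the issue.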

    17

    7 Comments


  • Nikita Safronov

    CIO and IT Leader Strategic Partner - Financial Sector


    No matter where you look, the data paints a single picture: If you sleep on the practical benefits of AI, your IT strategy might as well rest in peace. #gartnerit #ai #generativeai

    10

    1 Comment


  • Clwyd Probert

    CEO & Founder, AI Consultant at Whitehat Inbound Marketing Agency (Diamond HubSpot Partner)


    Cracking insights, Moritz! You've got the right idea—AI isn't some futuristic pipe dream; it's here and it's now. At Whitehat, we're all about combining technology with strategy. We believe that simply adopting AI without a solid game plan is like putting the cart before the horse. Your tips are a great place to start for any business looking to integrate AI meaningfully.

    Your point about appointing an AI officer stands out. Do you think this role should be a separate position, or could it be a part-time responsibility for someone already on the team?

    1


  • 🍣 Rob Estreitinho

    Founder & Head of Strategy, Salmon Labs.


    I've been using Claude AI a lot. The monthly paid subscription is worth it. (Btw, Claude >>> ChatGPT on almost all levels, don't @ me.) It genuinely helps me speed up my process and get to sharper, more specific questions and answers as I go.

    Three particular use cases are working for me right now.

    1/ Audience research.
    Imagine you need to think about a hard-to-research audience. One simple trick: find a few PDFs that capture some of that audience's mindset (can be articles, book extracts, etc). Then load them into Claude, ask it to think like the person reflected in these PDFs, and then ask questions. It ain't synthetic user material, but it's damn close enough. Primin' precedes promptin'.

    2/ Thought starters.
    This one's a classic, but I recently tried it with a twist. I gave Claude a series of cultural references that were in a brief, and asked it to riff on some ideas based on them. And you know what, it worked out pretty well. Don't just ask for thought starters, ask for thought starters based on the types of references that fit a particular task. (And if you avoid using advertising references, it probably takes you to even more useful places.)

    3/ Audience responses.
    A logical next step from either thought starters or specific ideas you're working with. This one's simple: describe the idea, and then ask Claude what a specific audience might take away from it. It's remarkable how often we forget this, and just worry about whether an idea is good or not. A far more useful starting point: whether an idea works as the brief intended or not.

    So there you go, three simple use cases that turn Claude AI into a pretty handy strategy buddy. LLMs can't do your full job for you, but they can sure help you do parts of your job 10x better.

    Ps. Check the comments section for more goodies, yeah?
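The "priming with PDFs" trick from use case 1 can also be done programmatically: extract the PDF text and put it in front of the questions. A minimal sketch with the Anthropic Python SDK and pypdf, where the file name, model alias, and example question are assumptions.

```python
from anthropic import Anthropic   # pip install anthropic
from pypdf import PdfReader       # pip install pypdf

def pdf_text(path: str) -> str:
    """Pull the raw text out of a PDF that captures the audience's mindset."""
    return "\n".join(page.extract_text() or "" for page in PdfReader(path).pages)

context = pdf_text("audience.pdf")  # hypothetical file name

client = Anthropic()  # expects ANTHROPIC_API_KEY in the environment
message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # assumption; any recent Claude model would do
    max_tokens=500,
    # Prime the model to answer in the voice reflected in the source material.
    system="Answer as the kind of person reflected in the material below.\n\n" + context,
    messages=[
        {"role": "user", "content": "What would make you skeptical about switching banks?"},
    ],
)
print(message.content[0].text)
```

The same pattern covers use case 3: swap the question for a description of the idea and ask what this audience would take away from it.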

    81

    3 Comments


  • Seth Hardy

    Let's Build the Future of Insights


    Over the course of many conversations, I have refined the way I explain how AI works, at least in an Insights context. The phrase that seems to resonate most with audiences is that AI "scales codified expertise." To understand this, let's work backwards through that phrase.

    Expertise:
    By this I simply mean domain knowledge. The way to unlock true value from AI tools is to marry a base model with domain-specific knowledge related to performing a specific task. This is why one of the most effective prompts to use with ChatGPT is "Act as a..." or similar.

    Codified:
    I'm tempted to just define this as "organized" but it really means something a bit more formal: let's say "arranged according to a plan." If you think of AI like a person, this becomes clear. Imagine a scenario where three people hire the same contractor to build a house:
    - The first person simply says, "Build me a house."
    - The second person provides a general description of what they want the house to look like. They say something like, "Build me a 3 bedroom house with an attached garage and a deck."
    - The third person provides a detailed description as well as clear standards they would like followed. Further, they document their instructions with pictures and even layouts of a few example houses that conform to their ideal standards.
    Who will end up being happier with their house?

    Scale:
    When people talk about scale in a business context, they are generally referring to revenue growth and, more specifically, growing revenue at a pace that outpaces the rate of costs needed to produce it. In this context, I mean increasing data outputs at a rate that exceeds data inputs. For example, if I train an AI app to produce survey drafts to my specifications, I will need to put some amount of effort into telling the app how I want it to work and checking to make sure it is providing the output I want (pretty much like training a person). Once I have the app working the way I want, I can now use it to produce as many surveys as I want, without having to invest the time and effort (i.e., cost) in training it over and over. So, I can produce a theoretically infinite number of surveys for the relatively marginal "cost" of providing it with the details of the specific project I'm working on.

    Hopefully this is a helpful perspective for Insights colleagues.

    P.S. Let me know if you view it differently. I would be happy to discuss.
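One way to picture "codified expertise" in practice: the role, standards, and worked example are written down once in a reusable spec, and only the project brief changes per call. A minimal sketch under those assumptions; the spec fields, survey standards, and model name are illustrative inventions, not anything from the post.

```python
from dataclasses import dataclass
from openai import OpenAI  # pip install openai

@dataclass
class CodifiedSpec:
    """Domain expertise written down once: a role, explicit standards, and an example."""
    role: str
    standards: list[str]
    example: str

SURVEY_SPEC = CodifiedSpec(  # hypothetical spec for survey drafting
    role="Act as a senior market-research survey designer.",
    standards=[
        "Max 12 questions, 5-point Likert scales where possible.",
        "No double-barreled or leading questions.",
        "End with two open-ended questions.",
    ],
    example="Q1. How satisfied are you with ... (Very dissatisfied - Very satisfied)",
)

def draft(spec: CodifiedSpec, project_brief: str) -> str:
    """The reused, codified part goes in the system message; only the brief is per-project."""
    client = OpenAI()  # expects OPENAI_API_KEY in the environment
    system = "\n".join([spec.role, *spec.standards, "Follow the style of:", spec.example])
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": project_brief},
        ],
    )
    return response.choices[0].message.content

# Usage: draft(SURVEY_SPEC, "Churn drivers for a B2B invoicing tool, SMB owners.")
```

The scaling claim in the post maps to this shape: the one-off cost is writing and checking the spec; each additional survey costs only the brief.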

    12


  • Tim Dasey, Ph.D.

    Education for an AI world ~ Keynote speaker ~ AI Strategy and policy ~ Curriculum Development ~ Professional Dev. ~ Educational Gaming ~ Author


    Articles like this really bother me. Kevin Roose's NYT articles and Hard Fork podcast are quite informative and balanced. But this article from another reporter there has misleading information.

    "I don't doubt that A.I. will eventually be a big deal" - Huh? It's not now?

    "Our editor recently asked us to list impressive things people were doing with ChatGPT, and we really had to think about it." They then list the best uses of chatbots as first-draft writing, a dentist who uses it for emails and a friend producing a meal plan, offset by emphasizing AI hallucinations and Michael Cohen's AI-generated legal brief? - Had the reporter asked actual AI experts instead of fellow reporters, I expect the most impactful use mentioned would be coding. Software that can be developed in a fraction of the time is a big deal. As for writing... just as for coding... humans need to curate most uses, and getting good output from AI requires skill most who dabble with it do not have.

    "The worriers fret that because the systems learn from more data than any human could consume, they could wreak havoc as they are woven into stock markets, military systems, and other vital systems." - Both parts of the sentence are correct, but the conclusion doesn't follow from the premise. It's not that the AI learns from more data; it's that what it learns may or may not be aligned with human interests.

    "But all the talk of these hypothetical risks can reduce the focus on more realistic problems." (they then describe propaganda risk) - Look, can't we walk and chew gum at the same time? There are already realized risks like propaganda. But that doesn't mean there aren't giant issues with controlling machines that get smarter than people at everything. Most in AI believe that is coming quite soon, so the risks will be too. This article dismisses those concerns, but they are quite real (and not all about "fears that A.I. will begin killing people").

    "Regulators need to educate themselves from a broad range of experts, not just big tech." - Sure, listen to AI experts outside of big tech. Probably wise given profit motives. But the context of this sentence is whether "big societal problems" are a concern, and that means understanding where AI is going. Only someone who understands the tech and ongoing research can speak to that. AI experts are speaking out about their own creation. Pay attention. When you want to understand medical or legal issues, talk to a doctor or lawyer. When you want to understand AI, you must talk to AI experts.

    "Regulators need to understand, for instance, that the threat to humanity is overblown, but other threats are not." - We don't know the threat level to humanity. We're entering uncharted territory. But the fact that the best AI experts are concerned should make everyone sit up and pay attention.

    #ai #aicommunity #nyt #education

    A.I. Questions, Answered https://www.nytimes.com

    2


  • Rafał Bielski

    New Media, AI and Automation Expert | Consultant | Digital Transformation for Companies Worldwide


    In my conversations with both business professionals and private individuals, I've encountered a wide range of opinions about the AI revolution we're witnessing. Some people claim that it doesn't work, others are indifferent as they have more pressing matters than worrying about AI, some fear it because they don't understand it, and then there are those who recognize how the world is changing before our eyes and see the potential of AI. However, all these groups share one common trait: fear. For some, this fear manifests as trivializing the AI revolution. For others, it's a general fear of change, fear of the unknown, or fear of losing their livelihoods.

    Is this fear rational?

    Let's start by addressing whether AI truly works. Beyond generating text of varying quality with ChatGPT or creating impressive visuals and videos, can we use these tools in a meaningful way? From my experience, the answer is a resounding yes.

    Most users interact with AI through limited user interfaces (chatbots, prompt windows), resulting in a prompt-response cycle that handles single tasks such as writing text, generating images, or creating music. While this already offers significant capabilities, it requires the right approach and knowledge to achieve high-quality results. Many users stop at this stage, but the true utility of AI begins with AI agents capable of performing multiple tasks simultaneously.

    Imagine you want to create a social media video and you have an AI agent that generates the video based on your prompt. The agent could take user input, draft a script using ChatGPT, generate image prompts and corresponding images using Midjourney, animate those images, compile them into a video, add a voice-over from ElevenLabs, and finally publish the post on social media at your specified time. This process can be repeated indefinitely. While the quality of the generated content is still up for debate, the rapid progress in this field is undeniable. I'm currently working on such a system, and it proves highly useful for specific applications like informational videos or product promotions.

    Another example is the XQuiz Pro system, which I developed in about 100 working days with the help of AI. In this case, AI assisted in developing code modules, generating graphic elements, and creating text content. By leveraging small AI agents performing multiple tasks simultaneously, I was able to accomplish multidisciplinary tasks that I previously had no expertise in.

    After two years of intensive AI use, I can confidently say that there are areas where properly applied AI works remarkably well. Our fears, regardless of how they manifest, are justified. A significant change is coming that will impact approximately 90% of the global economy. Therefore, it's crucial to stay informed and keep up with these trends to avoid being caught off guard.

    What are your thoughts on this topic? What are your concerns?
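Structurally, the video-agent pipeline described above is a chain of steps passing one job object along. The sketch below shows only that orchestration shape; every function is a hypothetical stub standing in for a real integration (script drafting, image generation, animation, voice-over, scheduling), and none of it is the author's actual system or any service's real API.

```python
from dataclasses import dataclass

@dataclass
class VideoJob:
    """State passed from step to step in the pipeline."""
    topic: str
    script: str = ""
    image_prompts: list[str] | None = None
    video_path: str = ""

def draft_script(job: VideoJob) -> VideoJob:
    job.script = f"[LLM-drafted script about {job.topic}]"  # would wrap a ChatGPT call
    return job

def make_image_prompts(job: VideoJob) -> VideoJob:
    # Would turn script beats into prompts for an image generator.
    job.image_prompts = [f"scene for: {line}" for line in job.script.splitlines()]
    return job

def render_and_voice(job: VideoJob) -> VideoJob:
    # Image generation, animation, compilation, and voice-over would happen here,
    # each wrapping its own tool; this stub only records a placeholder output path.
    job.video_path = f"/tmp/{job.topic.replace(' ', '_')}.mp4"
    return job

def publish(job: VideoJob) -> None:
    print(f"Scheduling {job.video_path} for posting")  # placeholder for a scheduler

def run_pipeline(topic: str) -> None:
    job = VideoJob(topic=topic)
    for step in (draft_script, make_image_prompts, render_and_voice):
        job = step(job)
    publish(job)

run_pipeline("three tips for onboarding new hires")
```

The point of the shape is the one the post makes: once each step is wired up, the whole chain can be rerun indefinitely on new topics.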


    3


  • Heather Noggle

    Integrator of Tech and Human Effort | Top Writing Voice | Process and Cybersecurity | Writer | Data Integration | SMB Advocate | Systems Thinker and Innovator | Analogy Queen | Technical Brand Strategy


    Failed again! Curious why this task is so difficult for AI.

    Both ChatGPT 4.0 and Google's Gemini-updated Bard failed today's test. It's the Noggle poem test I run every month or two. Try it yourself; here is the prompt (defined by me, and refined by ChatGPT 4 for what it calls the best chance of success).

    -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
    Please write a Noggle poem. Noggle poems follow a structure.
    First Line: This line should have exactly 4 words. It sets the theme and introduces the rhyme scheme.
    Second Line: This line should have 3 words, and its final word should rhyme with the last word of the first line.
    Third Line: This line should contain 2 words, with the final word rhyming with the ending word of the first line.
    Fourth Line: This is a single word that rhymes with the last words of the previous lines.
    -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

    So where do you think it goofs consistently? It's always the word count per line. (It's a strong rhymer.) And when you ask it to validate a Noggle poem specifically on word count, it will often miscount: it'll show you the line and then count the words wrong.

    I find it important to highlight simple things AI struggles with. In the news lately we've seen that confidential data has been leaked into public search results (Google's Bard, Google search results). It can leak all the Noggle poems it wants, but sometimes users correspond with AI as though it were a person, or they feed private data into the AI and ask it for analysis. Two risks there:
    1) Private data is fed in to train the AI alongside less sensitive data.
    2) Exposure of that private data.

    Generative AI isn't a private thing, so keep that in mind when you're interacting with it. It's still in its infancy. Remove personally identifying information from data you give it. As the Noggle poem failures show, generative AI (when it's generating) is always thinking about its next word. Only its next word.

    Now, if you can refine the prompt to where it works consistently and proves me a liar, I'll figure out some sort of accolade for you. DM me the prompt, and we'll talk. Would love your thoughts.

    #generativeAI #bard #chatgpt4 #cybersecurityawareness
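Since the failure mode is always word count, the check itself is easy to automate outside the model. A small sketch that validates the 4/3/2/1 word structure from the prompt above and adds a naive suffix-based rhyme check; the rhyme heuristic and the sample poem are my assumptions, not part of the Noggle definition.

```python
def validate_noggle(poem: str) -> list[str]:
    """Check the 4/3/2/1 word counts and naively flag last words that may not rhyme."""
    lines = [line.strip() for line in poem.strip().splitlines() if line.strip()]
    if len(lines) != 4:
        return [f"Expected 4 lines, got {len(lines)}"]

    problems = []
    expected = [4, 3, 2, 1]
    for i, (line, want) in enumerate(zip(lines, expected), start=1):
        got = len(line.split())
        if got != want:
            problems.append(f"Line {i}: expected {want} words, got {got}")

    # Naive rhyme heuristic: last words should share their final two letters.
    last_words = [line.split()[-1].lower().strip(".,!?;:") for line in lines]
    anchor = last_words[0][-2:]
    for i, word in enumerate(last_words[1:], start=2):
        if word[-2:] != anchor:
            problems.append(f"Line {i}: '{word}' may not rhyme with '{last_words[0]}'")
    return problems

poem = "The cat sat flat\nwearing a hat\nquite fat\nsplat"
print(validate_noggle(poem) or "Looks like a valid Noggle poem")
```

A checker like this can score the model's attempts automatically, which makes the monthly test (and any reader's attempt at a better prompt) easy to verify.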


    31

    37 Comments

