A few human-generated guidelines for using generative AI

Well, hello, human friends! Let’s dive into this post assuming you're already contending with the potential for tsunami-like disruption that generative AI poses for knowledge workers worldwide. Marketing and communication agencies everywhere must quickly get comfortable, competent, and creative with generative AI.

It’s smart to use it, and dumb to trust it (as it stands today).

At Nimble Works, we focus on the intersection of high tech and high science – and AI is a core offering for many of our clients. We are integrating more generative AI into our methods but want to do so adventurously AND responsibly. We want it to augment and accelerate our human talents, but we know we can’t yet trust it to produce content ethically and reliably.

As few official or even legal guardrails exist around the use of generative AI, we've established a short list of principles (also recently featured in PM360) to help our team navigate the use of artificial intelligence with human intelligence and integrity – for the benefit of our customers and our audiences. These principles may evolve, but we hope you’ll find them helpful as well.

Nimble Works’ guiding principles for using generative AI (as of March 2023)

  1. We commit to transparency about when we are using generative AI.
    Given the speed at which the field is evolving, it’s hard to know where we may be headed or what new challenges we may encounter with this technology. By sharing how and when we use generative AI – with our team members and clients – we’ll help each other get smarter and avoid potential pitfalls.

  2. Humans define the problems; generative AI may help solve them.
    The greatest successes in marketing and communications come from asking the right questions, which may differ significantly from the ones initially posed in a brief. Healthy skepticism and innate curiosity are intrinsically human and richly productive. Empathetic, human-centered direction also helps to prevent the propagation of bias. But generative AI can help accelerate research or divergent thinking, for instance, so that we get to solutions more efficiently. 

  3. Humans create original model content; generative AI may derive ancillary variations.
    Given unresolved questions about intellectual property, we believe all core content should result from original human creation. This will help prevent accidental infringement. Once core content has been created, however, we believe it can be smart to use it to inform short-form derivative variations crafted by generative AI, such as social media posts or campaign emails.

  4. Generative AI is not an authority.
    Generative AI’s knowledge base is an amalgam of sources – and a black box at that. In healthcare, using primary research and being transparent with credible sources is imperative. In other words, accuracy alone is not enough: facts must be not only accurate but also credible, primary, peer-reviewed, etc.

  5. No content is created without human oversight; no content is final without human validation.
    This is the most necessary of all our principles. Generative AI is not to be trusted for factual accuracy, unbiased content, or empathy. The leash must be short and vigilance constant.


Generative AI comes with jaw-dropping strengths and grave weaknesses.

The current consensus around using generative AI for agency activities is that:

  • It is great, given the proper prompts, at: 

    • Summarizing research

    • Analyzing data

    • Producing content for a wide range of needs in a wide range of styles

    • Generating ideas (e.g., for headlines or interview questions)

    • Editing for grammar and concision

    • And much more

  • It is NOT so great at:

    • Being reliably accurate

    • Providing sources

    • “Behaving” ethically (see this NY Times article about a journalist’s creepy interaction with Bing’s chatbot, for instance)

  • AND it presents uncharted legal risks, particularly in intellectual property infringement. In ChatGPT’s own words: “Generative AI can create content that may infringe on existing copyrights, trademarks, or patents. For example, if the AI-generated content is too similar to an existing copyrighted work, it may be considered an infringement.”
