Testing AI for gaming CI: A human expert's perspective

I’ve been doing gaming CI for years, and I wondered whether AI could do it well enough to replace me. So I added a new layer to my latest rabbit hole: Disney IP games. With Disney spreading its IP licensing across mobile gaming, I went on a tour of both old and new titles. After collecting my own insights, I asked Gemini and GPT to do the same.

For the test, I tried three prompts:

1. Basic — asking for an analysis of three games, with specific attention to IP integration.
2. Persona — defining the persona for the job, describing its task in detail, and providing an example of a human workflow.
3. Guided — adding to V2 some further thinking guides, for example: “Marketing materials show things that are not in the game, but communicate something important to the user. We can learn a lot about what users are attracted to from the ads that work vs. the ads that fail.”

The results: suspiciously similar insights. The insights from both AI models lacked the depth a human expert would provide. As a former journalist, I’m trained to read consensus between sources as a sign of truth. With LLMs, it seems to indicate a shared limitation instead.

My conclusions:

- The generalization ability you’d expect from an expert is very much missing here.
- If you’re an insight generator at your core, a good prompt and a few edits will give you a great head start.
- The path to AI-generated market research is complex, but I’ll keep looking.
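For readers who want to reproduce this kind of layered prompt test, here is a minimal sketch in Python. The prompt wording, game titles, and function names are my own illustrative assumptions, not the author's actual prompts; only the three-layer structure (basic → persona → thinking guides) mirrors the post.

```python
# Sketch of the three-variant prompt experiment described above.
# All prompt text below is illustrative, not the author's exact wording.

GAMES = ["Game A", "Game B", "Game C"]  # placeholder titles


def build_prompts(games):
    """Return three prompt variants, each layered on the previous one."""
    # V1: basic analysis request.
    basic = (
        "Analyze these mobile games, with specific attention to how each "
        f"integrates its licensed IP: {', '.join(games)}."
    )
    # V2: V1 plus a persona, a detailed task, and an example human workflow.
    persona = (
        "You are a veteran gaming competitive-intelligence analyst. "
        "Your task: compare titles and report IP-integration insights the "
        "way a human expert would (example workflow: store-page review, "
        "first-session notes, monetization audit).\n\n" + basic
    )
    # V3: V2 plus extra thinking guides.
    guided = (
        persona
        + "\n\nThinking guide: marketing materials show things that are not "
        "in the game, but communicate something important to the user. "
        "Compare the ads that work vs. the ads that fail."
    )
    return {"v1_basic": basic, "v2_persona": persona, "v3_guided": guided}


prompts = build_prompts(GAMES)
for name, text in prompts.items():
    print(f"{name}: {len(text)} chars")
```

Each variant would then be sent to the models under test (e.g. via the Gemini and OpenAI APIs) and the outputs compared side by side against the human baseline.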


👑 Good news for us Humans 😊
