Thursday, July 17, 2025

AI, Ethics, and the Reality of Responsible Research

As a university-trained researcher and a reality-based IT professional with decades of experience, I’ve learned that the closer research comes to human life, the more precise and careful it must be—except, of course, when ideology distorts the outcomes.

See also my soon-to-be-released blog on paranoia in America today. It's not paranoia if you're simply observing and commenting on what's actually happening. Those who support what's going on will call you paranoid or hateful for disagreeing with them, because that's their mindset: narcissism and a lack of compassion, for fun and profit.

Also, see my blog on ChatGPT and Carbon: Why I Still Use It (Responsibly) for Research and Writing.

Before I get into this, I want to share a very interesting two-part podcast from Janes. It discusses using AI in a very serious environment: the national defense and security realm.

"Janes delivers the world's most complete collection of defense and security data and insight to support your mission." I've been following them since the 1990s. Quality products.

AI for automated OSINT reconnaissance - part one - Jim Clover OBE, Varadius Ltd, joins Harry Kemsley and Sean Corbett to take a deeper look into the practical uses and implications of AI for the defence intelligence community. They explore its real-world effectiveness in gathering and analysing intelligence, and why human oversight is still critical to ensure the intelligence it produces is both ethical and valuable.

AI for automated OSINT reconnaissance - part two - Jim Clover OBE, Varadius Ltd, continues to explore the evolving landscape of artificial intelligence (AI) in the intelligence community with Harry Kemsley and Sean Corbett. They discuss the fine line between innovative applications of AI and the critical importance of human oversight in intelligence analysis: how AI is reshaping intelligence gathering, the risks of over-reliance on technology, and the vital role of ‘prompt engineering’ for accurate and ethical outcomes.

OK...

About ideology distorting outcomes... That distortion is how we end up with phenomena like Navy SEALs—elite figures in life-or-death operations—gravitating toward extremism, typically on the political right. It’s not random. It’s what happens when the rigor of reality is replaced by belief-driven narratives. That dynamic has taken over far too much of America today, from the White House, Congress, and SCOTUS on down to the cult of personality being foisted upon us.

So, we see a similar dynamic in politics. The more a president is governed by polls rather than principles, the worse they tend to be. Governance becomes reactive, not visionary. As stated by Douglas Lansford “Doug” Bailey (October 5, 1933 – June 10, 2013), a prominent American political consultant and strategist:

“It’s no longer likely that political leaders are going to lead. Instead, they’re going to follow.” 

This brings us to artificial intelligence and ethics. 

When using AI for research and analysis, the ethical responsibility lies squarely on the human researcher. It’s not about what the AI does—it’s about what you do with it. AI isn’t thinking. It’s processing. It doesn’t have beliefs—it reflects back what it's been fed. 

Always remember that we, the users of AI, are the gatekeepers, the retainers of ethics, morality, decency, and reality. When I use Wikipedia, it's a starting place, just as a Google search is, as many things are. So double-check your results. Use your mind, obviously.

AI can be immensely powerful, especially for tasks like enhanced information retrieval—essentially a supercharged Google or Wikipedia. But when an AI’s training data is corrupted—by bias, misinformation, or manipulation—so are its outputs.

Prompts matter. A lot. The quality and context of the prompts determine the usefulness of the AI's responses. For example, you might instruct it: “Don’t use your stored knowledge base for this next item.” That’s not a joke—how you phrase and frame a request affects everything.

Prompts are the instructions or inputs you give to an AI to guide its response. In simple terms, a prompt tells the AI what you want it to do—whether that’s answering a question, generating text, analyzing data, or completing a task.

A well-crafted prompt provides clear, specific context and can significantly affect the quality and accuracy of the AI’s output. Think of it as setting the stage for the AI to perform.

Example of AI prompt: "Tell me about COVID and the brain."

Why it’s ineffective:

  • Too vague—“COVID” could mean any aspect of the disease.

  • Doesn’t specify what about the brain (structure? mental health? short- or long-term effects?).

  • Lacks timeframe or depth—AI might return overly broad or outdated info.

Example of better AI prompt: "Summarize recent peer-reviewed studies (2022–2024) on the long-term neurological effects of COVID-19, focusing specifically on cognitive decline and brain inflammation."

Why it’s effective:

  • Narrow scope (neurological effects).

  • Time-bound (2022–2024).

  • Specifies desired focus (cognitive decline, inflammation).

  • Signals a research-oriented response (not casual info).
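If you're working with ChatGPT (or a similar model) from a script rather than a chat window, the same framing carries over directly. Here's a minimal sketch, assuming the OpenAI Python SDK and an illustrative model name; both are my own assumptions for demonstration, not part of any particular workflow:

    # A minimal sketch: sending the focused research prompt through the
    # OpenAI Python SDK (v1.x). The model name is an illustrative assumption;
    # substitute whichever model you actually use.
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    prompt = (
        "Summarize recent peer-reviewed studies (2022-2024) on the long-term "
        "neurological effects of COVID-19, focusing specifically on cognitive "
        "decline and brain inflammation."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )

    print(response.choices[0].message.content)

The point is the same either way: the specificity lives in the prompt text, not in the tooling around it.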

You can also prompt an AI not to rely on its preset knowledge base (i.e., its built-in training data) by using phrasing that tells it to use only the data you provide, or to act as if it has no prior knowledge. Keep in mind, however, that most AI models can't fully "disable" their base knowledge, but they can be guided to ignore or minimize it.

Example Prompts to Minimize Use of Preset Knowledge:

  1. "Only use the following information to answer. Do not use your built-in knowledge base."
    (Then provide the info.)

  2. "Assume you have no prior knowledge about this topic. Base your answer only on this document/text:"
    (Paste or refer to the content.)

  3. "Use only the content provided. Do not guess or supplement with external or stored information."


🔍 Use Case Example:

Prompt:
"Using only the data in the paragraph below, summarize the main concerns about AI regulation. Do not include any other information not found in the text."

[Insert paragraph about AI regulation here]
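The same idea works from code: put the "use only this text" instruction where the model treats it as a standing rule, and pass your source material alongside the question. Here's a minimal sketch, again assuming the OpenAI Python SDK and an illustrative model name. As noted above, this steers the model toward the provided text; it doesn't truly erase its built-in knowledge.

    # A minimal sketch: constraining the model to a supplied passage by putting
    # the "use only this text" instruction in a system message. This steers the
    # model toward the provided source; it cannot truly disable its training data.
    # The model name is an illustrative assumption.
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    source_text = "..."  # paste the paragraph about AI regulation here

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {
                "role": "system",
                "content": (
                    "Use only the text provided by the user. Do not guess or "
                    "supplement with external or stored information. If the "
                    "answer is not in the text, say so."
                ),
            },
            {
                "role": "user",
                "content": "Using only the data in the text below, summarize the "
                           "main concerns about AI regulation.\n\n" + source_text,
            },
        ],
    )

    print(response.choices[0].message.content)

If the answer starts volunteering facts that aren't in your pasted text, that's your cue to tighten the instruction or to double-check the output against the source yourself.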

It’s also worth repeating that when we say “AI is thinking,” that’s a metaphor. A shortcut. What’s actually happening is computation—not cognition.

Is AI dangerous? Not inherently. Not yet. 

What IS dangerous is laziness. What’s dangerous is that Silicon Valley ethos of “move fast and break things,” especially when what’s being broken is truth, safety, ethics, or social trust.

In the hands of developers, AI can be told to generate tools, write code, or automate functions it doesn’t “understand.” That’s fine—in a sandbox. But if those tools get deployed outside controlled environments, into the real world without oversight? That’s when we enter dangerous territory.

So don’t fear AI. Fear our misuse of AI. And above all, fear the abandonment of ethics (or reality) in pursuit of convenience, profit, or control.



Compiled with the aid of ChatGPT

