Pseudo Channel Services

Readers might be surprised to learn that I consider one of the problems of using AI to provide a security service to be the AI itself. The AI is either constantly interacting with external resources such as data centres or, even if it operates entirely locally, its giant footprint likely still extends far and wide across the internet. This is to say, external parties can infer what the AI is doing: “Hi, I need some background information on xxxxxxx.” So it is difficult to control how the AI goes about its business. But the biggest threat is perhaps when confidential company data is shared with the AI, which may subsequently store or compile that data at external data centres.

Generally speaking, when using AI, I don’t use the real details of specific personnel, vendors, IP addresses, domains, and of course passwords. I use the AI strictly for its processing capabilities. For all the AI knows, I am using entirely hypothetical or virtual data. In this way, if its data centre, owner, designer, or technicians take a special interest in the company, they would only ever have pseudo details. So the first rule is to pseudo-convert any data such that the AI cannot trace it back to the actual company. Of course, the AI can take a wild guess. But unless the company itself has already made its data public, there should always be a layer of doubt that prevents the AI from making convincing assertions: e.g. if the pseudo data appears in an external database, a human should have considerable difficulty pinpointing exact identities.
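
To make the idea concrete, here is a minimal sketch of what pseudo-conversion might look like in practice. The mapping, names, address, and domain below are entirely hypothetical; the point is only that the lookup table stays local and the AI only ever sees the substituted values.

```python
# Minimal sketch of pseudo-conversion. Real identifiers are swapped for
# pseudo ones before anything is shown to the AI; the lookup table itself
# never leaves the local machine. Every name, address, and domain below
# is hypothetical.

# Private table: real detail -> pseudo detail (kept locally, never sent out).
PSEUDO_MAP = {
    "alice.ng@acme-corp.com": "user017@company-x.example",
    "Alice Ng": "Employee 17",
    "10.20.30.40": "192.0.2.77",        # documentation-range stand-in address
    "acme-corp.com": "company-x.example",
}

# Reverse table, so an AI response phrased in pseudo terms can be read back.
REVERSE_MAP = {pseudo: real for real, pseudo in PSEUDO_MAP.items()}


def pseudo_convert(text: str) -> str:
    """Replace every known real detail with its pseudo counterpart."""
    for real, pseudo in PSEUDO_MAP.items():
        text = text.replace(real, pseudo)
    return text


def restore(text: str) -> str:
    """Map pseudo details in an AI response back to the real ones."""
    for pseudo, real in REVERSE_MAP.items():
        text = text.replace(pseudo, real)
    return text


if __name__ == "__main__":
    note = "Alice Ng (alice.ng@acme-corp.com) reported a phishing mail from 10.20.30.40."
    print(pseudo_convert(note))
    # Employee 17 (user017@company-x.example) reported a phishing mail from 192.0.2.77.
```

Whoever holds the table can always map a pseudo detail back to the real one; the AI, and anyone peering over its shoulder, cannot.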

The use of AI comes in when I assemble company data in quadradoc format, which AIs appear to have little difficulty understanding. The quadradocs are generally meant to hold pseudo organizational details – often quite diverse due to my holistic or ecosystemic lens. Quadradocs can contain information about personnel, training, network devices, known or suspected hacker attacks, and all the quotidian events that tend to form the foundation of major developments. The AI might be asked, “Based on this scenario, can you explain how xxxxxxx might be vulnerable to a social engineering attack?” I have never met an AI that doesn’t like giving explanations. So it will do so. This explanation can itself be recorded in a quadradoc. Then, I must point out, although the AI’s responses are recorded, I usually give each response some thought and add my own comments. AIs have a great deal of knowledge. But they aren’t sneaky or cunning in the way that millions of years of survival as a species have made humans.
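
The quadradoc format itself is not spelled out here, so the sketch below is purely illustrative: an assumed record structure (hypothetical class and field names) that bundles the pseudo scenario, the question put to the AI, its recorded explanation, and my own comments.

```python
# Purely illustrative: the actual quadradoc format is not described in this
# text. The class name, fields, and sample content below are assumptions
# meant only to show how a pseudo scenario, a question, the AI's recorded
# explanation, and my own comments might sit together in one record.

from dataclasses import dataclass, field


@dataclass
class QuadradocEntry:
    scenario: str                                   # pseudo organizational details
    question: str                                   # what the AI is asked
    ai_explanation: str = ""                        # the AI's recorded response
    analyst_comments: list[str] = field(default_factory=list)  # my added thoughts


entry = QuadradocEntry(
    scenario=("Employee 17 in accounting approves vendor invoices; "
              "the last phishing-awareness training ran two years ago."),
    question=("Based on this scenario, can you explain how Employee 17 "
              "might be vulnerable to a social engineering attack?"),
)

# The explanation is recorded, then annotated rather than taken at face value.
entry.ai_explanation = "(the AI's explanation, captured after the pseudo data is submitted)"
entry.analyst_comments.append(
    "Plausible, but it underestimates how an insider would actually word the lure."
)
```

The essential habit is the last step: the AI’s explanation is filed alongside human commentary rather than accepted as-is.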

In terms of the “channel” aspect of the service, there is the question of how I might become aware of events at the company. There are a few ways. The easiest is simply to include me in the CC. I am familiar with the heavy use of email in most organizations; I would only want CCs from particular heads. Before this “data” is forwarded to the AI, it would be pseudo-converted in bulk using, yes, a pseudo-conversion program. The AI would never be involved in reading the non-converted data. Another approach, if the company really wants to be safe, is to email me only pseudo-converted details in bulk. In this scenario, the company takes on the responsibility of keeping track of who is who: e.g. “Linus suggested that we . . .” (Make certain the company knows who Linus is, since I would have no pseudo-conversion details.)
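
As an illustration of the bulk step, here is a minimal sketch of converting a batch of CC’d messages before anything reaches the AI. The substitution table, names, and messages are hypothetical; in the second scenario described above, the table would live with the company rather than with me.

```python
# Illustrative bulk pseudo-conversion of CC'd messages before anything is
# forwarded to the AI. The substitution table stays with whoever performs
# the conversion (me in the first scenario, the company in the second).
# Names, domains, and messages are hypothetical.


def convert_batch(messages: list[str], pseudo_map: dict[str, str]) -> list[str]:
    """Apply every real -> pseudo substitution to each message in the batch."""
    converted = []
    for msg in messages:
        for real, pseudo in pseudo_map.items():
            msg = msg.replace(real, pseudo)
        converted.append(msg)
    return converted


if __name__ == "__main__":
    # Held by the company in the second scenario; I never see the real names.
    pseudo_map = {
        "Linus Halvorsen": "Linus",
        "acme-corp.com": "company-x.example",
    }
    inbox = [
        "Linus Halvorsen suggested that we move the VPN gateway off acme-corp.com.",
        "Reminder: Linus Halvorsen scheduled the phishing drill for Tuesday.",
    ]
    for line in convert_batch(inbox, pseudo_map):
        print(line)  # only these converted lines are ever passed along to the AI
```

Because only the holder of the table can reverse the substitutions, the company must remember who “Linus” really is.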