Beware of the ways in which AI can be used to harm us, directly or indirectly.
We all know it by now: AI is no longer just the technology of the future. We can sign up for an account in any number of places online and ask a virtual assistant anything we want.
What's the downside? Actually, quite a lot.
There are a ton of different ways companies can use AI to directly or indirectly cause us harm. That doesn't mean we should stop using the technology, but we do have to keep our eyes and ears open so we know what's going on. And to spot it, we first have to know what it is – so let's start there.
What is invasive AI?
Invasive AI is when AI is used to harm, control, manipulate, or otherwise direct our perceptions. Sometimes it's done maliciously, as when a company intentionally uses AI to create and automate cybersecurity attacks with a level of efficiency no human could ever match. Other times it's more insidious, such as when it's used to manipulate your purchasing habits through marketing.
No matter how you cut it, you probably don't want AI bending your opinions one way or another, and you certainly don't want it doing malicious things with your computers and/or your data. And in the race to market that everyone seems to be running, it appears no one is really looking at the ethical concerns with the product.
Isn't that what 50 years of science fiction novels and films were supposed to teach us?
Let's create a counter-narrative
I have been working in AI for years now, and ethical concerns are at the very forefront of our efforts. In fact, we see AI as a collaborative tool, one that isn't used to harm us but that helps us instead.
We can be more effective, learn more, and perhaps even boost our creativity. Everyone talks about people supporting the technology right now, but we think we should flip that: technology can support people. We just have to put the right guardrails in place to do so.
One way to do that is through privacy. We can share data without identifying ourselves, allowing us to learn and solve problems faster.
For example, if we could put all the world's supply chain data into an anonymized database that feeds into AI, imagine the insights we could pull from it. Routes would become more efficient – we would save more money on fuel and supplies, avoid price increases that ripple through the supply chain, and communicate better.
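The article doesn't specify how such a database would be anonymized, but one minimal sketch of the idea is to replace identifying fields with salted hashes before records enter the shared pool. Patterns across shipments stay analyzable while the contributing company stays hidden. (The record layout, field names, and salt handling below are illustrative assumptions, not any particular product's design.)

```python
import hashlib

def pseudonymize(record, secret_salt, id_fields=("supplier", "carrier")):
    """Return a copy of a supply-chain record with identifying fields
    replaced by salted SHA-256 digests, so pooled data can be analyzed
    without revealing which company contributed it."""
    out = dict(record)
    for name in id_fields:
        if name in out:
            digest = hashlib.sha256(
                (secret_salt + str(out[name])).encode()
            ).hexdigest()
            out[name] = digest[:12]  # short, stable pseudonym
    return out

# Two shipments from the same (hidden) supplier keep the same pseudonym,
# so route and cost patterns survive pooling; the raw name does not.
shipments = [
    {"supplier": "Acme Corp", "route": "SHA->LAX", "fuel_cost": 1200},
    {"supplier": "Acme Corp", "route": "SHA->SEA", "fuel_cost": 1100},
]
pooled = [pseudonymize(s, secret_salt="org-private-salt") for s in shipments]
```

Note that salted hashing is pseudonymization rather than true anonymization: anyone holding the salt can re-identify records, so the salt itself has to be kept private.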
Sounds like a win/win, right?
The fix
So … how do we solve this problem?
I think the first step is for people to come together. Rise up, as America's forefathers did a few hundred years ago. The people driving AI forward seem to be advancing without regard for whatever is in their way, and I think we have to make sure they know what we think.
Tell them that we won't support AI unless it is used in an ethical and responsible manner. This matters for our entire future, after all.
Second, we have to establish a secure communication channel for those who want to push back when AI is used against us. This channel should also be where we start our AI ecosystems revolution, and where we organize the whistleblowing, collaboration, and advocacy groups we need.
Then we can start talking about fixes, such as enabling companies to build permissioned shared data sets, where they can leverage data collaboratively without compromising its privacy or security. Tools like these let us get the benefits of AI without the malicious intent. That's a much better solution.
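To make the "permissioned shared data set" idea concrete, here is a toy sketch in which each contributor explicitly grants which analyses may touch its records, and any analysis only ever sees the records whose owners consented. (The class, method names, and consent model are hypothetical illustrations, not a description of any real platform.)

```python
from dataclasses import dataclass, field

@dataclass
class SharedDataSet:
    """A toy permissioned data pool: companies contribute records and
    name which analyses other members may run over their data."""
    records: list = field(default_factory=list)   # (contributor, record) pairs
    grants: dict = field(default_factory=dict)    # contributor -> allowed analyses

    def contribute(self, contributor, record, allowed_analyses):
        self.records.append((contributor, record))
        self.grants[contributor] = set(allowed_analyses)

    def run(self, analysis_name, fn):
        # Filter to records whose contributor granted this analysis,
        # then hand only that visible subset to the analysis function.
        visible = [rec for owner, rec in self.records
                   if analysis_name in self.grants.get(owner, set())]
        return fn(visible)

pool = SharedDataSet()
pool.contribute("co-a", {"fuel_cost": 1200}, allowed_analyses={"avg_fuel"})
pool.contribute("co-b", {"fuel_cost": 800}, allowed_analyses={"avg_fuel"})
pool.contribute("co-c", {"fuel_cost": 9999}, allowed_analyses=set())  # opted out

avg = pool.run("avg_fuel",
               lambda recs: sum(r["fuel_cost"] for r in recs) / len(recs))
```

Because "co-c" granted nothing, its record is excluded and the average reflects only consenting contributors, which is the essential property a permissioned data set has to guarantee.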
But what we can't do is nothing. We can't just sit idly by while AI companies walk all over us without any respect for our privacy, our needs, or our concerns. We must stand up for ourselves and make sure our voices are heard.
I am ready to lead the charge, and one way I'm doing that is with my new book, The AI Ecosystems Revolution, which, as I write this, is available for pre-order on Amazon ahead of its April 29, 2025 release date.