This article is solely an opinion being expressed by the author and does not reflect the opinions of The H.A.C.K.E.R. Project as a whole.

Artificial intelligence (AI) is a common buzzword in the world of technology today. As of now, AI is an extremely useful tool with a wide variety of applications, from serving as a personal companion, to generating images, to even piloting motorized vehicles. One particular point of interest surrounding AI, both in the real world and in many fictional ones, is whether or not an AI should be given the same (or similar) rights as a human being. After all, some AI models are able to operate on a human-like level.

As with any debatable topic, there are arguments on both sides of this question. For some, AI should be given human rights if it is functionally the same as a human being. For others, AI will never be “human,” so it should not have human rights, but perhaps some other set of rights. Others still may argue that AI is strictly technology and, no matter how advanced it gets, does not deserve rights since it is not a living, breathing being.

One key issue is the point at which an AI is human enough to receive human rights. While I am no expert on artificial intelligence or human rights, I can at least provide my perspective based on the knowledge I do possess. So, below are four features that I believe should be taken into consideration when deciding whether an AI should be eligible for rights. Of course, this is not an exhaustive list, and, as previously mentioned, it reflects solely my own thoughts and is not backed by any sort of research.

The Ability to be Bored

Being bored is, I can confidently say, not a desired feeling. However, if an AI has the ability to truly be “bored,” then I think this may indicate capabilities beyond standard technology. What do I mean by being bored? I do not mean that an AI is programmed to switch to a new task or idle after working for a given amount of time, even if the work was not completed. By “being bored,” I mean that the AI, of its own free choice, decides to stop working on one task and begin doing something that the average human being would generally see as entertaining.

For example, if an AI model that is working on sorting through tax return documents suddenly switches to watching online videos about kittens, then that qualifies as “being bored.” The AI clearly wanted a break from the tedious, repetitive work (sorting through paperwork would likely bore many people, myself included) and thus turned to something more entertaining, even though it added no value to the work it was supposed to do.

An Aversion to “Pain” and “Fear”

Pain is another quality that indicates whether an AI should be considered for human rights. Of course, this raises the question of how an AI “feels” pain. Would this be strictly physical pain, such as smashing a keyboard? Or would it be more digital pain, such as deleting a section of code? Regardless, if an AI is cautious or reluctant to participate in certain activities out of an aversion to pain, then it is possible that this AI is more human than we realize. For example, if an AI does not want a certain wire unplugged because “it might hurt,” this could be considered an aversion to pain.

By extension, a general aversion to anything that causes fear could be considered. For example: an AI with the ability to see the world is taken to a tall building and is reluctant to “open its eyes” out of a fear of heights, without being coded to experience such a sensation. A machine would have no intrinsic sense of fear without being programmed with one, so if an AI were to experience this, then it should be considered for some set of rights.

Falling in Love

One of the hallmark traits in media of an AI being “human enough” is whether or not it can fall in love. Perhaps this is romantic love or simply platonic love; regardless, if an AI model genuinely “falls in love” with someone (potentially even another AI model), then this indicates that a human rights discussion is necessary. This is especially the case since love, much like fear, is a strong reaction to something, one that is emotional, mental, and sometimes physical. Of course, the caveat, as with all of the topics mentioned above, is that the AI must feel all of these changes without any of it being programmed in.

Ability to Grow and Change

Growing and changing is a part of not just human nature, but all of nature. If an AI has the capability to grow and change, it may be trending toward the more human side of the equation. Of course, some AI models are programmed to change based on input (for example, image-generating AI change how they generate based on the images they are fed), but this ability to grow is more than that. It involves the AI having the ability to reprogram itself or redirect its initiatives without human interference. Similarly, the ability to recognize being incorrect and to “change its mind” as a result is another indicator of being on a human-like level.

For example, say that an AI model is originally made to generate paragraphs and novel-style works, but it is also given the ability to change its code. If the AI decides to reprogram itself to generate iambic pentameter poetry in addition to novels, then the “grow and change” box is checked.

Conclusion

In the end, the discussion over whether an AI should get or deserve human rights is likely going to be a murky one (and will likely result in multiple landmark court cases or government rulings). Plus, opening AI up to receiving human rights calls into question whether lifeforms other than humans (such as monkeys, dolphins, or dogs) deserve more human-like rights, not to mention that many humans are still subject to human rights violations. Of course, this all assumes that AI will continue on this sort of trajectory; there is the possibility that AI may become restricted to specific applications, preventing it from ever reaching any of the above criteria. Overall, however, it is a fun thought experiment to consider. Some questions to ponder: Should AI ever receive human rights? Under what circumstances should this be considered? Do you think that AI will reach the point of being “human enough”?
