What rights does the United States propose to protect humans from artificial intelligence?

With five principles, the White House published a proposal to guide the use of these systems.

Given the growth in the use of artificial intelligence, authorities in several countries believe regulation is needed. In that spirit, the White House presented an ‘AI Bill of Rights’.

This plan is not a law to be applied directly, but rather a set of recommendations that different sectors are already debating, all with the aim of creating a legal framework so that companies and governments follow certain guidelines when building and deploying this technology.

“Too often, these tools are used to limit our opportunities and prevent our access to critical resources or services. Systems that are supposed to help with patient care have proven to be unsafe, ineffective, or biased,” the White House reported, adding that some algorithms used in hiring and credit decisions generate inequality.

The five proposals to regulate AI

The US government presented five principles to protect people in this digital landscape, grounded in the idea that these systems can widen existing gaps and that people must come first.

The first is called ‘Safe and Effective Systems’ and addresses the right to be protected from unsafe or ineffective systems. It calls for greater effort to identify risks and complications before deployment, that is, for a plan that confirms an AI system is safe and effective before it is put to use.

The second is ‘Protections Against Algorithmic Discrimination’. It seeks to ensure that systems are built without bias and treat people fairly, since there have been cases in which an AI rejects someone because of race, ethnicity, sex, or other characteristics.


The third is ‘Data Privacy’. Here they stress that those who create and use artificial intelligence must protect people’s data, and that people should know what their data is being handed over for.

The fourth principle is called ‘Notice and Explanation’. It holds that a person must be aware when they are using a system of this type; developers will have to clearly explain how the process works and give notice of changes.

Finally, the fifth makes clear that people should be able to opt out of such systems and have access to “timely human consideration” in the event that a system fails.

The proposal receives criticism

In response to the White House declaration, the United States Chamber of Commerce asserted that these points could pose a problem for the technology industry.

“There are some recommendations that, if enacted as rules by lawmakers, could hamper the ability of the United States to compete on the global stage,” said Jordan Crenshaw, vice president of the organization, which will present its own proposal in 2023.

The United States is not the first country to open a debate on AI regulation. The European Union is already in the process of legislating its AI Act, which will put limits on these systems, and Brazil has been going through a similar process since last year.


Companies promise their robots won’t be weapons

In a closely related move, a group of six robotics companies signed a declaration pledging that their products will not be turned into weapons.

“We pledge that we will not weaponize our advanced-mobility general-purpose robots or the software we develop that enables advanced robotics, and we will not support others to do so. Where possible, we will carefully review our customers’ intended applications to avoid potential weaponization,” the signing companies said in a statement.

The signatories were Boston Dynamics, Agility Robotics, ANYbotics, Clearpath Robotics, Open Robotics, and Unitree Robotics, all as a warning against the misuse that certain people or organizations could make of the projects they develop.
