Post by account_disabled on Mar 7, 2024 4:45:31 GMT
The Project also addresses the decision-making process of AI algorithms, as well as the prior risk assessment of solutions. Interestingly, the Project does not require prior authorization, as was the case with virtual asset service providers under Law No. .
In other words, although there is provision for a regulatory authority to control the supply of artificial intelligence systems, maintain a database of functioning systems, monitor them, and apply sanctions, it is not clear whether that control will be a priori, concomitant, or a posteriori. The legislative option is closer to the LGPD strategy, defining "artificial intelligence agents" as a genus with two species: the supplier and the operator of the artificial intelligence system.
Some applications are prohibited by the Project, such as inducing people to compromise their health and safety or to commit illegal acts, and uses for public safety purposes are strictly delimited.
The legislative proposal lists fourteen activities considered high risk, and the list may be extended by an infralegal regulator. The only high-risk activity related to the financial system is the "assessment of the debt capacity of individuals or establishing their credit rating".
The proposed standard continues with the stipulation of instrumental duties associated with data governance and control, the communication of serious incidents, and, in the case of high-risk solutions, the algorithmic impact assessment.
Finally, civil liability does not depart from the Consumer Protection Code regime and is strict (objective) in the case of high-risk or excessive-risk systems.
The lack of a prior authorization or licensing requirement appears incompatible with the provision of a regulatory sandbox for the prior evaluation of experimental AI solutions. After all, if prior authorization is not required for an AI system to be offered, what need would there be for a sandbox regime?
Thus, considering the provision for evaluating the risk of systems and monitoring compliance with the stipulated duties, the regime borrowed from the LGPD does not seem to make sense; the prior authorization regime adopted in the Virtual Assets Law would be more appropriate. However, this alternative may impose obstacles to rapid market dynamics, delaying the supply of AI systems and burdening the regulatory authority with authorization requests that may take a long time to be granted and whose effectiveness may be questionable.
Ideally, the Chamber of Deputies should adjust the text to make clearer when the risk assessment, the possible algorithmic impact analysis, and the fulfillment of the other duties set out in the legislative proposal should be carried out.
There is no easy answer to the questions raised here. However, specifically in the case of AI applications, considering the insufficiency of information to measure risks and understand the dynamics of algorithms, disproportionate regulatory intervention could make innovation unfeasible or, alternatively, drive entrepreneurship to other jurisdictions.
It is not possible to regulate what one does not understand. For now, it is important to ensure the availability of the information necessary for decision-making by the authorities, which requires constant dialogue among academia, companies, and the State.