Now, China plans to regulate algorithms that alter facial and voice data

China’s cyber watchdog has been making several attempts to govern companies’ use of algorithms in the last few months.


China is planning to regulate content providers that tweak facial and voice data, according to draft rules outlined by the Cyberspace Administration of China (CAC). The rules call for “deep synthesis service providers and users of deep synthesis services” to respect social ethics, adhere to the “correct political direction” and not disturb public order. The agency has invited people to share their feedback by February 28, 2022.

The rules define “deep synthesis” as the use of algorithms represented by deep learning and virtual reality to produce text, images, audio, video, virtual scenes and other information.

They will cover the following technologies:

  • Text generation, text style conversion, question and answer dialogue to generate or edit text content;
  • Text-to-voice, voice conversion, voice attribute editing, etc. to generate or edit voice content;
  • Music generation, scene sound editing, etc., to generate or edit non-voice content;
  • Face generation, face replacement, character attribute editing, face control, and posture control to generate or edit faces and other biological features in images and video content;
  • Image enhancement and image repair for editing non-biological characteristics of images and video content;
  • Three-dimensional reconstruction for generating or editing virtual scenes.

This may be the first time that a country has attempted to address the problem of “deepfakes”, which have become rampant in the last few years. The rules can also be interpreted as another step towards increasing oversight of the country’s tech companies and keeping them in check.

Responsibilities of businesses

  • Authenticate users: Service providers can only provide their services to users who furnish their real identity information.
  • Add marks for traceability: Businesses will need to take technical measures to add marks to the content they produce so that it can be identified and traced back (see the sketch after this list).
  • Dispel rumours: They will have to establish a mechanism to refute rumours in case users use deep synthesis technology to produce, copy, publish and disseminate false information, and report relevant information to the internet information departments and other relevant departments for the record.
  • Maintain a database: They will have to maintain a database of characteristics to strengthen the management of their information, and review the data entered by users to identify illegal content.
  • Evaluate algorithms: They will also have to review, evaluate and verify the mechanism of algorithms when providing models with editing functions for biometric information such as faces and voices, or for other information that may involve national security and social and public interests.
  • Obtain consent: Separate consent will need to be obtained if service providers provide significant editing functions for face, voice, and other biometric information.
  • Establish systems for audit: Content providers will have to take care of security by establishing systems for algorithm mechanism audit, user registration, information content management, data security and personal information protection, protection of minors, education and training of employees, and so on.
  • Disclose rules prominently: They will have to disclose platform rules and conventions in a conspicuous manner, improve service agreements, remind users of their security obligations, and perform the corresponding management duties.
  • Self-regulation by industry bodies: The rules call for self-regulation by industry organisations to strengthen discipline and security of content, establish and improve standards, industry guidelines and management systems, urge and guide players to formulate service norms, and accept governmental supervision.
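The draft rules do not prescribe a specific technical mechanism for these traceability marks. Purely as an illustration, the minimal Python sketch below shows one hypothetical way a provider could attach a machine-readable trace record (a hash of the output, a provider identifier, a timestamp and the requesting user) to each piece of synthesised content; the field and function names are assumptions made for this example, not requirements taken from the rules.

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical provider identifier -- not specified anywhere in the draft rules.
PROVIDER_ID = "example-deep-synthesis-provider"

def attach_trace_record(content_bytes: bytes, user_id: str) -> dict:
    """Build a machine-readable record that lets a piece of synthesised
    content be identified and traced back to its origin."""
    return {
        "provider": PROVIDER_ID,
        "user": user_id,  # the real-name-authenticated user who requested the content
        "sha256": hashlib.sha256(content_bytes).hexdigest(),  # fingerprint of the exact output
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "synthesised": True,  # flags the content as algorithmically generated
    }

if __name__ == "__main__":
    record = attach_trace_record(b"generated video frame data", user_id="user-1234")
    # A provider would store this record alongside the content it delivers.
    print(json.dumps(record, indent=2))
```

In practice a provider would store such records alongside the content it delivers, so that a given piece of output can be matched back to the service and the authenticated user that produced it.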

Recourse available to users

The rules mandate that businesses offering “deep synthesis” services will need to set up a mechanism for users to report complaints conveniently and effectively, while publicising the process for handling complaints and the time limit for responding to them.

The agency has stipulated that violations will result in penalties of between 10,000 and 100,000 yuan. The penalties will come into the picture only if providers refuse to heed warnings and make the corrections suggested by the government; the businesses will be suspended until they make the corrections.

“Those who violate these provisions and cause damage to others shall bear civil liability according to law; those who constitute violations of public security administration shall be given administrative penalties for public security and those who constitute crimes shall be investigated for criminal responsibility,” the provisions clarified.

Marking content produced by users

Content providers will have to use “a significant way” to mark content produced by users and remind them that the content is “synthesised”.


The rules lay down the following as content which will need to bear a mark:

  • Provide identification at the source of the text for intelligent dialogue, intelligent writing and other services that simulate text generation or editing by natural persons (one possible form of such a mark is sketched below);
  • Provide identification in a “reasonable area” of the audio for voice generation and editing services such as synthetic voice and voice imitation;
  • Mark a prominent position of the image or video if a virtual character image or video has been generated, or if personal identity characteristics have been changed through face generation, face replacement, face control, posture control, etc.;
  • Show the changes in a prominent position in content concerning “immersive fictitious scenes”.
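The rules similarly leave the exact form of these marks to providers. As a loose illustration of the first item in the list, the short Python sketch below prepends a conspicuous “synthesised” disclosure to generated text before it is shown to readers; the notice wording and the function name are hypothetical and not drawn from the rules.

```python
# Hypothetical disclosure text -- the rules only require that the mark be conspicuous.
SYNTHESIS_NOTICE = "[Notice: this text was generated by a deep synthesis (AI) service]"

def label_generated_text(generated_text: str) -> str:
    """Prepend a conspicuous disclosure so readers can tell the text was
    produced by a deep synthesis service rather than written by a person."""
    return f"{SYNTHESIS_NOTICE}\n\n{generated_text}"

if __name__ == "__main__":
    print(label_generated_text("Tomorrow's weather in Beijing will be sunny."))
```

Audio, image and video services would need analogous marks in the “reasonable” or prominent areas the rules describe.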

Avoid activities which endanger national security

The rules call for organisations and individuals not to engage in illegal activities such as:

  • Endangering national security,
  • Undermining social stability,
  • Disrupting social order,
  • Infringing the rights and interests of others,
  • Peddling obscene, pornographic and false information, as well as information that infringes on other people’s reputation rights, portrait rights, privacy rights, and intellectual property rights.

“…shall not produce, copy, publish or disseminate information that incites subversion of state power, endangers national security and social stability, or contains lewdness,” read the rules.

China’s earlier attempt at regulating algorithms

CAC had released an earlier set of draft rules in August last year, which sought to regulate the following types of algorithms:

  • generative or synthetic,
  • personalised recommendation,
  • ranking and selection,
  • search filter,
  • dispatching and decision-making.

Here are some of the responsibilities of tech companies using algorithms:

  • Upholding mainstream value orientations, optimising algorithmic recommendation service mechanisms, vigorously disseminating positive energy, and advancing the use of algorithms in the direction of good.
  • Preventing the use of unlawful or harmful information as keywords for user interests, and the creation of discriminatory or biased user tags (see the sketch after this list).
  • Making recommendation algorithms used for search results, rankings, selections, push notifications, and other such use cases, transparent and understandable in order to avoid creating a harmful influence on users or triggering controversies or disputes.
  • Maintaining confidentiality of personal information, private information, and commercial secrets.
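The earlier draft likewise does not spell out how providers should screen keywords before they become user tags. As a rough illustration of the point on keywords and user tags, the Python sketch below filters candidate interest keywords against a hypothetical blocklist; the blocklist contents and the function name are assumptions made purely for this example.

```python
# Hypothetical blocklist -- the draft rules do not enumerate specific terms.
UNLAWFUL_OR_HARMFUL_KEYWORDS = {"example-banned-term", "another-banned-term"}

def screen_interest_keywords(candidate_keywords):
    """Drop unlawful or harmful keywords before they are recorded as
    user interest tags feeding a recommendation algorithm."""
    return [kw for kw in candidate_keywords
            if kw.lower() not in UNLAWFUL_OR_HARMFUL_KEYWORDS]

if __name__ == "__main__":
    tags = screen_interest_keywords(["travel", "example-banned-term", "music"])
    print(tags)  # ['travel', 'music']
```

A real system would also need a review process for the blocklist itself, as well as for the discriminatory or biased tags the rules warn against.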

What should they avoid using these algorithms for?

  • Algorithmic models that go against public order and good customs, such as by leading users to addiction or high-value consumption; companies should regularly examine, verify, assess and check algorithmic mechanisms, models, data, and application outcomes.
  • Activities harming national security, upsetting the economic order and social order, infringing the lawful rights and interests of other persons, and other such acts prohibited by laws and administrative regulations.
  • Creating fake accounts, giving false likes, comments and reshares, or engaging in traffic hijacking.
  • Self-preferential treatment and unfair competition, influencing online public opinion, or evading oversight by blocking certain information, over-recommending, and manipulating topic lists or search result rankings. They should not control hot search terms or selections.
  • Influencing minors to imitate unsafe behaviour or carry out acts violating social norms, or leading minors towards harmful tendencies that affect their physical or mental health.
