San Francisco has banned the use of facial recognition technology by police and government agencies, but the ban does not extend to airports or other federally regulated facilities.
The city’s legislative body, the Board of Supervisors, proposed the ban on the grounds that the technology’s potential to endanger civil rights and civil liberties outweighs its benefits.
The ban was part of broader legislation regulating the use of surveillance systems.
All but one of the nine members of San Francisco's Board of Supervisors endorsed the legislation.
The law states facial recognition could also exacerbate racial injustice and threaten the ability to "live free of continuous government monitoring."
Some also worry that the technology could misidentify innocent people as criminals and that such systems infringe on privacy in everyday life.
The American Civil Liberties Union has warned that facial recognition "can be used in a passive way that doesn't require the knowledge, consent, or participation of the subject."
"The biggest danger is that this technology will be used for general, suspicionless surveillance systems."
However, supporters of the technology argue that facial recognition systems can help police fight crime.
The technology has been credited with helping police capture dangerous criminals, but it has also been criticised for cases of mistaken identity.
San Francisco is home to tech companies such as Facebook, Twitter, Uber and Google parent Alphabet.
A similar ban is being considered in the nearby city of Oakland.
In China, authorities are using a vast system of facial recognition technology to track its Uighur Muslim minority across the country, according to a recent story in the New York Times.
Beijing has already attracted widespread criticism for its treatment of Uighurs in the northwest region of Xinjiang, where up to one million members of mostly Muslim Turkic-speaking minority groups are held in internment camps, according to estimates cited by a UN panel.
But according to the article, facial recognition technology - integrated into China's huge networks of surveillance cameras - has been programmed to look exclusively for Uighurs based on their appearance and to keep records of their movements across China.
It is thought to be the first known example of a government intentionally using artificial intelligence for racial profiling.