June 16, 2017

Identifying as an artist - using artistic tools to generate your photo for your ID

Copied without comment:

Artist says he used a computer-generated photo for his official ID card

A French artist says he fooled the government by using a computer-generated photo for his national ID card.

Raphael Fabre posted on his website and Facebook what he says are the results of his computer modeling skills on an official French national ID card that he applied for back in April.

Le 7 avril 2017, j’ai fait une demande de carte d’identité à la mairie du 18e. Tous les papiers demandés pour la carte étaient légaux et authentiques, la demande a été acceptée et j’ai aujourd’hui ma nouvelle carte d’identité française.

La photo que j’ai soumise pour cette demande est un modèle 3D réalisé sur ordinateur, à l’aide de plusieurs logiciels différents et des techniques utilisées pour les effets spéciaux au cinéma et dans l’industrie du jeu vidéo. C’est une image numérique, où le corps est absent, le résultat de procédés artificiels.

L’image correspond aux demandes officielles de la carte : elle est ressemblante, elle est récente, et répond à tous les critères de cadrage, lumière, fond et contrastes à observer.

Le document validant mon identité le plus officiellement présente donc aujourd’hui une image de moi qui est pratiquement virtuelle, une version de jeu vidéo, de fiction.
Le portrait, la planche photomaton et le récépissé de demande ont été montrés à la Galerie R-2 pour l'Exposition Agora

On April 7, 2017, I applied for an identity card at the town hall of the 18th arrondissement. All the papers required for the card were legal and authentic, the application was accepted, and I now have my new French identity card.

The photo I submitted for this application is a 3D model made on a computer, using several different pieces of software and techniques used for special effects in film and in the video game industry. It is a digital image, in which the body is absent, the result of artificial processes.

The image meets the official requirements for the card: it is a good likeness, it is recent, and it satisfies all the criteria for framing, lighting, background, and contrast.

The document that most officially validates my identity thus now presents an image of me that is practically virtual, a video game version, a fiction.

The portrait, the photo booth strip, and the application receipt were shown at the R-2 Gallery for the Exposition Agora.

The image is 100 percent artificial, he says, made with special-effects software usually used for films and video games. Even the top of his body and clothes are computer-generated, he wrote (in French) on his website.

But it worked, he says. He followed guidelines for photos and made sure the framing, lighting, and size were up to government standards for an ID. And voila: He now has what he says is a real ID card with a CGI picture of himself.

"Absolutely everything is retouched, modified, and idealized"
He told Mashable France that the project was spurred by his interest in discerning what's artificial and real in the digital age. "What interests me is the relationship that one has to the body and the image ... absolutely everything is retouched, modified, and idealized. How do we see the body and identity today?" he said.

In an email, he said that the French government isn't aware yet of his project that just went up on Facebook earlier this week, but "it is bound to happen."

Before he received the ID, the CGI portrait and his application were on display at an Agora exhibit in Paris through the beginning of May.

Now, if the ID is real, it looks like his art was impressive enough to fool the French government.

Posted by iang at June 16, 2017 05:21 AM

A face photo is biometric ID. Biometric ID can be faked. The taking of the photo needs to be done by someone other than the person being identified. The person taking the photo needs to already be trusted.

Thawte notaries, and people in the PGP web of trust, looked at state-issued ID for the people they verified. Instead of looking at state ID, people verifying identity need to be taking photos of biometric features: face, hands, fingerprints, ears, eyes, retina scans, etc. As in the PGP web of trust, this effort should be duplicated; the more duplication, the better.

I would add that the measure of trust for people verifying ID would ideally be insurance or a bond posted. Falsely verifying ID should open the verifier to claims against their insurance or bond.

Posted by: Vincent Youngs at June 16, 2017 02:25 PM

So, as an art project, the artist asks "what is Identity?"

In the above, is the photo "Identity"? And what is identifying about the photo?

In practice, what we want (assuming we believe in the notion of Identity as encapsulated in the ID card) is that the photo distinguishes that person from all others, to some degree of accuracy.

Nothing in the above description says that the photo isn't of the person. Specifically, it might be a good likeness. If it is a good likeness, then it achieves its job. If it is not a good likeness, it doesn't do the job.

The artist cunningly buried that question "elle est ressemblante" leading the reader to focus on the technology used as being a trick.

This is the trap the artist has laid: *all* photos are made using some technology. This is just another technology. All images are made using some method, by definition. Using another method is called innovation; it's not called forgery or faking or fooling.

Coming back to your notary/thawte/cacert/wot angle - yes I do think we should do that. Not to isolate whether the photo is made with a camera but to show whether it is a useful distinguisher. The person is the goal of identity, not the technology.

Posted by: Iang at June 17, 2017 02:41 AM

Consider the opposite.

Consider taking a person and using makeup (at least where I come from, it's not forbidden to wear makeup in photos for ID cards) so that they resemble someone else. Or appear to be a different sex, or quite a different age. I know this is possible. It's what makeup artists do: change your face into something else. Cosplayers are already getting extremely good at this.

But even "normal" women know a lot about how to improve their image. Even how to make parts of the face look bigger or smaller, or shaped slightly differently.

So imagine that I apply this kind of makeup, including another style and colour for my hair and maybe different glasses. None of this would likely violate the rules for those pictures.

So the picture would be of me. It would be a picture of a real person. Of the correct person. Who followed the rules.

Would this solve the issue?

People would probably match the picture to somebody else. And possibly even computers could be fooled about some of the biometric features.

Actually, a picture of a really good virtual copy could be a better picture for identifying the person than "my" picture.

Also, so far there is no biometric feature (besides perhaps DNA) that cannot be faked with relatively little effort. And you could easily add those to your virtual avatar as well.

Having the original photo taken by a trusted person would possibly prevent the virtual avatar approach. But the makeup version would work.

But now to the idea that for PGP/Thawte etc. biometric features should be taken. For what purpose? The question answered in this process is not "who is the person" but "is the person the one on the ID" or not. So what does it help to capture features at that time?

Nothing. Because the answer is either "yes, it's the person" or no signature.

If there is a signature, others rely on that answer and, for example, send the data, money, or whatever it's about. The others who rely are NOT in touch with the person and could not check against any features stored by all those other verifiers.

Also, how does it help the verifiers if they collect all that data? All they could possibly do is compare it with other collections. If at all. But that does not help to say which version is the correct one. It can only show that there is conflicting information. And the reason could also be some fault by a prior verifier.

Either you compare the data and only allow copies of the first entry, in which case you lose the multiple independent (!) data points that are meant to be behind the WoT approach. Or you don't compare the data. Then your accumulation of data does not help in any way.

Sure, it helps to make the verifier responsible. That's the answer CAcert gave. But it only goes so far as ensuring that the verifiers honestly believe their observations. It does not help if those observations are false. If the verifiers are fooled.

Back to the original post: I think the artist did a good job of demonstrating an important question. What is real about your identity?

Posted by: Eva at June 18, 2017 06:50 AM

An artistic rendering loses some details, and jpeg compression loses some details, but the difference is whether human judgement was involved in the choice of which details to lose. Within human consciousness, there is a difference between a non-preferential, neutral, Zen attitude, which merely witnesses without judging, and a biased attitude which has preferences about what it observes. The artist is biased. The artist has chosen which details to omit based on preferences, based on an idealized vision, presumably because the idealized vision was more pleasing to his ego. In court, it is desired that witnesses report accurately what they have seen, not omitting detail based on personal preferences. The ideal witness is a Zen master. A camera is more like a Zen master, because it does not have ego preferences.

Posted by: Vincent Youngs at June 18, 2017 06:53 AM

Responding to Eva:
Our posts were at the same time, so I had not seen yours when mine posted.

If face photos are taken from multiple angles, and a 3D construction generated, that would defeat most makeup illusions. Also, I envision that many biometric details be captured: hands, fingerprints, ears, iris scans, etc., not just the face.

The images captured by multiple verifiers may differ in some details, but that generally should not matter if there is no intent to deceive. The way I envision a system working, all the biometric data would be published to a blockchain, each verifier posting their captured data. To have ID would require a bond or insurance posted by both the verifier and the person being verified. The bond or insurance would pay a reward to the first person to file a successful dispute challenging any intent at deception. The dispute would be filed to the blockchain and decided by arbitration. If arbitration decides there was intent at deception, the person who filed the challenge would receive a reward paid for by the bond or insurance of the person judged to have intended to deceive. That should provide plenty of disincentive against the use of deceptive makeup.
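
The bond-and-challenge flow described here can be sketched in a few lines. Everything below is a hypothetical illustration of the incentive logic, not a real blockchain contract: the names (IdentityRecord, Challenge, resolve) and the 50% reward share are my own assumptions.

```python
# A minimal sketch of the bond-and-challenge incentive: a successful
# dispute, decided by arbitration, pays the challenger a reward out of
# the bond posted by the party judged to have intended deception.
# All names and parameters are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class IdentityRecord:
    owner: str
    bond: float          # funds at risk if deception is proven

@dataclass
class Challenge:
    challenger: str
    target: IdentityRecord
    reward_fraction: float = 0.5   # assumed share of the bond paid on success

def resolve(challenge: Challenge, arbitration_found_deception: bool) -> float:
    """Return the reward paid to the challenger (0 if the challenge fails)."""
    if arbitration_found_deception:
        reward = challenge.target.bond * challenge.reward_fraction
        challenge.target.bond -= reward   # paid from the deceiver's bond
        return reward
    return 0.0

record = IdentityRecord(owner="alice", bond=1000.0)
print(resolve(Challenge("bob", record), arbitration_found_deception=True))  # 500.0
print(record.bond)  # 500.0
```

The point of the sketch is only that the bond makes deception costly: a proven challenge transfers part of it to the challenger, while a failed one pays nothing.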

Posted by: Vincent Youngs at June 18, 2017 07:17 AM

Also, answering the argument that it doesn't matter what technology is used to create an image for ID, I think it is reasonable for a community governance to set standards about what technology is used. The standard may be arbitrary, and may change over time, but the decision would be made by whatever governance process a community uses, taking into consideration tradeoffs between accuracy and cost. Artist renderings have higher cost and lower accuracy, so it seems unlikely that very many communities would choose that technology to capture biometric data for ID purposes.

Posted by: Vincent Youngs at June 18, 2017 08:27 AM

@Vincent - you may be interested in this post on Steemit which is designed for blockchain governance, and includes arbitration:

On a Principled Approach to Blockchain Governance - 7 Requirements


One of the things that I learnt in the CAcert adventure was that governance was critical to the safe operation of large communities. How large is large ... is a question of much debate, but to put it bluntly, this is needed beyond say a 2 digit size, which I’ve always seen as around 30, but certainly well before Dunbar’s number of 150.

Now, the problem with governance is that once it’s in place, it becomes power. And power corrupts. So as technologists we like to try and invent things that take the power back out of the hands of the powerful, the corrupted, and place them into the technology. Tech can’t be corrupted, right? That’s a joke...

Some might call this decentralisation, but that places a technique to the fore, not the purpose. We don’t decentralise for the sake of it, we do that because we want its benefits, which work sometimes and sometimes not.

Principle - Automate what we can, govern the rest


Posted by: (Iang) A principled approach to governance at June 18, 2017 12:05 PM

I had seen that post about blockchain governance. I still do not see any solution for the problem of power corrupting people, resulting in government becoming unbalanced. As you rightly point out, decentralized governance can become corrupted just as easily as any other form of governance.

People who seek power will almost always find a way to get more of it no matter what form of governance is set up. Is there any form of governance to turn that upside down, and automatically take power away from those who seek it?

People can be motivated by love or power. People motivated by love do not seek power. Is there any form of governance that can give power to those motivated by love, even though they do not want it?

Posted by: Vincent Youngs at June 18, 2017 03:05 PM

@ Vincent (to your first answer on my post):

It would not create an incentive to stop faking. It would only increase the incentive to improve your fake ... or to pay off anybody who checks your ID.

AND it would increase the incentive to challenge. So it would spread distrust. Anybody would have to fear being dragged into this, even when trying their best. Who would want to be part of such a system? Any such challenge would damage you; it would leave a mark.

If you can possibly gain by challenging, or at least get out without harm, you just challenge as much as you can. Anything, everywhere. Nice income for little effort.

That would reduce trust in arbitration and would not help the system.

You don't want false challenges. You want reliability in the ID. Random false challenges only reduce felt reliability. And they also blur the rightful challenges.

(I probably should disclose that I worked as an assurer (verifier) and as an arbitrator in CAcert. In that second role, I also resolved challenges of the kind that you describe.)

Posted by: Eva at June 19, 2017 07:49 AM

@ Vincent (your last answer to Ian)

Love does not exclude the aim for power. On the contrary, there is a multitude of stories about the concept that people want to control what they love, in fear that it will be destroyed. Or at least they want the power of access to what they love.

A lot of people who abuse their partner or child claim that they wholeheartedly and truly love their partner or child. It's a tragedy to all involved.

Love often causes envy which is about power. And love can turn to hate.

Love also does not ensure the best treatment, or even good treatment, of the loved. Anybody who has parents has probably had that experience at some point in their life.

I'm not sure, but I don't think that love causes fewer problems, fights, conflicts or disputes than power.

Posted by: Eva at June 19, 2017 08:12 AM

But coming back to the original post.

So the artist showed that the approach of storing and comparing pictures probably is broken. Others have also shown that tools can easily be fooled about biometric features. And humans regularly fail to even recognize them without such tools.

So shouldn't the question be whether such features are able to "capture" the identity that we aim for, instead of just "solving" the problem that it does not work by adding/multiplying more of the same?

Posted by: Eva at June 19, 2017 08:50 AM

@ Eva re: love

The word love has so many different meanings that your argument that love just causes more problems is true with some popular meanings of the word. There isn't a word I know of that conveys the meaning I want. It is the state of consciousness of spiritual masters, Jesus, Buddha, etc., which does not have attachment, does not attempt to control, yet does care. It is a state beyond ego, where the other is felt as being part of oneself, a state of unity, but combined with a Zen attitude, the attitude of a neutral witness. Very few people experience this in its pure form, but everybody has some trace of this consciousness. So it is a sliding scale, not a binary yes / no, about whether somebody is motivated by love or power. When people are identified with the ego, feeling separate from the external world, then they strive to control the external world. That is power. When people achieve a higher state of consciousness, they transcend the ego, and because they have unity with the external world, they can deal with its problems by going within.

This also describes the ideal person to resolve conflict, the characteristics that a mediator or arbitrator should have.

Regarding your argument that people would just file lots of challenges - of course there would be a fee to file a challenge, sufficient to pay all costs associated with processing the challenge. If the challenge succeeds, the fee is refunded. If the challenge fails, the fee is forfeit. So that would mean that people would only file a challenge if they had sufficient certainty to risk losing the fee. If people still file too many false challenges, the fee can be raised to a punitive level until the false challenges stop.
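
The refund/forfeit rule implies a simple break-even condition: a risk-neutral challenger files only when their estimated probability of success p satisfies p * reward > (1 - p) * fee. A minimal sketch of that arithmetic, with made-up fee and reward figures:

```python
# Break-even certainty for filing a challenge under the fee rule above:
# file only if p * reward > (1 - p) * fee, i.e. p > fee / (fee + reward).
# The 100/500 figures are illustrative, not from the post.

def break_even_probability(fee: float, reward: float) -> float:
    return fee / (fee + reward)

# With a fee of 100 and a reward of 500, a challenge is worth filing only
# when the challenger is more than ~16.7% sure the ID is fraudulent.
print(round(break_even_probability(fee=100.0, reward=500.0), 3))  # 0.167
```

Raising the fee to a punitive level, as the comment suggests, pushes this threshold up and so filters out speculative challenges.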

Regarding your argument that people would bribe verifiers, if the verifier has sufficient bond or insurance at risk, that would be a disincentive to cheat. If it is found that verifiers are still cheating, it just means the insurance or bond is not enough. The amount required for insurance or bond can be adjusted upward until cheating stops.

@ Ian re: blockchain governance

If a decentralized identity system as robust as I have described is used, the blockchain can be permissioned with a random method of choosing who signs the next block. This would be better than proof of work or proof of stake. Furthermore, it would be possible to limit block signing permission only to the group of arbiters. This would solve your concern about a conflict of power between the arbiters and the block verifiers - the arbiters would be the block verifiers too.
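
One way to make "a random method of choosing who signs the next block" verifiable is to derive the choice deterministically from the previous block's hash, so every participant can recompute and check the selection. A hedged sketch, not a production consensus design (a real scheme would also need to resist signers manipulating the hash):

```python
# Deterministically pick the next block signer from a permissioned set of
# arbiters, seeded by the previous block's hash. Anyone can recompute the
# choice, so the selection is publicly auditable. Illustrative only.

import hashlib

def next_signer(arbiters: list[str], prev_block_hash: bytes) -> str:
    digest = hashlib.sha256(prev_block_hash).digest()
    index = int.from_bytes(digest, "big") % len(arbiters)
    return arbiters[index]

arbiters = ["arb-a", "arb-b", "arb-c"]
print(next_signer(arbiters, b"previous block hash"))
```

Because the arbiter list is permissioned, this sidesteps proof of work entirely; the open question the thread raises, who gets onto the arbiter list, remains.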

This would give arbiters all the power in the system. Then it would make it necessary to have a really good method of determining who is and isn't an arbiter.

Posted by: Vincent Youngs at June 21, 2017 07:51 AM

1. avatar
You try to solve the issue of an ID document being created from a virtual avatar instead of a person by creating a virtual avatar of the person and verifying the person against that virtual avatar.

The best match to the virtual avatar, however, would be the virtual avatar itself. (And not the original person!) And that virtual avatar would be publicly available, which is precisely the basis of your idea.

But the person will never match this avatar exactly, either, because the person changes over time. We are aging and we are developing. So you still always have to expect variance.

2. fees
You first state that you want to make it appealing to challenge. And you argue that your system would work exactly because of this.

Now you say that you want to add a fee for doing so. By this you remove the appeal to challenge.

Probably there is a possible balance in there. But it's neither of the two simple approaches you name. You have to raise incentives, but also control them and deal with different interests.

If challenging is possibly profitable you have to balance at least incentives for:
- original person
- possible faker
- verifier (at least two)
- challenger
- arbitrator
... besides a possibly damaged person or group

3. technique
But you probably have to balance another person as well.

Because you want to build your system on a multitude of biometric features. You have to collect and verify them. Again and again and again.

Normal humans are not able to do this. Some may be able to do it for some features after training. But in general you will need some (expensive) tools. As well as room for special light settings and whatever. So you need facilities and programs.

Somebody has to create and maintain them. You have to control and protect every single unit of them. And you have to balance the maintainer of those. Or is it the same person as the verifier?

4. love
If you say the "love" element is extremely rare and almost never appears, it's not sensible to base your system on the assumption that somebody like this will be there.

And who will be the judge that the person really is of the nature you describe? If this judgement is off, you are possibly screwed, as you cannot easily replace that person.

5. all the power to the arbitrator
As already hinted at in another answer: the idea and power behind an approach like PGP and the web of trust (and that is what you wanted to create) is that everybody can and will be a node.

The aim is to have as many as possible independent checks done by a multitude of people and that those checks are also done over each other.

Ideally, a user of the WoT wants to have done as many checks as possible themselves, to be sure about the signatures of those they checked personally.

If you remove that option, but even more if you reduce the data points, and especially if you ensure that the data points you collect are no longer independent, you lose the web.

And if you concentrate the power to a smaller group (the well chosen arbitrators) and also only allow those to review the verifications done among themselves, you lose the possible independence of that step, as well.

But even if they would be truthful and honest, how should they manage the workload? Both of verifying and of dealing with the challenges?


Be careful to build a system on the idea that some perfect people are out there to which everybody would agree that those are perfect people. And be careful to assume that if those perfect people exist, that they would be willing to be "used" for your system.

Be also careful about building a system on technical accumulation of data that creates a separate, self-repeating image of people. Especially if every step to get there has already been proven easy to break.

Probably better think about the resource that you have in plenty. Normal, imperfect people who have goals of their own. ;-)

Posted by: Eva at June 21, 2017 09:00 AM

@ Eva

I will use your numbers for my response.

1. I don't know where the avatar concept came from. I didn't say it. My ID would consist of my name, my biometric photos, my public key, my insurance or bond, and my trust links to other people. I like Zooko's triangle. My name need not be globally unique, but my public key would be, as would all my other data.
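
The ID record described here can be sketched as a simple data structure: a human-meaningful name that need not be unique, bound to a globally unique public key, with the remaining data hanging off that key. The field names below are my own illustration, not a defined format.

```python
# A sketch of the decentralized ID record: non-unique name, unique public
# key, biometric references, posted bond, and trust links to verifiers.
# Field names are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class DecentralizedID:
    name: str                        # human-meaningful, not globally unique
    public_key: str                  # globally unique identifier
    biometric_photo_hashes: list[str] = field(default_factory=list)
    bond: float = 0.0                # posted insurance or bond
    trust_links: list[str] = field(default_factory=list)  # verifiers' public keys

vincent = DecentralizedID(name="Vincent", public_key="pk-123")
vincent.trust_links.append("pk-verifier-1")
print(vincent.name, len(vincent.trust_links))  # Vincent 1
```

This is consistent with Zooko's triangle as invoked above: the name is memorable but not unique, while uniqueness lives in the key.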

2. Fees would create a risk. A successful challenge would pay a reward. That creates a reward/risk ratio. When the challenger has sufficient certainty about fraudulent activity, the reward/risk ratio will be high enough to induce a challenge. When the challenger lacks sufficient certainty, the reward/risk ratio will be too low to act upon, and no challenge would be filed.

3. The method of capturing and comparing biometric data can be low cost. A common digital camera would suffice. Photos of face from various angles, photos of ears, both sides of hands, close up of eyes. There exists open source software for comparing face photos. That could be used to find close matches on faces. When close face matches are found, the other features can be compared manually.
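
Face-comparison software typically reduces each photo to a numeric embedding and compares embeddings by distance, with close matches falling under a threshold. This sketch shows only that comparison step on made-up vectors; the embedding values and the 0.6 threshold are illustrative, not taken from any particular tool.

```python
# Compare two face embeddings by Euclidean distance; a pair under the
# threshold counts as a close match to be reviewed manually. The vectors
# and threshold here are made up for illustration.

import math

def euclidean_distance(a: list[float], b: list[float]) -> float:
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_close_match(a: list[float], b: list[float], threshold: float = 0.6) -> bool:
    return euclidean_distance(a, b) <= threshold

enrolled = [0.1, 0.4, 0.3]      # embedding captured at enrolment (made up)
candidate = [0.12, 0.41, 0.28]  # embedding from a new photo (made up)
print(is_close_match(enrolled, candidate))  # True
```

The cheap automated pass finds candidate matches; as the comment says, the other biometric features would then be compared manually.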

4. My thoughts about love and the ideal state of consciousness are not to say that this is a requirement. Of course that won't work to require perfection of anyone. It is a thought construct to think about what kind of system would elevate a person in power the purer their love. People vary in this, so how can their power in the system vary with the purity of their love? Also, in a decentralized system there would not be any central authority to judge the purity of anybody's love. But people judge their peers in various ways. Is there a way that the purity of somebody's state of consciousness can be deduced from the judgements of their peers? Of course no such evaluation would be perfect, but is there a method that would work well enough? People know at some level, conscious or subconscious, whether others are relating to them with power or love. That will reflect in peer judgements, especially if a well crafted way to express that judgement is part of the system.

5. It is possible to give arbiters all power. I did not necessarily say I advocate that. I was just throwing it out as a possibility. With a permissioned blockchain, it is possible to divide power however you want. You could have 3 branches of government, and let them alternately sign every third block, or they could each keep their own blockchains, which exert mutual control over each other. Not saying I advocate that either. It is just another possibility.

It is also possible to have anyone serve as an arbiter, but to have their level and their rating depend on their peer trust links. There could be a hierarchy of arbiters by their ratings. This would still be a decentralized system because the arbiter ratings would be calculated in a decentralized way. Decisions by lower-level arbiters could be appealed to higher-level arbiters, provided somebody was willing to pay the higher fees. Higher-level arbiters would charge more to hear cases, so they would generally deal only with the higher-value cases. The workload for arbiters would be moderated by prices (arbiter fees) and the law of supply and demand. Each individual arbiter would set their own fees.

Posted by: Vincent Youngs at June 21, 2017 05:02 PM

@ Vincent

Yeah, you can try to build an extremely complex thing out of this, with a multitude of branches and levels and all. Only you would need to find enough people to participate.

Also, you claim that the core issue that would corrupt the system is the aim for power and personal gain. But every time you answer, you only add more of the same: money and power.

But at no step have you addressed the core issue: that the data obviously cannot be trusted if we only look at it. This is not changed by adding more and more complex governance systems.

And you never explained what the accumulation of that data is supposed to help with, for the core question that any of the involved people face at any specific point. (Until then, technical details of accumulation/comparison are irrelevant.)

Posted by: Eva at June 22, 2017 03:57 AM

@ Eva
The thread started about technology used to ascertain a person's identity to the rest of society, with artist rendering being one possible technology. I proceeded to advocate my own method for decentralized identity. I think my method for decentralized identity would work even though I do not have an answer to the deeper question of how to structure governance so that it does not become corrupted by power.

Market mechanisms work, that is why I think in terms of them. Money is a tool that can be used for either love or power. It is neutral in and of itself.

Posted by: Vincent Youngs at June 22, 2017 11:39 AM