My eyes are old but I'll still rely on PCGS and CAC to do the work. Of course, my tired eyes have the final decision.
Smoke and mirrors. I'll have to see it to believe it.
Many have tried to do computerized grading.
None have actually shown a product that is more accurate than the current human grading.
I don't see anything different here.
That could have been written by ChatGPT told to make a slide with a bunch of impressive-sounding latest-technology bullet points. The bit about a perpetual income stream is enough to red-flag the whole thing.
Thanks for the feedback everybody. I ran across an ad on Facebook for that company, googled them, and found their website. It made me curious as to how legit they are, and whether the things they describe are really possible with the technology that exists today or if they are just scamming people.
@Mr_Spud said:
Thanks for the feedback everybody. I ran across an ad on Facebook for that company, googled them, and found their website. It made me curious as to how legit they are, and whether the things they describe are really possible with the technology that exists today or if they are just scamming people.
And the answer is?
The answer is wait and see, but don’t hold your breath. This is based on a combination of what I suspect and the feedback that the others provided.
I would be extremely curious to see their formula for coin grading. To get AI to do grading they have to have a scientific formula for grading.
The substantial truth doctrine is an important defense in defamation law that allows individuals to avoid liability if the gist of their statement was true.
@RiveraFamilyCollect said:
I would be extremely curious to see their formula for coin grading. To get AI to do grading they have to have a scientific formula for grading.
No, they have to have ground truth for grading so that a neural network can learn a formula for grading.
The substantial truth doctrine is an important defense in defamation law that allows individuals to avoid liability if the gist of their statement was true.
They don't need a formula, they need exemplars, examples, a reference set. (I'm not sure what a "ground truth" was supposed to be, exactly.) The AI would compare the new example to the exemplars in order to "decide".
Of course, the AI is probably as likely as a human to make questionable decisions because of the inability to construct an actual formula. I don't think the people who think true AI will be less variable or "more correct" understand how variable a true AI could be and how very different two VF coins could be.
Ask your favorite AI the same question with slight variation and see how wide a range of responses you get. The neural networks are remarkably plastic.
@Mr_Spud said:
What’s up with this company? Think it’s le
We discussed some version of this on one of the NFT threads. I never could figure out how the "perpetual income stream" worked.
They are surface mapping the coin in 3D. That is data, but it won't be able to assess "eye appeal" and I'm not sure it could tell a weak strike from wear. If you started adding reflectance data as well, I suppose you could approach a decision.
The problem is that there is still a judgment call and I'm not sure we'd like the AI's judgment better than a team of graders. Other than differentiating modern 69s and 70s, there is always going to be a balancing of unique traits:
Is the toning an asset or a detriment?
Does the luster make up for a slightly distracting mark?
Is that exact mark in that exact location mildly distracting, moderately distracting, or a major distraction?
Digital fingerprinting isn’t too hard. Basically just need to run it through the appropriate image classification methods and get a value back. Then combine that with some other elements.
Coin grading can be done as well, but it is trickier. What I would do, though it would be prohibitively expensive, is generate a Siamese model using a standard set of coins for grade/type. Then run candidates through to see how similar they are and arrive at a grading vector. That isn't a perfect approach, but I think it would get a lot closer than other approaches I've heard.
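To make the comparison step concrete, here is a rough sketch of what I mean, assuming an embedding network has already been trained on a reference set of graded coins. The ResNet backbone, the 128-dimension embedding, and the function names are placeholders, not anyone's actual system:

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Hypothetical embedding network: a standard backbone whose final layer
# has been replaced so it outputs a feature vector instead of class scores.
backbone = models.resnet18(weights=None)
backbone.fc = torch.nn.Linear(backbone.fc.in_features, 128)
backbone.eval()

def embed(img_batch):
    """Map coin images (N, 3, H, W tensors) to unit-length 128-d vectors."""
    with torch.no_grad():
        return F.normalize(backbone(img_batch), dim=1)

def grade_by_similarity(candidate_img, reference_imgs, reference_grades):
    """Compare a candidate coin against a reference set of graded exemplars.

    Returns the full 'grading vector' of cosine similarities plus the grade
    of the closest exemplar. Everything here is illustrative, not a working
    grading standard.
    """
    cand = embed(candidate_img.unsqueeze(0))   # (1, 128)
    refs = embed(reference_imgs)               # (R, 128)
    sims = (cand @ refs.T).squeeze(0)          # cosine similarities to each exemplar
    best = int(torch.argmax(sims))
    return sims, reference_grades[best]
```

In practice the backbone would be trained with a contrastive or triplet loss on same-grade vs. different-grade pairs; the snippet only shows the comparison side.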
I think the core thing about A.I. grading, or any kind of computer-assisted grading, is not whether or not it will be internally consistent with itself - that is always going to be true, because an algorithm doesn't get tired, or have a bad day at the office, or get distracted with family issues, or any other human thing that can cause fluctuations when humans do things, the "subjectivity" that the website derides. The algorithms work, consistently: input x equals output y, and if you input the exact same x again, you should get the exact same y answer back, every time, so long as you don't alter the algorithms.
The problem is always going to be matching up the A.I.'s output with the already-established set of human output that is the numismatic grading standard. Because whether an A.I. would grade a coin "better" or "worse" than a human is irrelevant, if the answer is "it will grade them differently".
Which will, ultimately, create two different and competing grading standards. Which will in turn cause marketplace confusion for as long as the two systems remain in competition. The TPG-based system "works", because you can look up "the grade" in a reference book or website and get a price. Throw in an A.I. grading system, and you need separate tables for valuing the A.I. graded coins, with conversion between the two a matter for haggling and arguing.
Waste no more time arguing what a good man should be. Be one. Roman emperor Marcus Aurelius, "Meditations"
It needs to be shown coins of specific grades and told what each coin's grade is. The "right answer" is the ground truth. You can see how we already have a problem here, since the effective grade of a coin being bought, sold, appraised, or kept is always a market grade, which is a proxy for value. The term "true market grade" is a bit of an oxymoron, of course, and an unstable ground truth will be the weak link in any AI classification system.
There needs to be enough different coins of enough different grades so that when it learns, it isn't just memorizing what one coin is (i.e., "overfitting"). The diversity of the training set needs to roughly reflect what the system will see when grading.
Next, you have to show the coin to the network being trained such that your ground truth is salient. A blurry moon shot of a coin you say is 63 may be the truth, but it's meaningless for training the network. You have to be consistent with how this data will be acquired and presented, too. Then when you use it to grade, you have to duplicate how the coin is acquired and fed to the system.
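As a toy illustration of those points (purely a sketch; the folder layout, backbone, and hyperparameters are assumptions, not anyone's production pipeline), training against human-assigned grades with a held-out validation split is how you tell learning from memorizing:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, transforms, models

# Assumed layout: coin_images/<grade>/<image>.png, where the folder name
# (e.g. "63") is the human-assigned grade acting as ground truth.
tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
data = datasets.ImageFolder("coin_images", transform=tfm)

# Hold out a validation split so overfitting is visible: train accuracy
# climbing while validation accuracy stalls means the net is memorizing coins.
n_val = len(data) // 5
train_set, val_set = random_split(data, [len(data) - n_val, n_val])
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
val_loader = DataLoader(val_set, batch_size=32)

model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, len(data.classes))
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):
    model.train()
    for x, y in train_loader:
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    model.eval()
    with torch.no_grad():
        correct = sum((model(x).argmax(1) == y).sum().item() for x, y in val_loader)
    print(f"epoch {epoch}: val accuracy {correct / n_val:.3f}")
```

The same image pipeline used here would have to be reproduced exactly when the trained network is later asked to grade, which is the consistency-of-acquisition point above.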
I believe PCGS was surface scanning coins about 10 years ago. I'm pretty sure a company like PCGS will perfect this first, if it can be done. Personally I'm skeptical of a machine determining the difference between weak strike and wear, ignoring strikethroughs and die grease issues, and being able to effectively weigh all of the variables... strong luster with a weak strike and many bag marks vs. few bag marks with bad luster and an average strike (or any permutation of these factors) could probably both get an equal grade from a human. Right now, though, there is a lot of hype for any company that wants to throw all of those buzzwords into a presentation and ask for money.
Take the blockchain aspect. Not a big deal really. Does it matter to any of us that PCGS uses "normal" databases and not blockchains to store coin certs? I don't.
@ProofCollection said:
I believe PCGS was surface scanning coins about 10 years ago. I'm pretty sure a company like PCGS will perfect this first, if it can be done.
There are other uses for AI that PCGS is probably more interested in, including identifying types and checking for wrong labels on coins. These would be time savers and should reduce mistakes getting back to customers. The cost of handling a single returned mechanical error is much greater than the cost of grading one coin. Being able to quickly identify world types that may not have many experts would cut down on time spent nose in a book.
Besides, if the grading becomes permanently stable (or at least until the model is retrained), then that cuts into your regrade revenue stream.
@ProofCollection said:
I believe PCGS was surface scanning coins about 10 years ago. I'm pretty sure a company like PCGS will perfect this first, if it can be done.
There are other uses for AI that PCGS is probably more interested in, including identifying types and checking for wrong labels on coins. These would be time savers and should reduce mistakes getting back to customers. The cost of handling a single returned mechanical error is much greater than the cost of grading one coin. Being able to quickly identify world types that may not have many experts would cut down on time spent nose in a book.
You don't need AI to do that. All you need is a proper algorithm. That technology is not new.
@Sapyx said:
I think the core thing about A.I. grading, or any kind of computer-assisted grading, is not whether or not it will be internally consistent with itself - that is always going to be true, because an algorithm doesn't get tired, or have a bad day at the office, or get distracted with family issues, or any other human thing that can cause fluctuations when humans do things, the "subjectivity" that the website derides. The algorithms work, consistently: input x equals output y, and if you input the exact same x again, you should get the exact same y answer back, every time, so long as you don't alter the algorithms.
The problem is always going to be matching up the A.I.'s output with the already-established set of human output that is the numismatic grading standard. Because whether an A.I. would grade a coin "better" or "worse" than a human is irrelevant, if the answer is "it will grade them differently".
Which will, ultimately, create two different and competing grading standards. Which will in turn cause marketplace confusion for as long as the two systems remain in competition. The TPG-based system "works", because you can look up "the grade" in a reference book or website and get a price. Throw in an A.I. grading system, and you need separate tables for valuing the A.I. graded coins, with conversion between the two a matter for haggling and arguing.
The AIs do not necessarily give the same output from the same input every time, at least not the current language-based ones. Computers do, but the AI "thinks" differently.
The other problem with AI coin grading is a common problem with AI Neural Networks in general, and that is that nobody can really explain how the output is derived from the input. You cannot be sure the network hasn't seized on some irrelevant difference in training data...
Just think back to how many people wish they could ask the graders "why"?
-----Burton ANA 50 year/Life Member (now "Emeritus")
It would be great if they could come up with depth and height measurements from scanning the strike/details of a coin. The challenge/hunt would explode in search of the coin whose measurements break the records and signify it as the earliest/first coin struck.
And if they could measure the depth of the mirrors most VEDS coins come with.
And there would likely be depth differences with nicks, gouges and scratches that would show up in a scan of the surfaces.
Precise AI coin grading would be the answer to many serious collectors' prayers!
Maybe we'll see these machines lined up on the walls at the major coin shows. Pop in a coin and, with a whirl of bings, buzzes, dings and lights, out comes the answer!
.
Leo 😂
The more qualities observed in a coin, the more desirable that coin becomes!
@Sapyx said:
I think the core thing about A.I. grading, or any kind of computer-assisted grading, is not whether or not it will be internally consistent with itself - that is always going to be true, because an algorithm doesn't get tired, or have a bad day at the office, or get distracted with family issues, or any other human thing that can cause fluctuations when humans do things, the "subjectivity" that the website derides. The algorithms work, consistently: input x equals output y, and if you input the exact same x again, you should get the exact same y answer back, every time, so long as you don't alter the algorithms.
The problem is always going to be matching up the A.I.'s output with the already-established set of human output that is the numismatic grading standard. Because whether an A.I. would grade a coin "better" or "worse" than a human is irrelevant, if the answer is "it will grade them differently".
Which will, ultimately, create two different and competing grading standards. Which will in turn cause marketplace confusion for as long as the two systems remain in competition. The TPG-based system "works", because you can look up "the grade" in a reference book or website and get a price. Throw in an A.I. grading system, and you need separate tables for valuing the A.I. graded coins, with conversion between the two a matter for haggling and arguing.
The AIs do not necessarily give the same output from the same input always, at least not the current language based ones. Computers do but the AI "thinks" differently.
Classifiers and scoring systems will give identical output each time, assuming the input is identical and the network is not retrained.
@ProofCollection said:
I believe PCGS was surface scanning coins about 10 years ago. I'm pretty sure a company like PCGS will perfect this first, if it can be done.
There are other uses for AI that PCGS is probably more interested in, including identifying types and checking for wrong labels on coins. These would be time savers and should reduce mistakes getting back to customers. The cost of handling a single returned mechanical error is much greater than the cost of grading one coin. Being able to quickly identify world types that may not have many experts would cut down on time spent nose in a book.
You don't need AI to do that. All you need is a proper algorithm. That technology is not new.
No, but training a neural network for this is often easier than developing a custom algorithm.
@Sapyx said:
I think the core thing about A.I. grading, or any kind of computer-assisted grading, is not whether or not it will be internally consistent with itself - that is always going to be true, because an algorithm doesn't get tired, or have a bad day at the office, or get distracted with family issues, or any other human thing that can cause fluctuations when humans do things, the "subjectivity" that the website derides. The algorithms work, consistently: input x equals output y, and if you input the exact same x again, you should get the exact same y answer back, every time, so long as you don't alter the algorithms.
The problem is always going to be matching up the A.I.'s output with the already-established set of human output that is the numismatic grading standard. Because whether an A.I. would grade a coin "better" or "worse" than a human is irrelevant, if the answer is "it will grade them differently".
Which will, ultimately, create two different and competing grading standards. Which will in turn cause marketplace confusion for as long as the two systems remain in competition. The TPG-based system "works", because you can look up "the grade" in a reference book or website and get a price. Throw in an A.I. grading system, and you need separate tables for valuing the A.I. graded coins, with conversion between the two a matter for haggling and arguing.
The AIs do not necessarily give the same output from the same input always, at least not the current language based ones. Computers do but the AI "thinks" differently.
Classifiers and scoring systems will give identical output each time, assuming the input is identical and the network is not retrained.
That's a classic computer function. The AIs are not as linear in their "thinking". I use them daily at work. If I ask them the same type of question repeatedly, they do not always answer the same.
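The difference mostly comes down to sampling. A chat-style model typically samples its next token from a probability distribution, while a classifier just takes the most probable answer. A tiny illustration with made-up numbers, not tied to any particular product:

```python
import numpy as np

rng = np.random.default_rng()

def softmax(logits, temperature=1.0):
    z = np.asarray(logits) / temperature
    z = z - z.max()              # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

logits = [2.0, 1.6, 0.3]         # made-up scores for three possible "answers"
probs = softmax(logits)

# Generative-style: sample from the distribution, so answers vary run to run.
samples = [int(rng.choice(len(probs), p=probs)) for _ in range(5)]
print("sampled answers:", samples)      # e.g. [0, 1, 0, 0, 1]

# Classifier-style: always take the single most probable class, so the
# output is identical every time for the same input.
print("argmax answer:  ", int(np.argmax(probs)))
```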
@Sapyx said:
I think the core thing about A.I. grading, or any kind of computer-assisted grading, is not whether or not it will be internally consistent with itself - that is always going to be true, because an algorithm doesn't get tired, or have a bad day at the office, or get distracted with family issues, or any other human thing that can cause fluctuations when humans do things, the "subjectivity" that the website derides. The algorithms work, consistently: input x equals output y, and if you input the exact same x again, you should get the exact same y answer back, every time, so long as you don't alter the algorithms.
The problem is always going to be matching up the A.I.'s output with the already-established set of human output that is the numismatic grading standard. Because whether an A.I. would grade a coin "better" or "worse" than a human is irrelevant, if the answer is "it will grade them differently".
Which will, ultimately, create two different and competing grading standards. Which will in turn cause marketplace confusion for as long as the two systems remain in competition. The TPG-based system "works", because you can look up "the grade" in a reference book or website and get a price. Throw in an A.I. grading system, and you need separate tables for valuing the A.I. graded coins, with conversion between the two a matter for haggling and arguing.
The AIs do not necessarily give the same output from the same input always, at least not the current language based ones. Computers do but the AI "thinks" differently.
Classifiers and scoring systems will give identical output each time, assuming the input is identical and the network is not retrained.
That's a classic computer function. The AIs are not as linear in their "thinking". I use them daily at work. If I ask them the same type of question repeatedly, they do not always answer the same.
I develop software for certain medical imaging applications which has AI components to it. A given network, once trained, must produce identical results every time given the same input. Not all AI network architectures are generative in nature.
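For what it's worth, this is the kind of repeatability I mean, shown here as a generic PyTorch sketch rather than our actual medical-imaging code: a frozen network in eval mode, fed the same tensor twice, returns identical numbers.

```python
import torch
from torch import nn

torch.manual_seed(0)                      # only affects weight initialization here
torch.use_deterministic_algorithms(True)  # refuse any nondeterministic kernels

# Stand-in for a trained scoring network: once the weights are fixed,
# inference is just arithmetic on the input.
net = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))
net.eval()

x = torch.randn(1, 64)                    # the "same input", reused twice
with torch.no_grad():
    out1 = net(x)
    out2 = net(x)

assert torch.equal(out1, out2)            # identical output, bit for bit
print(out1.item(), out2.item())
```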
@ProofCollection said:
I believe PCGS was surface scanning coins about 10 years ago. I'm pretty sure a company like PCGS will perfect this first, if it can be done.
There are other uses for AI that PCGS is probably more interested in, including identifying types and checking for wrong labels on coins. These would be time savers and should reduce mistakes getting back to customers. The cost of handling a single returned mechanical error is much greater than the cost of grading one coin. Being able to quickly identify world types that may not have many experts would cut down on time spent nose in a book.
You don't need AI to do that. All you need is a proper algorithm. That technology is not new.
No, but training a neural network for this is often easier than developing a custom algorithm.
I'm not sure about that. "Identifying types and checking for wrong labels on coins" is just a standard pattern recognition and comparison task. It's far easier to program in the pattern to look for than to "teach" an AI and hope it "learns" correctly.
@messydesk said:
No, but training a neural network for this is often easier than developing a custom algorithm.
In my mind, the algorithm comes first.
The Nobel Prize in numismatics goes to whoever writes that coin grading algorithm.
The substantial truth doctrine is an important defense in defamation law that allows individuals to avoid liability if the gist of their statement was true.
I’ve said it before and I’ll take the opportunity to say it again. AI grading means standards will constantly change. Increasingly little over time, but forever.
Andy Lustig
Doggedly collecting coins of the Central American Republic.
Visit the Society of US Pattern Collectors at USPatterns.com.
@MrEureka said:
I’ve said it before and I’ll take the opportunity to say it again. AI grading means standards will constantly change. Increasingly little over time, but forever.
They've changed since I have been collecting.
As AI can write its own code as it learns, that is where it becomes fluid, with the potential for drastic evolution. This is where a thinly traded environment like classic coins becomes potentially unmanageable, IMO, especially in the higher mint state grades.
@MrEureka said:
I’ve said it before and I’ll take the opportunity to say it again. AI grading means standards will constantly change. Increasingly little over time, but forever.
Standards have always changed. It goes with the territory of being ill-defined. AI grading simply means that the constant change has to be implemented differently from the way it is today.
@ProofCollection said:
I believe PCGS was surface scanning coins about 10 years ago. I'm pretty sure a company like PCGS will perfect this first, if it can be done.
There are other uses for AI that PCGS is probably more interested in, including identifying types and checking for wrong labels on coins. These would be time savers and should reduce mistakes getting back to customers. The cost of handling a single returned mechanical error is much greater than the cost of grading one coin. Being able to quickly identify world types that may not have many experts would cut down on time spent nose in a book.
You don't need AI to do that. All you need is a proper algorithm. That technology is not new.
No, but training a neural network for this is often easier than developing a custom algorithm.
I'm not sure about that. "Identifying types and checking for wrong labels on coins" is just a standard pattern recognition and comparison task. It's far easier to program in the pattern to look for rather than "teach" AI and hope it is "learned" correctly.
Maybe, maybe not. To produce an algorithm (AI or otherwise) to identify types, I might start by training a network to become extremely good at optical character recognition of arbitrary orthographies with arbitrary character rotations. I wouldn't need coins as my only input data here, but it would help to have some. The rest of the input data could be synthetic, including synthesizing embossed lettering on a reflective surface. Once reliable, this network can produce inputs for other tasks, including type recognition, label verification, country of origin, anything that can benefit from being able to accurately read the text on a coin, even one that hasn't previously been seen during training. It is now merely a manageable component that can be integrated with others for higher level tasks.
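As a rough idea of what the synthetic portion of that training data could look like, here is a sketch that renders arbitrary text at arbitrary rotations with PIL. The font path, sizes, and legend list are placeholders; realistic embossed-metal rendering would take far more work:

```python
import random
from PIL import Image, ImageDraw, ImageFont

def synthetic_legend(text, size=256, font_path="DejaVuSans.ttf"):
    """Render one training sample: dark text on a light disc, randomly rotated.

    Returns (image, label) where the label carries the text and rotation
    angle, so a network can learn rotation-invariant reading.
    """
    img = Image.new("L", (size, size), color=220)          # light background
    draw = ImageDraw.Draw(img)
    draw.ellipse([8, 8, size - 8, size - 8], fill=200)     # crude "planchet"
    font = ImageFont.truetype(font_path, 28)               # placeholder font file
    draw.text((size // 4, size // 2), text, fill=40, font=font)
    angle = random.uniform(0, 360)
    img = img.rotate(angle, fillcolor=220)
    return img, {"text": text, "angle": angle}

# Illustrative legends only, not a real training set.
legends = ["LIBERTY", "E PLURIBUS UNUM", "REPUBLICA DE PANAMA"]
batch = [synthetic_legend(random.choice(legends)) for _ in range(8)]
```

The point is that the OCR component can be trained largely on generated data like this and then reused as an input stage for type recognition and label verification.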
@Sapyx said:
I think the core thing about A.I. grading, or any kind of computer-assisted grading, is not whether or not it will be internally consistent with itself - that is always going to be true, because an algorithm doesn't get tired, or have a bad day at the office, or get distracted with family issues, or any other human thing that can cause fluctuations when humans do things, the "subjectivity" that the website derides. The algorithms work, consistently: input x equals output y, and if you input the exact same x again, you should get the exact same y answer back, every time, so long as you don't alter the algorithms.
The problem is always going to be matching up the A.I.'s output with the already-established set of human output that is the numismatic grading standard. Because whether an A.I. would grade a coin "better" or "worse" than a human is irrelevant, if the answer is "it will grade them differently".
Which will, ultimately, create two different and competing grading standards. Which will in turn cause marketplace confusion for as long as the two systems remain in competition. The TPG-based system "works", because you can look up "the grade" in a reference book or website and get a price. Throw in an A.I. grading system, and you need separate tables for valuing the A.I. graded coins, with conversion between the two a matter for haggling and arguing.
The AIs do not necessarily give the same output from the same input always, at least not the current language based ones. Computers do but the AI "thinks" differently.
Classifiers and scoring systems will give identical output each time, assuming the input is identical and the network is not retrained.
That's a classic computer function. The AIs are not as linear in their "thinking". I use them daily at work. If I ask them the same type of question repeatedly, they do not always answer the same.
I develop software for certain medical imaging applications which has AI components to it. A given network, once trained, must produce identical results every time given the same input. Not all AI network architectures are generative in nature.
That's true. Fair enough. But the issue with the coin is that the inputs are similar, not identical.
@Sapyx said:
I think the core thing about A.I. grading, or any kind of computer-assisted grading, is not whether or not it will be internally consistent with itself - that is always going to be true, because an algorithm doesn't get tired, or have a bad day at the office, or get distracted with family issues, or any other human thing that can cause fluctuations when humans do things, the "subjectivity" that the website derides. The algorithms work, consistently: input x equals output y, and if you input the exact same x again, you should get the exact same y answer back, every time, so long as you don't alter the algorithms.
The problem is always going to be matching up the A.I.'s output with the already-established set of human output that is the numismatic grading standard. Because whether an A.I. would grade a coin "better" or "worse" than a human is irrelevant, if the answer is "it will grade them differently".
Which will, ultimately, create two different and competing grading standards. Which will in turn cause marketplace confusion for as long as the two systems remain in competition. The TPG-based system "works", because you can look up "the grade" in a reference book or website and get a price. Throw in an A.I. grading system, and you need separate tables for valuing the A.I. graded coins, with conversion between the two a matter for haggling and arguing.
The AIs do not necessarily give the same output from the same input always, at least not the current language based ones. Computers do but the AI "thinks" differently.
Classifiers and scoring systems will give identical output each time, assuming the input is identical and the network is not retrained.
That's a classic computer function. The AIs are not as linear in their "thinking". I use them daily at work. If I ask them the same type of question repeatedly, they do not always answer the same.
I develop software for certain medical imaging applications which has AI components to it. A given network, once trained, must produce identical results every time given the same input. Not all AI network architectures are generative in nature.
That's true. Fair enough. But the issue with the coin is the inputs are similar not identical.
If you repeat the same input -- data acquired that represents the coins -- to the network twice, you should expect identical results. If you have two different input datasets, then you may get slightly different results. If you can't control your data acquisition system sufficiently to get repeatable input, you have another problem.
@Sapyx said:
I think the core thing about A.I. grading, or any kind of computer-assisted grading, is not whether or not it will be internally consistent with itself - that is always going to be true, because an algorithm doesn't get tired, or have a bad day at the office, or get distracted with family issues, or any other human thing that can cause fluctuations when humans do things, the "subjectivity" that the website derides. The algorithms work, consistently: input x equals output y, and if you input the exact same x again, you should get the exact same y answer back, every time, so long as you don't alter the algorithms.
The problem is always going to be matching up the A.I.'s output with the already-established set of human output that is the numismatic grading standard. Because whether an A.I. would grade a coin "better" or "worse" than a human is irrelevant, if the answer is "it will grade them differently".
Which will, ultimately, create two different and competing grading standards. Which will in turn cause marketplace confusion for as long as the two systems remain in competition. The TPG-based system "works", because you can look up "the grade" in a reference book or website and get a price. Throw in an A.I. grading system, and you need separate tables for valuing the A.I. graded coins, with conversion between the two a matter for haggling and arguing.
The AIs do not necessarily give the same output from the same input always, at least not the current language based ones. Computers do but the AI "thinks" differently.
Classifiers and scoring systems will give identical output each time, assuming the input is identical and the network is not retrained.
That's a classic computer function. The AIs are not as linear in their "thinking". I use them daily at work. If I ask them the same type of question repeatedly, they do not always answer the same.
I develop software for certain medical imaging applications which has AI components to it. A given network, once trained, must produce identical results every time given the same input. Not all AI network architectures are generative in nature.
That's true. Fair enough. But the issue with the coin is the inputs are similar not identical.
If you repeat the same input -- data acquired that represents the coins -- to the network twice, you should expect identical results. If you have two different input datasets, then you may get slightly different results. If you can't control your data acquisition system sufficiently to get repeatable input, you have another problem.
Which is why I think the fingerprinting would work but the grading is going to be variable. It's not the repeatability of the input so much as every coin is going to map differently due to differences in depth of strike, evenness of strike, luster, etc. Two different "XF" coins are not going to provide identical inputs.
@Sapyx said:
I think the core thing about A.I. grading, or any kind of computer-assisted grading, is not whether or not it will be internally consistent with itself - that is always going to be true, because an algorithm doesn't get tired, or have a bad day at the office, or get distracted with family issues, or any other human thing that can cause fluctuations when humans do things, the "subjectivity" that the website derides. The algorithms work, consistently: input x equals output y, and if you input the exact same x again, you should get the exact same y answer back, every time, so long as you don't alter the algorithms.
The problem is always going to be matching up the A.I.'s output with the already-established set of human output that is the numismatic grading standard. Because whether an A.I. would grade a coin "better" or "worse" than a human is irrelevant, if the answer is "it will grade them differently".
Which will, ultimately, create two different and competing grading standards. Which will in turn cause marketplace confusion for as long as the two systems remain in competition. The TPG-based system "works", because you can look up "the grade" in a reference book or website and get a price. Throw in an A.I. grading system, and you need separate tables for valuing the A.I. graded coins, with conversion between the two a matter for haggling and arguing.
The AIs do not necessarily give the same output from the same input always, at least not the current language based ones. Computers do but the AI "thinks" differently.
Classifiers and scoring systems will give identical output each time, assuming the input is identical and the network is not retrained.
That's a classic computer function. The AIs are not as linear in their "thinking". I use them daily at work. If I ask them the same type of question repeatedly, they do not always answer the same.
I develop software for certain medical imaging applications which has AI components to it. A given network, once trained, must produce identical results every time given the same input. Not all AI network architectures are generative in nature.
That's true. Fair enough. But the issue with the coin is the inputs are similar not identical.
If you repeat the same input -- data acquired that represents the coins -- to the network twice, you should expect identical results. If you have two different input datasets, then you may get slightly different results. If you can't control your data acquisition system sufficiently to get repeatable input, you have another problem.
Which is why I think the fingerprinting would work but the grading is going to be variable. It's not the repeatability of the input so much as every coin is going to map differently due to differences in depth of strike, evenness of strike, luster, etc. Two different "XF" coins are not going to provide identical inputs.
Yes, different coins of the same grade will present differently because there is so much variability in what they look like fresh off the press. If you look at Morgan dollars, which arguably have the largest available population of uncirculated coins for a vintage type, not only do you have huge differences among different dates and mints for what is typical (for example, 78-CC vs 91-O vs 01-P), but even within a date you have different appearances as dies deteriorate. Another reason why the most effective applications for AI will not be grading, as I mentioned elsewhere in this thread.
For coins that have a three-point grading scale (70, 69, and No), you're simply counting and scoring defects, not grading like you do other coins. There may be a place for AI here, but this is just as easily done without it. The bigger challenge will be the data acquisition system that feeds the coin info into the grading system and doing it faster, cheaper, and with less risk to the coin itself than a modern bulk grader.
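Counting and scoring defects reduces to a few lines once a defect detector sits in front of it; the detector is the hard part. A toy sketch with invented weights and cutoffs, not any service's actual criteria:

```python
from dataclasses import dataclass

@dataclass
class Defect:
    kind: str        # e.g. "hairline", "spot", "contact mark"
    severity: float  # 0.0 (barely visible) .. 1.0 (severe), from the detector
    on_device: bool  # True if it sits on a focal area (cheek, field center)

def modern_grade(defects):
    """Toy 70 / 69 / reject decision from a list of detected defects.

    The weights and thresholds are placeholders; a real service would tune
    them against human graders' decisions, which is the ground-truth
    problem all over again.
    """
    if not defects:
        return "70"
    score = sum(d.severity * (2.0 if d.on_device else 1.0) for d in defects)
    if score < 0.5:
        return "70"              # defects too trivial to matter
    if score < 2.0:
        return "69"
    return "no grade / lower"

print(modern_grade([Defect("spot", 0.2, False)]))     # -> "70"
print(modern_grade([Defect("hairline", 0.6, True)]))  # -> "69"
```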
I'd be happy if they just got a digital fingerprint of the coin to somehow allow people to quickly check whether a slabbed coin is counterfeit or not. That, to me, would be a great thing: letting inexperienced people know for sure whether a slabbed coin was legit.
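Even something as simple as a perceptual hash could cover a lot of that use case: the grading service stores a hash of the coin at certification time, and anyone can re-image the slab and compare against the cert. A bare-bones average-hash sketch (the filenames are hypothetical, and a real fingerprint would have to survive lighting changes and slab glare far better than this):

```python
from PIL import Image

def average_hash(path, hash_size=16):
    """Boil an image down to a small bit string: shrink, grayscale,
    then mark each pixel as above or below the mean brightness."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming(a, b):
    """Number of differing bits; small means 'probably the same coin'."""
    return sum(x != y for x, y in zip(a, b))

# Hypothetical use: compare a fresh photo against the hash the grading
# service published alongside the cert number.
stored = average_hash("cert_12345678_obverse.jpg")
fresh = average_hash("my_photo_of_the_slab.jpg")
print("bit differences:", hamming(stored, fresh), "of", len(stored))
```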
Comments
Seems like a good idea, but we will have to see if it actually works.
Type collector, mainly into Seated. -formerly Ownerofawheatiehorde. Good BST transactions with: mirabela, OKCC, MICHAELDIXON, Gerard
My eyes are old but I'll still rely on PCGS and CAC to do the work. Of course, my tired eyes have the final decision.
Smoke and mirrors. I'll have to see it to believe it.
Mike
My Indians
Dansco Set
Many have tried to do computerized grading.
None have actually shown a product that is more accurate than the current human grading.
I don't see anything different here.
That could have been written by ChatGPT told to make a slide with a bunch of impressive-sounding latest-technology bullet points. The bit about a perpetual income stream is enough to red-flag the whole thing.
Keeper of the VAM Catalog • Professional Coin Imaging • Prime Number Set • World Coins in Early America • British Trade Dollars • Variety Attribution
The tools are already here for an AI grading program, especially for moderns.
I am sure the big two are working on something by now.
If it's as accurate and fast as expected, it could be marketed as a better and faster way to determine MS70 (referring to moderns).
Post-AI and pre-AI could be two entirely different perceived values pertaining to grading moderns.
Thanks for the feedback everybody. I ran across an ad on Facebook for that company, googled them, and found their website. It made me curious as to how legit they are, and whether the things they describe are really possible with the technology that exists today or if they are just scamming people.
Mr_Spud
And the answer is?
Mike
My Indians
Dansco Set
Could you put more buzz words into your description?
ANA 50 year/Life Member (now "Emeritus")
The answer is wait and see, but don’t hold your breath. This is based on a combination of what I suspect and the feedback that the others provided.
Mr_Spud
I would be extremely curious to see their formula for coin grading. To get AI to do grading they have to have a scientific formula for grading.
The substantial truth doctrine is an important defense in defamation law that allows individuals to avoid liability if the gist of their statement was true.
No, they have to have ground truth for grading so that a neural network can learn a formula for grading.
Keeper of the VAM Catalog • Professional Coin Imaging • Prime Number Set • World Coins in Early America • British Trade Dollars • Variety Attribution
What does that mean, a ground truth for grading?
The substantial truth doctrine is an important defense in defamation law that allows individuals to avoid liability if the gist of their statement was true.
They don't need a formula, they need exemplars, examples, a reference set. (I'm not sure what a "ground truth" was supposed to be, exactly.) The AI would compare the new example to the exemplars in order to "decide".
Of course, the AI is probably as likely as a human to make questionable decisions because of the inability to construct an actual formula. I don't think the people who think true AI will be less variable or "more correct" understand how variable a true AI could be and how very different two VF coins could be.
Ask your favorite AI the same question with slight variation and see how wide a range of responses you get. The neural networks are remarkably plastic.
We discussed some version of this on one of the NFT threads. I never could figure out how the "perpetual income stream" worked.
They are surface mapping the coin in 3D. That is data, but it won't be able to assess "eye appeal" and I'm not sure it could tell a weak strike from wear. If you started adding reflectance data as well, I suppose you could approach a decision.
The problem is that there is still a judgment call and I'm not sure we'd like the AI's judgment better than a team of graders. Other than differentiating modern 69s and 70s, there is always going to be a balancing of unique traits:
Is the toning an asset or a detriment?
Does the luster make up for a slightly distracting mark?
Is that exact mark in that exact location mildly distracting, moderately distracting, or a major distraction?
Digital fingerprinting isn’t too hard. Basically just need to run it through the appropriate image classification methods and get a value back. Then combine that with some other elements.
Coin grading can be done as well, but it is trickier. What I would do, though it would be prohibitively expensive, is generate a Siamese model using a standard set of coins for grade/type. Then run candidates through to see how similar they are and arrive at a grading vector. That isn't a perfect approach, but I think it would get a lot closer than other approaches I've heard.
I think the core thing about A.I. grading, or any kind of computer-assisted grading, is not whether or not it will be internally consistent with itself - that is always going to be true, because an algorithm doesn't get tired, or have a bad day at the office, or get distracted with family issues, or any other human thing that can cause fluctuations when humans do things, the "subjectivity" that the website derides. The algorithms work, consistently: input x equals output y, and if you input the exact same x again, you should get the exact same y answer back, every time, so long as you don't alter the algorithms.
The problem is always going to be matching up the A.I.'s output with the already-established set of human output that is the numismatic grading standard. Because whether an A.I. would grade a coin "better" or "worse" than a human is irrelevant, if the answer is "it will grade them differently".
Which will, ultimately, create two different and competing grading standards. Which will in turn cause marketplace confusion for as long as the two systems remain in competition. The TPG-based system "works", because you can look up "the grade" in a reference book or website and get a price. Throw in an A.I. grading system, and you need separate tables for valuing the A.I. graded coins, with conversion between the two a matter for haggling and arguing.
Roman emperor Marcus Aurelius, "Meditations"
Apparently I have been awarded the DPOTD twice.
It needs to be shown coins of specific grades and told what each coin's grade is. The "right answer" is the ground truth. You can see how we already have a problem here, since the effective grade of a coin being bought, sold, appraised, or kept is always a market grade, which is a proxy for value. The term "true market grade" is a bit of an oxymoron, of course, and an unstable ground truth will be the weak link in any AI classification system.
There needs to be enough different coins of enough different grades so that when it learns, it isn't just memorizing what one coin is (i.e., "overfitting"). The diversity of the training set needs to roughly reflect what the system will see when grading.
Next, you have to show the coin to the network being trained such that your ground truth is salient. A blurry moon shot of a coin you say is 63 may be the truth, but it's meaningless for training the network. You have to be consistent with how this data will be acquired and presented, too. Then when you use it to grade, you have to duplicate how the coin is acquired and fed to the system.
Keeper of the VAM Catalog • Professional Coin Imaging • Prime Number Set • World Coins in Early America • British Trade Dollars • Variety Attribution
I believe PCGS was surface scanning coins about 10 years ago. I'm pretty sure a company like PCGS will perfect this first, if it can be done. Personally I'm skeptical of a machine determining the difference between weak strike and wear, ignoring strikethroughs and die grease issues, and being able to effectively weigh all of the variables... strong luster with a weak strike and many bag marks vs. few bag marks with bad luster and an average strike (or any permutation of these factors) could probably both get an equal grade from a human. Right now, though, there is a lot of hype for any company that wants to throw all of those buzzwords into a presentation and ask for money.
Take the blockchain aspect. Not a big deal really. Does it matter to any of us that PCGS uses "normal" databases and not blockchains to store coin certs? I don't.
http://ProofCollection.Net
There are other uses for AI that PCGS is probably more interested in, including identifying types and checking for wrong labels on coins. These would be time savers and should reduce mistakes getting back to customers. The cost of handling a single returned mechanical error is much greater than the cost of grading one coin. Being able to quickly identify world types that may not have many experts would cut down on time spent nose in a book.
Besides, if the grading becomes permanently stable (or at least until the model is retrained), then that cuts into your regrade revenue stream.
Keeper of the VAM Catalog • Professional Coin Imaging • Prime Number Set • World Coins in Early America • British Trade Dollars • Variety Attribution
You don't need AI to do that. All you need is a proper algorithm. That technology is not new.
http://ProofCollection.Net
The AIs do not necessarily give the same output from the same input every time, at least not the current language-based ones. Computers do, but the AI "thinks" differently.
The other problem with AI coin grading is a common problem with AI Neural Networks in general, and that is that nobody can really explain how the output is derived from the input. You cannot be sure the network hasn't seized on some irrelevant difference in training data...
Just think back to how many people wish they could ask the graders "why"?
ANA 50 year/Life Member (now "Emeritus")
It would be great if they could come up with depth and height measurements from scanning the strike/details of a coin. The challenge/hunt would explode in search of the coin whose measurements break the records and signify it as the earliest/first coin struck.
And if they could measure the depth of the mirrors most VEDS coins come with.
And there would likely be depth differences with nicks, gouges and scratches that would show up in a scan of the surfaces.
Precise AI coin grading would be the answer to many serious collectors' prayers!
Maybe we'll see these machines lined up on the walls at the major coin shows. Pop in a coin and, with a whirl of bings, buzzes, dings and lights, out comes the answer!
.
Leo 😂
The more qualities observed in a coin, the more desirable that coin becomes!
My Jefferson Nickel Collection
Why not just do a 3-tier grading system: GOOD - BAD - UGLY, and have Clint certify it?
USN & USAF retired 1971-1993
Successful Transactions with more than 100 Members
Classifiers and scoring systems will give identical output each time, assuming the input is identical and the network is not retrained.
Keeper of the VAM Catalog • Professional Coin Imaging • Prime Number Set • World Coins in Early America • British Trade Dollars • Variety Attribution
No, but training a neural network for this is often easier than developing a custom algorithm.
Keeper of the VAM Catalog • Professional Coin Imaging • Prime Number Set • World Coins in Early America • British Trade Dollars • Variety Attribution
That's a classic computer function. The AIs are not as linear in their "thinking". I use them daily at work. If I ask them the same type of question repeatedly, they do not always answer the same.
I develop software for certain medical imaging applications which has AI components to it. A given network, once trained, must produce identical results every time given the same input. Not all AI network architectures are generative in nature.
Keeper of the VAM Catalog • Professional Coin Imaging • Prime Number Set • World Coins in Early America • British Trade Dollars • Variety Attribution
I'm not sure about that. "Identifying types and checking for wrong labels on coins" is just a standard pattern recognition and comparison task. It's far easier to program in the pattern to look for than to "teach" an AI and hope it "learns" correctly.
http://ProofCollection.Net
In my mind, the algorithm comes first.
The Nobel Prize in numismatics goes to whoever writes that coin grading algorithm.
The substantial truth doctrine is an important defense in defamation law that allows individuals to avoid liability if the gist of their statement was true.
I don't trust them AI people as far as I can throw them.
Pete
I’ve said it before and I’ll take the opportunity to say it again. AI grading means standards will constantly change. Increasingly little over time, but forever.
Doggedly collecting coins of the Central American Republic.
Visit the Society of US Pattern Collectors at USPatterns.com.
They've changed since I have been collecting.
As AI can write its own code as it learns, that is where it becomes fluid, with the potential for drastic evolution. This is where a thinly traded environment like classic coins becomes potentially unmanageable, IMO, especially in the higher mint state grades.
Standards have always changed. It goes with the territory of being ill-defined. AI grading simply means that the constant change has to be implemented differently from the way it is today.
Keeper of the VAM Catalog • Professional Coin Imaging • Prime Number Set • World Coins in Early America • British Trade Dollars • Variety Attribution
Even if the algorithm for grading is somehow created and perfected, there will still never be an algorithm for eye appeal.
RIP Mom- 1932-2012
Maybe, maybe not. To produce an algorithm (AI or otherwise) to identify types, I might start by training a network to become extremely good at optical character recognition of arbitrary orthographies with arbitrary character rotations. I wouldn't need coins as my only input data here, but it would help to have some. The rest of the input data could be synthetic, including synthesizing embossed lettering on a reflective surface. Once reliable, this network can produce inputs for other tasks, including type recognition, label verification, country of origin, anything that can benefit from being able to accurately read the text on a coin, even one that hasn't previously been seen during training. It is now merely a manageable component that can be integrated with others for higher level tasks.
Keeper of the VAM Catalog • Professional Coin Imaging • Prime Number Set • World Coins in Early America • British Trade Dollars • Variety Attribution
That's true. Fair enough. But the issue with the coin is that the inputs are similar, not identical.
If you repeat the same input -- data acquired that represents the coins -- to the network twice, you should expect identical results. If you have two different input datasets, then you may get slightly different results. If you can't control your data acquisition system sufficiently to get repeatable input, you have another problem.
Keeper of the VAM Catalog • Professional Coin Imaging • Prime Number Set • World Coins in Early America • British Trade Dollars • Variety Attribution
https://techxplore.com/news/2023-10-deep-neural-networks-dont-world.html
ANA 50 year/Life Member (now "Emeritus")
Which is why I think the fingerprinting would work but the grading is going to be variable. It's not the repeatability of the input so much as every coin is going to map differently due to differences in depth of strike, evenness of strike, luster, etc. Two different "XF" coins are not going to provide identical inputs.
Yes, different coins of the same grade will present differently because there is so much variability in what they look like fresh off the press. If you look at Morgan dollars, which arguably have the largest available population of uncirculated coins for a vintage type, not only do you have huge differences among different dates and mints for what is typical (for example, 78-CC vs 91-O vs 01-P), but even within a date you have different appearances as dies deteriorate. Another reason why the most effective applications for AI will not be grading, as I mentioned elsewhere in this thread.
For coins that have a three-point grading scale (70, 69, and No), you're simply counting and scoring defects, not grading like you do other coins. There may be a place for AI here, but this is just as easily done without it. The bigger challenge will be the data acquisition system that feeds the coin info into the grading system and doing it faster, cheaper, and with less risk to the coin itself than a modern bulk grader.
Keeper of the VAM Catalog • Professional Coin Imaging • Prime Number Set • World Coins in Early America • British Trade Dollars • Variety Attribution
I'd be happy if they just got a digital fingerprint of the coin to somehow allow people to quickly check whether a slabbed coin is counterfeit or not. That, to me, would be a great thing: letting inexperienced people know for sure whether a slabbed coin was legit.
Mr_Spud