Attempting to know the unknowable. Plain and simple.
Modern clad coins and bronze and zinc coins are in an absolute surplus today and have been for a very long time. Coins individually and en masse are very low value. Nowadays, people tend to set coins aside, and it may be months or years between usages for any individual coin. But we don't notice any shortfall because there are simply years and years of excess coins produced and pumped into the money and banking system. The coins are still there in people's hands, but they get set aside for a very long time between uses.
I do enjoy reading some of the practical reminiscences of guys slightly older than me who were teens and young men in the 1960s.
@jmlanzaf said:
Those are wonderful graphs that have zero meaning. You've essentially assumed a rate of attrition and then graphed it. Just for fun, double the rate of attrition and graph it again. It will be just as pretty and just as meaningless. The models do not contain the black swan events (composition changes, hoarding of key dates etc.)
Your comments indicate a lack of awareness of the power of statistics for systems involving large populations and how simple models often produce illuminating results. The curve for the Buffalo Nickel is consistent with my experiences as a coin collector in the 1960s. I wasn't alive when the other designs were being phased out, but the curves seem reasonable to me. If you think you can do better, I would be interested to see your results. I used a simple model that could admittedly be improved, but my interest was simply to gain some basic insights.
@BillDugan1959 said:
Attempting to know the unknowable. Plain and simple.
Modern clad coins and bronze and zinc coins are in an absolute surplus today and have been for a very long time. Coins individually and en masse are very low value. Nowadays, people tend to set coins aside, and it may be months or years between usages for any individual coin. But we don't notice any shortfall because there are simply years and years of excess coins produced and pumped into the money and banking system. The coins are still there in people's hands, but they get set aside for a very long time between uses.
I do enjoy reading some of the practical reminiscences of guys slightly older than me who were teens and young men in the 1960s.
What does any of this have to do with a model for coin populations before clad coins were introduced?
@jmlanzaf said:
Those are wonderful graphs that have zero meaning. You've essentially assumed a rate of attrition and then graphed it. Just for fun, double the rate of attrition and graph it again. It will be just as pretty and just as meaningless. The models do not contain the black swan events (composition changes, hoarding of key dates etc.)
Your comments indicate a lack of awareness of the power of statistics for systems involving large populations and how simple models often produce illuminating results. The curve for the Buffalo Nickel is consistent with my experiences as a coin collector in the 1960s. I wasn't alive when the other designs were being phased out, but the curves seem reasonable to me. If you think you can do better, I would be interested to see your results. I used a simple model that could admittedly be improved, but my interest was simply to gain some basic insights.
Wow. What is your background?
Your comments indicate a lack of awareness about the fact that only systems behaving randomly take advantage of the power of statistics.
To wit:
Assume you have a 3-story house. Main living floor. Bedrooms on the 2nd floor. Storage area in the attic. Your random statistical model would inform you that you spend 33% of your time on each floor, ignoring the fact that you don't randomly walk around your house just ending up wherever you end up. A proper model would include the fact that you likely only spend 1% of your time in the attic. Having 1 billion stories in your house does not fix the problem.
If you'd like another example: there are 10,000 addresses in your town. You work at one of them. If you were moving around randomly, you would only spend 1 out of 10,000 days at your actual work address. The good news is that you probably get to spend 500/10k days at a Starbucks.
You are assuming a strictly random model when there are many non-random events.
Even if it were random, you've chosen an attrition rate that may have no bearing on reality. The 4% attrition rate you are assuming for coins is, first of all, just an assumption. Why not 1%? Why not 5%? Again, having a sample size of billions doesn't fix the problem if the attrition rate should be 5% and not 4%, or whatever.
But, worse than that, it ignores the fact that removal from circulation is not strictly random all the time. In 1965, the attrition rate for coins was probably closer to 90%, not 4%, due to the gathering up of the silver.
1909 coins of all varieties disappeared from circulation at a MUCH higher rate than 1910 coins. 1950-D nickels had a one-year attrition rate of probably 90%.
The silver war nickels had a much higher attrition rate post-1945 than the nickel nickels.
Not that it should matter, but I have a Ph.D. in mathematics and 35 years of experience in mathematical modeling, applications of statistics, and data analysis. A simple analysis often sheds a great deal of light on a problem, and this is the case for coin attrition. The basic facts are correct in the model, and everything you propose is either irrelevant or would be a minor perturbation.
The above curve for Buffalo Nickels corresponds to a 1% attrition rate. As one would expect, it has the same qualitative features as the curve for the 4% case but with different levels. I decided to assume a 4% attrition rate in a previous post because (a) someone else suggested that value and (b) it produces a curve that is consistent with my experiences as a coin collector in the early 1960s. The effects of collectors on a few specific dates won't have much of an effect on the population of a design that was produced for many years.
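A minimal sketch of the kind of constant-attrition model being described (the thread never shows the code itself, so the population_curve helper and the mintage figures below are invented for illustration and are not the actual model or real mintage data; the idea is simply that each year the surviving population loses a fixed fraction and any new mintage is added):

def population_curve(mintages, attrition_rate, years_after=40):
    """Modeled population of one design, year by year, under a constant attrition rate."""
    pop = 0.0
    curve = []
    for year in range(len(mintages) + years_after):
        pop *= (1.0 - attrition_rate)      # constant fractional loss each year
        if year < len(mintages):
            pop += mintages[year]          # new coins of the design enter circulation
        curve.append(pop)
    return curve

# Hypothetical mintages (millions per year) for a design struck for ten years.
mintages = [30, 60, 40, 20, 50, 35, 25, 45, 55, 40]

for rate in (0.01, 0.04):
    curve = population_curve(mintages, rate)
    print(f"{rate:.0%} attrition: peak {max(curve):.0f}M, after 50 years {curve[-1]:.0f}M")

Changing the assumed rate moves the levels but, as noted above, leaves the qualitative rise-and-decay shape of the curve the same.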
@CaptHenway said:
During the Great Depression a lot of people spent family heirloom coins to survive. I wrote an article in COINage Magazine a few years back about how the Mint Reports of that era showed a jump in obsolete coins such as large cents and even a few half cents being returned to the Treasury for redemption. Other older coins circulated for a while until some collector grabbed them up again.
In 1933, the US Treasury received 360,090 US twenty-cent pieces to be withdrawn from circulation. In 1932 it was 65 pieces and 1934 it was 73 pieces.
Numismatist Ordinaire. See http://www.doubledimes.com for a free online reference for US twenty-cent pieces.
@cinclodes said:
The above curve for Buffalo Nickels corresponds to a 1% attrition rate. As one would expect, it has the same qualitative features as the curve for the 4% case but with different levels. I decided to assume a 4% attrition rate in a previous post because (a) someone else suggested that value and (b) it produces a curve that is consistent with my experiences as a coin collector in the early 1960s. The effects of collectors on a few specific dates won't have much of an effect on the population of a design that was produced for many years.
The curious thing here is that many people collected buffalo nickels. A significant portion of their 2+% attrition rate was caused by people setting aside singles, rolls, and collections of buffalo nickels as a hobby or a distraction from life and the economic depression. Many of these coins that were set aside are still around today, because attrition rates on collections are far lower than on coins in circulation.
Modern coins were not saved and they were not pulled out of circulation to make collections. The few collections that were made had a high attrition rate because everyone believes the coins are common and worthless. In addition to the lack of saved coins, the beat-up, worn-out, and badly made coins remaining in circulation have a much higher attrition rate.
People don't notice that things get used up and destroyed until they're just about all gone. The new series (cents, nickels, and quarters) serve to highlight this simple fact. Soon enough the old series will be gone from circulation!
Not that it should matter, but I have a Ph.D. in mathematics and 35 years of experience in mathematical modeling, applications of statistics, and data analysis. A simple analysis often sheds a great deal of light on a problem, and this is the case for coin attrition. The basic facts are correct in the model, and everything you propose is either irrelevant or would be a minor perturbation.
Really? The fact that 90% of silver coins left the market over 2 years is a "minor perturbation" of your fixed attrition model?
Is the fact that the attrition of wheat cents accelerated by an order of magnitude in the decade following the release of the memorial reverse a "minor perturbation", or is it "irrelevant"?
The fact that first-year and last-year attrition is an order of magnitude higher than other years, clearly irrelevant.
Your model is so simplistic as to be irrelevant and, in many cases, the entire model represents a "minor perturbation".
Your credentials remain suspect if you think that large data sets overcome the problem of non-random variations.
Why don't you apply the same model to human lives? Assume a 4% attrition rate and see if it accurately predicts anything. Even with 7 billion people in the sample size, your model is worse than useless, since death is not a random event. The "attrition rate" in your 80s will be significantly higher than in your teens. Your "attrition rate" during WW II will be significantly higher than in the 1950s and 1960s. Your "attrition rate" in developing countries will be significantly different than in developed countries, and the population is not evenly distributed among the different nations.
You could create a model. You have not created a model.
Run your model for the 1800s and see if it predicts the almost complete disappearance of silver during the Civil War. It won't, because it ignores non-random attrition. But, of course, 90% of the silver disappearing in 1861/1862 and then all of it re-emerging in 1865/1866 is just a "minor perturbation" or "irrelevant".
@CaptHenway said:
During the Great Depression a lot of people spent family heirloom coins to survive. I wrote an article in COINage Magazine a few years back about how the Mint Reports of that era showed a jump in obsolete coins such as large cents and even a few half cents being returned to the Treasury for redemption. Other older coins circulated for a while until some collector grabbed them up again.
In 1933, the US Treasury received 360,090 US twenty-cent pieces to be withdrawn from circulation. In 1932 it was 65 pieces and 1934 it was 73 pieces.
This is clearly a "minor perturbation" and "irrelevant". Graphs don't lie.
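To make the "black swan" objection concrete, here is a minimal sketch comparing a constant 4% rate with the same rate plus a single year of very heavy removal, such as the silver pull; the 90% spike, the spike year, and the 30-year horizon are illustrative assumptions, not measurements:

def survival(years, base_rate, spike_year=None, spike_rate=0.0):
    """Fraction of a cohort surviving after the given number of years."""
    remaining = 1.0
    for y in range(years):
        rate = spike_rate if y == spike_year else base_rate
        remaining *= (1.0 - rate)
    return remaining

constant = survival(30, 0.04)                                     # steady 4% loss
with_spike = survival(30, 0.04, spike_year=20, spike_rate=0.90)   # one 90% year

print(f"constant 4% for 30 years:  {constant:.1%} survive")       # about 29%
print(f"4% plus a single 90% year: {with_spike:.1%} survive")     # about 3%

A single bad year dominates everything the constant-rate version predicts, which is the substance of the objection.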
@jmlanzaf said:
Those are wonderful graphs that have zero meaning. You've essentially assumed a rate of attrition and then graphed it. Just for fun, double the rate of attrition and graph it again. It will be just as pretty and just as meaningless. The models do not contain the black swan events (composition changes, hoarding of key dates, etc.)
Which one of the grumpy old men do you most closely identify with, Matthau or Lemmon?
100% positive transactions with SurfinxHI, bigole, 1madman, collectorcoins, proofmorgan, Luke Marshall, silver pop, golden egg, point five zero,coin22lover, alohagary, blaircountycoin,joebb21
@jmlanzaf said:
Those are wonderful graphs that have zero meaning. You've essentially assumed a rate of attrition and then graphed it. Just for fun, double the rate of attrition and graph it again. It will be just as pretty and just as meaningless. The models do not contain the black swan events (composition changes, hoarding of key dates, etc.)
Which one of the grumpy old men do you most closely identify with, Matthau or Lemmon?
LOL. I'm not grumpy. The man questioned my credentials.
Anyone who quotes "large sample size" as a defense of ignoring non-random events NEEDS to be questioned.
100% positive transactions with SurfinxHI, bigole, 1madman, collectorcoins, proofmorgan, Luke Marshall, silver pop, golden egg, point five zero,coin22lover, alohagary, blaircountycoin,joebb21
Really? The fact that 90% of silver coins left the market over 2 years is a "minor perturbation" of your fixed attrition model?
There are major problems with your criticism.
First off, silver began being removed in 1962 and the pace gradually increased. This is virtually the same thing as saying that the attrition increased starting in 1962. Some silver was "collectible", but much of the silver removed was random. When silver went over and stayed over face value, silver was no longer a viable currency due to Gresham's Law. In 1967 the final 50% of the silver began disappearing, and then it went fast when the Fed started removing it in mid-1968.
Is the fact that the attrition of wheat cents accelerated by an order of magnitude in the decade following the release of the memorial reverse a "minor perturbation", or is it "irrelevant"?
This isn't consistent with what I saw. Wheat cents didn't really start disappearing much faster than older memorials until at least 1968 or so. Even then the attrition didn't get extremely high until they started getting "scarce" around 1973. They are still .2% but these are not continually circulating.
The fact that first-year and last-year attrition is an order of magnitude higher than other years, clearly irrelevant.
A lot of the coins pulled out in the first year are returned to circulation later. Note that 1909 Lincolns are usually seen heavily circulated and were quite common in the 1950s.
Your model is so simplistic as to be irrelevant and, in many cases, the entire model represents a "minor perturbation".
The model needs a lot of tweaking but it is still very instructive for people who are always asking why mintages are so high. They are high because old coins go away due to fire, flood, and misadventure.
Why don't you apply the same model to human lives.
You need to explain this concept to Egyptologists. They don't seem to get it.
Run your model for the 1800s and see if it predicts the almost complete disappearance of silver during the Civil War. It won't, because it ignores non-random attrition. But, of course, 90% of the silver disappearing in 1861/1862 and then all of it re-emerging in 1865/1866 is just a "minor perturbation" or "irrelevant".
Obviously attrition rates change and evolve over time. Valuable items tend to have a 1% rate and non-perishable things with very limited value can be 5% or even more (like 1983 pennies). Something with a low rate can increase and something with a very high rate can decrease. Again consider an MS-68 1983 penny. It started at 75% and is now down to less than 1%.
There are ways (observation) to adjust the numbers to better model reality, but even the crude numbers presented have some validity and can provide some insight into why rare coins are rare. Isn't that kinda what we're all here for?
I'm not going to dignify some of the recent comments with a reply, but I would like to thank those who made intelligent contributions to this discussion.
Really? The fact that 90% of silver coins left the market over 2 years is a "minor perturbation" of your fixed attrition model?
There are major problems with your criticism.
First off, silver began being removed in 1962 and the pace gradually increased. This is virtually the same thing as saying that the attrition increased starting in 1962. Some silver was "collectible", but much of the silver removed was random. When silver went over and stayed over face value, silver was no longer a viable currency due to Gresham's Law. In 1967 the final 50% of the silver began disappearing, and then it went fast when the Fed started removing it in mid-1968.
Is the fact that the attrition of wheat cents accelerated by an order of magnitude in the decade following the release of the memorial reverse a "minor perturbation", or is it "irrelevant"?
This isn't consistent with what I saw. Wheat cents didn't really start disappearing much faster than older memorials until at least 1968 or so. Even then the attrition didn't get extremely high until they started getting "scarce" around 1973. They are still .2% but these are not continually circulating.
The fact that first-year and last-year attrition is an order of magnitude higher than other years, clearly irrelevant.
A lot of the coins pulled out in the first year are returned to circulation later. Note that 1909 Lincolns are usually seen heavily circulated and were quite common in the 1950s.
Your model is so simplistic as to be irrelevant and, in many cases, the entire model represents a "minor perturbation".
The model needs a lot of tweaking but it is still very instructive for people who are always asking why mintages are so high. They are high because old coins go away due to fire, flood, and misadventure.
Why don't you apply the same model to human lives.
You need to explain this concept to Egyptologists. They don't seem to get it.
Run your model for the 1800s and see if it predicts the almost complete disappearance of silver during the Civil War. It won't, because it ignores non-random attrition. But, of course, 90% of the silver disappearing in 1861/1862 and then all of it re-emerging in 1865/1866 is just a "minor perturbation" or "irrelevant".
Obviously attrition rates change and evolve over time. Valuable items tend to have a 1% rate and non-perishable things with very limited value can be 5% or even more (like 1983 pennies). Something with a low rate can increase and something with a very high rate can decrease. Again consider an MS-68 1983 penny. It started at 75% and is now down to less than 1%.
There are ways (observation) to adjust the numbers to better model reality, but even the crude numbers presented have some validity and can provide some insight into why rare coins are rare. Isn't that kinda what we're all here for?
Actually, you confirm almost all of my criticisms. I didn't put a date on the wheat cent withdrawal, but it was not a constant attrition once they started being pulled. Whether silver was pulled between '62 and '67 or in '65/'66 is also a minor perturbation, as it again demonstrates that the attrition was not constant in that period.
1st year/last year hoarding is the smallest effect, but it is still an example of accelerated attrition.
I have no argument with the idea that coins undergo natural attrition. I don't even have a problem with the idea that coins undergo relatively constant attrition for long periods of time. However, that rate has been ASSUMED based on nothing.
The result is a model that doesn't tell you anything quantitative at all. Yes, it shows coins undergoing attrition at an arbitrarily assumed rate.
@cinclodes said:
I'm not going to dignify some of the recent comments with a reply, but I would like to thank those who made intelligent contributions to this discussion.
LMFAO
You started it by questioning my credentials. But, I'll leave it alone. I've made my point. I even posted some actual scientific studies on coin attrition. My work here is done.
I have no argument with the idea that coins undergo natural attrition. I don't even have a problem with the idea that coins undergo relatively constant attrition for long periods of time. However, that rate has been ASSUMED based on nothing.
I would be more interested in seeing best fit models assuming things like higher attrition at the end. Buffalos are interesting since the attrition was relatively constant but it was a lot higher than 1% and lower than 3%.
I still find the presented curves interesting since they demonstrate a surprising similarity no matter the assumed rate. This would explain why I've always had difficulty computing a rate for clads despite their relative constancy. The FED computation looks too high to me for the early years and too low for later years.
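On the "best fit" idea mentioned above, here is a minimal sketch of how one might back out a constant attrition rate from estimated survival fractions: under a constant rate r, survival(t) = (1 - r)**t, so ln(survival) is linear in age and a least-squares slope through the origin recovers the rate. The survival observations below are invented for illustration, not real survey data.

import math

# (age in years, estimated fraction surviving) -- hypothetical observations
observations = [(10, 0.75), (25, 0.48), (40, 0.30)]

num = sum(t * math.log(s) for t, s in observations)
den = sum(t * t for t, _ in observations)
slope = num / den                      # slope of ln(survival) vs. age, through the origin
rate = 1.0 - math.exp(slope)

print(f"implied constant attrition rate: {rate:.2%} per year")

The same framework can accommodate a rate that rises toward the end of a design's life by fitting separate rates to the early and late observations.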
I have no argument with the idea that coins undergo natural attrition. I don't even have a problem with the idea that coins undergo relatively constant attrition for long periods of time. However, that rate has been ASSUMED based on nothing.
I would be more interested in seeing best fit models assuming things like higher attrition at the end. Buffalos are interesting since the attrition was relatively constant but it was a lot higher than 1% and lower than 3%.
I still find the presented curves interesting since they demonstrate a surprising similarity no matter the assumed rate. This would explain why I've always had difficulty computing a rate for clads despite their relative constancy. The FED computation looks too high to me for the early years and too low for later years.
The shape is going to be approximately the same in all cases. The real issue is that it completely misstates "attrition" post design change. For example, somewhere around 40% or 50% of all wheat cents STILL EXIST. The 4% attrition rate would have roughly 60% being attrited in 25 years, but that's not what happened. You lost maybe 2 or 3% per year due to wear, damage and loss and then you had the remaining 40 or 50% get pulled from circulation in largely a 10 or 15 year period. At best, there were two different rates: pre-memorial and post-memorial.
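As a quick arithmetic check of the 25-year figure just quoted (the snippet below is only that calculation, using the 4% rate mentioned above, not anyone's model):

surviving = 0.96 ** 25
print(f"after 25 years at 4%: {surviving:.1%} surviving, {1 - surviving:.1%} attrited")
# roughly 36% surviving and 64% attrited, consistent with the "roughly 60%" figure above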
The same, in spades, for the silver coinage. Maybe there is a 2 or 3% attrition rate while the silver was standard. But there was something like a 20% or greater attrition rate starting in 1963 or '64 when they started moving to clad.
And if you really want to model survivorship, then the first-year, last-year effects are HUGE, as are other hoarding effects. Sure, maybe only 50% of all Jefferson nickels still exist, but the survival of 50-D nickels is probably 90%.
The model doesn't even consider the two different attrition rates: damage/loss vs. hoarding. What percentage of pre-1982 Memorial cents are out of circulation but being hoarded because of the copper?
If you remove the numbers on the axes, the graphs tell a partial QUALITATIVE story: a new design increases as a % of circulating coinage and then decreases when the design changes. But even the shape of the curve from that model is flawed as the % increases initially due to at least two MAJOR effects: minting of the new design and hoarding/removal of the old design. That hoarding/removal rate will NOT be the same as the normal damage/wear rate. Those two different rates are NOT in the model.
But you could easily graph the qualitative tale with a pencil and paper and no math. Draw a curved line up at date of introduction and then draw a curved line down at the date that a new design takes over. The rest of it borders on nonsense. And it is worse than useless for specific key date coins even if it is at all reasonable for a 1960s Lincoln cent during the following 20 or 30 year period.
You'd also need to consider the differing velocities for different denominations as well as the different way they were handled. Silver dollars, for example, barely circulated because of the size of the denomination. Cents and half cents were probably frequently lost due to carelessness. It was quite common to turn a large cent into a washer because the price was equivalent, but you would never have used a quarter or 50 cent piece for the same purpose. On the other hand, it was common to use silver dimes and half dimes as buttons but unheard of to use half dollars.
I think it is an interesting problem, but it is not represented by this model.
If you want an interesting currency note: the lifespan of the dollar bill has gone from 3 years to 6 years in the last 10 years due to changes in usage. I would bet that similar changes have occurred with coinage which argues against the constancy of the attrition.
Also from currency: $100 bills have a median lifespan of 15 years, $20 bills 8 years, $5 and $1 bills about 6 years. That is a very different rate of attrition that is tied strictly to the denomination and not the materials of manufacture. For coins, denomination should be an issue but there is also no doubt that a nickel is more durable than a Zincoln cent.
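The note-lifespan figures above can be turned into rough implied annual attrition rates: if half of a cohort is gone after its median lifespan m, a constant yearly rate r satisfies (1 - r)**m = 0.5. The conversion below is just that arithmetic, applied to the lifespans quoted above.

median_lifespans = {"$1 and $5": 6, "$20": 8, "$100": 15}   # years, from the figures above

for denom, m in median_lifespans.items():
    r = 1.0 - 0.5 ** (1.0 / m)
    print(f"{denom}: median lifespan {m} yr -> roughly {r:.1%} attrition per year")

That works out to roughly 11%, 8%, and 4.5% per year respectively, which underlines how strongly the implied rate depends on denomination.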
The shape is going to be approximately the same in all cases. The real issue is that it completely misstates "attrition" post design change. For example, somewhere around 40% or 50% of all wheat cents STILL EXIST. The 4% attrition rate would have roughly 60% being attrited in 25 years, but that's not what happened. You lost maybe 2 or 3% per year due to wear, damage and loss and then you had the remaining 40 or 50% get pulled from circulation in largely a 10 or 15 year period. At best, there were two different rates: pre-memorial and post-memorial.
The same, in spades, for the silver coinage. Maybe there is a 2 or 3% attrition rate while the silver was standard. But there was something like a 20% or greater attrition rate starting in 1963 or '64 when they started moving to clad.
And if you really want to model survivorship, then the first-year, last-year effects are HUGE, as are other hoarding effects. Sure, maybe only 50% of all Jefferson nickels still exist, but the survival of 50-D nickels is probably 90%.
The model doesn't even consider the two different attrition rates: damage/loss vs. hoarding. What percentage of pre-1982 Memorial cents are out of circulation but being hoarded because of the copper?
If you remove the numbers on the axes, the graphs tell a partial QUALITATIVE story: a new design increases as a % of circulating coinage and then decreases when the design changes. But even the shape of the curve from that model is flawed as the % increases initially due to at least two MAJOR effects: minting of the new design and hoarding/removal of the old design. That hoarding/removal rate will NOT be the same as the normal damage/wear rate. Those two different rates are NOT in the model.
But you could easily graph the qualitative tale with a pencil and paper and no math. Draw a curved line up at date of introduction and then draw a curved line down at the date that a new design takes over. The rest of it borders on nonsense. And it is worse than useless for specific key date coins even if it is at all reasonable for a 1960s Lincoln cent during the following 20 or 30 year period.
You'd also need to consider the differing velocities for different denominations as well as the different way they were handled. Silver dollars, for example, barely circulated because of the size of the denomination. Cents and half cents were probably frequently lost due to carelessness. It was quite common to turn a large cent into a washer because the price was equivalent, but you would never have used a quarter or 50 cent piece for the same purpose. On the other hand, it was common to use silver dimes and half dimes as buttons but unheard of to use half dollars.
I think it is an interesting problem, but it is not represented by this model.
If you want an interesting currency note: the lifespan of the dollar bill has gone from 3 years to 6 years in the last 10 years due to changes in usage. I would bet that similar changes have occurred with coinage which argues against the constancy of the attrition.
Also from currency: $100 bills have a median lifespan of 15 years, $20 bills 8 years, $5 and $1 bills about 6 years. That is a very different rate of attrition that is tied strictly to the denomination and not the materials of manufacture. For coins, denomination should be an issue but there is also no doubt that a nickel is more durable than a Zincoln cent.
I think a far more interesting graph would be something like the incidence of a 1968-D or 1969 quarter in circulation. These were easily found in 1970, with many even being Unc and most of the rest AU. Despite the huge mintages of '65 to '67, these were still common enough to have several examples in every handful. Over the years millions have been lost and destroyed, and further large mintages diluted their incidence. Now they are rarely seen (only one out of three quarters is even an eagle reverse), and when they do appear they are universally heavily worn and typically culls. The poor condition is in part actually caused by the selective removal by collectors since 1999, when sales of boards and folders increased with the introduction of the state quarters. It would be exceedingly difficult to see the increased attrition in the date because apparently only about two million have been pulled out for collections, and this is still a small fraction of both mintage and survivorship. But where it can be more easily seen is in the grade distribution. Where the common dates still lay out in a nice, even bell curve centered around VG, there is a lot of attenuation in the grades of the scarcer dates. The F's and VF's have been selectively removed for collections, leaving behind lower grades and culls.
In some ways the other denominations are even more interesting since they all have much higher attrition rates. It's getting a little tough to find any dime without a mint mark, so finding the low-mintage 1971 is not at all common. Since they have such a low velocity, when you finally do locate a '69 or '71 it just might be a nice VF or even XF.
There's always a lot to see watching coins in circulation, but the last 55 years have been especially fascinating since collectors have played such a tiny role in what's there and in what is not. Because of enormous mintages and lack of collector interest, the coins have just eroded away due to forces, processes, and the "tyranny of numbers". Every year another few billion coins are destroyed and another few billion made to replace them. WYSIWYG. But few people are looking.
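A minimal sketch of the grade-attenuation effect described above: start from a roughly bell-shaped grade distribution centered near VG, selectively remove a share of the collectible middle grades, and compare what remains in circulation. All of the percentages and removal shares below are invented for illustration, not observed distributions.

grades = ["AG", "G", "VG", "F", "VF", "XF"]
common_date = [10, 24, 30, 20, 11, 5]               # per 100 survivors (hypothetical)

removal_share = {"F": 0.6, "VF": 0.7}               # fraction pulled by collectors (assumed)
scarcer_date = [n * (1 - removal_share.get(g, 0.0)) for g, n in zip(grades, common_date)]

total = sum(scarcer_date)
for g, before, after in zip(grades, common_date, scarcer_date):
    print(f"{g:3s}  common date: {before:4.1f}%   scarcer date: {100 * after / total:4.1f}%")

The middle grades thin out while the low grades and culls come to dominate, which matches the observation about the scarcer dates.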