About 10,000 results in 1.43 seconds.


## Does the Bell theorem assume reality? | Page 7

... So we can rewrite the right-hand side of equation 1 as $$\frac{1}{N} \sum_n (A_n B_n + A_n B_n B_n C_n) = \frac{1}{N} \sum_n A_n B_n (1 + B_n C_n)$$ 3. Taking absolute values, we get: $$\frac{1}{N} \left|\sum_n A_n B_n (1 + B_n C_n)\right| \leq \frac{1}{N} \sum_n |A_n B_n|\,|1 + B_n C_n|$$ 4. Since $$|A_n B_n| =$$...
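The inequality in step 3 can be checked numerically: for ±1-valued sequences, $$|A_n B_n| = 1$$ and $$1 + B_n C_n \geq 0$$, so the bound holds term by term. A minimal sketch (the random sequences below are illustrative, not data from the thread):

```python
import random

# Sanity check: for any +/-1-valued lists A, B, C,
#   |avg(A_n B_n (1 + B_n C_n))| <= avg(|A_n B_n| |1 + B_n C_n|) = 1 + avg(B_n C_n)
random.seed(0)
N = 10_000
A = [random.choice([-1, 1]) for _ in range(N)]
B = [random.choice([-1, 1]) for _ in range(N)]
C = [random.choice([-1, 1]) for _ in range(N)]

lhs = abs(sum(a * b * (1 + b * c) for a, b, c in zip(A, B, C))) / N
rhs = 1 + sum(b * c for b, c in zip(B, C)) / N
assert lhs <= rhs + 1e-12
```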

## Basel III

https://en.wikipedia.org/wiki/Basel_III
...bank holding companies.[8] Liquidity requirements introduced two required liquidity ratios.[9] The "Liquidity Coverage Ratio" was supposed to require a bank to hold sufficient high-quality liquid assets to cover its total net cash outflows over 30 days. Mathematically it is expressed as follows: $$LCR=\frac{\mbox{High quality liquid assets}}{\mbox{Total net liquidity outflows over 30 days}} \geq 100\%$$ The Net Stable Funding Ratio was to require the available amount of stable funding to exceed the required amount of stable funding over a one-year period of extended stress.[10] US version of the Basel Liquidity Coverage Ratio requirements: On 24 October 2013, the Federal Reserve Board of Governor...
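The LCR formula above is a straightforward ratio test; a minimal sketch, with balance-sheet figures that are made up purely for illustration:

```python
# Minimal LCR check, directly mirroring the formula above.
def lcr(hqla, net_outflows_30d):
    """Liquidity Coverage Ratio as a fraction (1.0 corresponds to 100%)."""
    return hqla / net_outflows_30d

compliant = lcr(120.0, 100.0)   # 1.2, i.e. 120% coverage -> meets the floor
shortfall = lcr(80.0, 100.0)    # 0.8, i.e. 80% coverage  -> below 100%
```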

## Water supply and sanitation in the Philippines

https://en.wikipedia.org/wiki/Water_supply_and_sanitation_in_the_Philippines
...r Districts. In water districts, tariffs increased notably since 1996. The tariff structure is similar to the model used in Metro Manila, with an average tariff for the first 10 m³ and increasing tariffs for additional consumption.[31] At the end of 2006, the national average tariff for 30 m³ was US$0.36 per m³, which is more than double the 1996 level.[30] The NWRB found an average tariff of US$0.41 within a sample of 18 water districts in 2004, which is the highest average tariff of all management models. The average connection fee was US$55, somewhat lower than among private operators.[29] Metro Manila: In the capital region, an initial tariff is to be paid for the first 10 m³ consumed, with increasing blocks for additional consumption. Furthermore, consumers connected to sewerage pay an additional charge of 50% and all users must pay a 10% environmental surcharge.[32] For new consumers, a connection fee is charged, which was US$134 in April 2007 in the East Zone.[33]...
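The increasing-block structure described above can be sketched as a small billing function. All rates and block boundaries below are hypothetical placeholders, not actual NWRB or Metro Manila tariffs:

```python
# Hypothetical increasing-block water tariff: a flat charge covers the
# first 10 m3, and higher per-m3 rates apply to consumption above that.
def water_bill(m3, base=4.0, rate1=0.40, rate2=0.55):
    """base covers 0-10 m3; rate1 applies to 10-30 m3; rate2 above 30 m3."""
    if m3 <= 10:
        return base
    if m3 <= 30:
        return base + (m3 - 10) * rate1
    return base + 20 * rate1 + (m3 - 30) * rate2
```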

## Currency swap

https://en.wikipedia.org/wiki/Currency_swap
...ement 1: The British Petroleum Company will issue 5-year £100 million bonds paying 7.5% interest. It will then deliver the £100 million to the swap bank, who will pass it on to the U.S. Piper Company to finance the construction of its British distribution center. The Piper Company will issue 5-year $150 million bonds paying 10% interest. The Piper Company will then pass the $150 million to the swap bank, which will pass it on to the British Petroleum Company, who will use the funds to finance the construction of its U.S. refinery. Agreement 2: The British company, with its U.S. asset (refinery), will pay the 10% interest on...
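A sketch of the annual coupon flows in Agreement 1, using the notionals and rates from the excerpt (the 10% dollar rate is reconstructed from garbled text, so treat it as an assumption):

```python
# Annual coupons each side services through the swap bank (illustrative).
gbp_notional, gbp_rate = 100e6, 0.075   # BP issues GBP 100M bonds at 7.5%
usd_notional, usd_rate = 150e6, 0.10    # Piper issues USD 150M bonds at 10%

gbp_coupon = gbp_notional * gbp_rate    # GBP 7.5M/yr, serviced by Piper's side
usd_coupon = usd_notional * usd_rate    # USD 15M/yr, serviced by BP's side
```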

## Does the EPR experiment imply QM is incomplete? | Page 6

... $$|d\rangle$$ is the state that is spin-down. Let's suppose that $$|0\rangle$$ is the initial "ready" state of the device, and let $$|U\rangle$$ mean "measured spin-up" and $$|D\rangle$$ mean "measured spin down". Those are sometimes called "pointer" states. So the assumption that the device actually works as a measuring device is that: $$|u\rangle |0\rangle \Rightarrow |u\rangle |U\rangle$$ $$|d\rangle |0\rangle \Rightarrow |d\rangle |D\rangle$$ (where $$\Rightarrow$$ means "evolves into, taking into account the Schrodinger equation") By linearity of the Schrodinger equation, it follows that: $$(\alpha |u\rangle + \beta |d\rangle)|0\rangle \Rightarrow \alpha |u\rangle |U\rangle + \beta |d\rangle |D\rangle$$...
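The linearity step can be sanity-checked numerically by modeling the spin and pointer states as basis vectors and the measurement interaction as a linear map on the product space (the variable names here are illustrative, not from the thread):

```python
import numpy as np

# Spin states |u>, |d> and pointer states |0> ("ready"), |U>, |D>.
u, d = np.array([1.0, 0.0]), np.array([0.0, 1.0])
ready, U, D = np.eye(3)

# A linear map defined on the product basis by the measurement rules
#   |u>|0> -> |u>|U>   and   |d>|0> -> |d>|D>
M_op = (np.outer(np.kron(u, U), np.kron(u, ready))
        + np.outer(np.kron(d, D), np.kron(d, ready)))

# Apply it to a superposition (alpha|u> + beta|d>)|0>:
alpha, beta = 0.6, 0.8
out = M_op @ np.kron(alpha * u + beta * d, ready)

# Linearity alone forces the entangled outcome alpha|u>|U> + beta|d>|D>.
expected = alpha * np.kron(u, U) + beta * np.kron(d, D)
assert np.allclose(out, expected)
```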

## PF Random Thoughts Part 2 | Page 106

...wing in the wind. http://earth.nullschool.net/ Very nice, I wonder if I can make it into a wallpaper. Jan 8, 2014 #2,649 zoobyshoe: Mandelbroth said: It stands for Escherichia Coli. Which means it was invented by M.C. Escher, right? Jan 8, 2014 #2,650 Enigman: lendav_rott said: $$although\ it\ gets\ tiresome\ to\ add\\ \ one\ of\ these\ slashes\ after\ each\ and\ every\ word\ it\ considerably\ slowers\ my\ typing\ speed\ :(\\$$ Try using ## instead of [tex] and ~ instead of \ .

## Richard Feynman

https://en.wikipedia.org/wiki/Richard_Feynman
...ase of natural logarithms, e = 2.71828 ...), and found that the three filing cabinets where a colleague kept research notes all had the same combination. He left notes in the cabinets as a prank, spooking his colleague, Frederic de Hoffmann, into thinking a spy had gained access to them. Feynman's $380 monthly salary was about half the amount needed for his modest living expenses and Arline's medical bills, and they were forced to dip into her $3,300 in savings. On weekends he drove to Albuquerque to see Arline in a car borrowed from his friend Klaus Fuchs. Asked who at Los Alamos was most likely to be a spy, Fuchs mentioned Feynman's safe cracking and frequent trips to Albuquerque; Fuchs himself later confessed to spying for the Soviet...

## Risk-neutral measure

https://en.wikipedia.org/wiki/Risk-neutral_measure
... $1, the entry fee will be 1/number of tickets. For simplicity, we will consider the interest rate to be 0, so that the present value of $1 is $1. Thus the A_n(0)'s satisfy the axioms for a probability distribution. Each is non-negative and their sum is 1. This is the risk-neutral measure! Now it remains to show that it works as advertised, i.e. taking expected values with respect to this probability measure will give the right price at time 0. Suppose you have a security C whose price at time 0 is C(0). In the future, in a state i, its payoff will be C_i. Consider a portfolio P consisting of C_i amount of each Arrow security A_i. In the future, whatever state i occurs, then A_i pays $1 while the other Arrow securities pay...
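That replication argument fits in a few lines: with a zero interest rate, the Arrow-security prices act as probabilities, and the cost of the replicating portfolio equals the expected payoff. The state prices and payoffs below are made-up illustrative numbers:

```python
# Time-0 prices of Arrow securities A_i (each pays $1 in state i, else $0).
# With a zero interest rate these sum to 1, so they form a probability measure.
state_prices = [0.25, 0.45, 0.30]   # hypothetical; non-negative, sum to 1
payoffs      = [10.0, 0.0, 5.0]     # payoff C_i of security C in each state

# Replicating portfolio: hold C_i units of each A_i. Its cost today is the
# no-arbitrage price C(0) -- exactly the risk-neutral expected payoff.
C0 = sum(q * c for q, c in zip(state_prices, payoffs))
```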

## PF Random Thoughts Part 2 | Page 105

... $$\mbox{how is this supposed to work, though?}$$ aah, the webpage doesn't display the code right away, makes me think I messed something up. Last edited: Jan 7, 2014. Jan 7, 2014 #2,619 lisab: All this...

## Diffie–Hellman key exchange

https://en.wikipedia.org/wiki/Diffie–Hellman_key_exchange
... $$({\color{Blue}g}^{\color{Red}a}\bmod {\color{Blue}p})^{\color{Red}b}\bmod {\color{Blue}p} = ({\color{Blue}g}^{\color{Red}b}\bmod {\color{Blue}p})^{\color{Red}a}\bmod {\color{Blue}p}$$ Only a and b are kept secret. All the other values – p, g, g^a mod p, and g^b mod p – are sent in the clear. The strength of the scheme comes from the fact that g^{ab} mod p = g^{ba} mod p takes an extremely long time to compute by any known algorithm just from the knowledge of p, g, g^a mod p, and g^b mod p....
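The exchange can be sketched with Python's built-in three-argument `pow` (modular exponentiation). The tiny prime below is purely illustrative; a real deployment needs a large, well-chosen prime group:

```python
# Toy Diffie-Hellman exchange. p and g are public; a and b stay secret.
p, g = 23, 5
a, b = 6, 15                  # Alice's and Bob's private exponents

A = pow(g, a, p)              # Alice publishes g^a mod p
B = pow(g, b, p)              # Bob publishes g^b mod p

shared_alice = pow(B, a, p)   # (g^b)^a mod p
shared_bob   = pow(A, b, p)   # (g^a)^b mod p
assert shared_alice == shared_bob == 2   # both derive the same secret
```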

## Dual_EC_DRBG

https://en.wikipedia.org/wiki/Dual_EC_DRBG
...ve been interpreted as suggesting that the NSA backdoored Dual_EC_DRBG, with those making the allegation citing the NSA's work during the standardization process to eventually become the sole editor of the standard.[7] The early usage of Dual_EC_DRBG by RSA Security (for which NSA was later reported to have secretly paid $10 million) was cited by the NSA as an argument for Dual_EC_DRBG's acceptance into the NIST SP 800-90A standard.[2] RSA Security subsequently cited Dual_EC_DRBG's acceptance into the NIST standard as a reason they used Dual_EC_DRBG.[42] Daniel R. L. Brown's March 2006 paper on the security reduction of Dual_EC_DRBG mentions the need for more output truncation and a randomly chosen Q, but mostly in passing, and does not mention his conclusions from his patent that these two defects in Dual_EC_DRBG together can be used as a backdoor. Brown writes in the conclusion: "Therefore, the ECRNG should be a serious consideration, and its high efficiency makes it suitable even for constrained environments." Note that others have criticised Dual_EC_DRBG as being extremely slow, with Bruce Schneier concluding "It's too slow for anyone to willingly use it",[4] and Matthew Green saying Dual_EC_DRBG is "up to a thousand times slower" than the alternatives.[5] The potential for a backdoor in Dual_EC_DRBG was not widely publicised outside of internal standard group meetings. It was only after Dan Shumow and Niels Ferguson's 2007 presentation that the potential for a backdoor became widely known. Shumow and Ferguson had been tasked with implementing Dual_EC_DRBG for Microsoft, and at least Ferguson had discussed the possible backdoor in a 2005 X9 meeting.[15] Bruce Schneier wrote in a 2007 Wired article that Dual_EC_DRBG's flaws were so obvious that nobody would use Dual_EC_DRBG: "It makes no sense as a trap door: It's public, and rather obvious.
It makes no sense from an engineering perspective: It's too slow for anyone to willingly use it."[4] Schneier was apparently unaware that RSA Security had used Dual_EC_DRBG as the default in BSAFE since 2004. OpenSSL implemented all of NIST SP 800-90A, including Dual_EC_DRBG, at the request of a client. The OpenSSL developers were aware of the potential backdoor because of Shumow and Ferguson's presentation, and wanted to use the method included in the standard to choose a guaranteed non-backdoored P and Q, but were told that to get FIPS 140-2 validation they would have to use the default P and Q. OpenSSL chose to implement Dual_EC_DRBG despite its dubious reputation, for the sake of completeness, noting that OpenSSL tried to be complete and implemented many other insecure algorithms. OpenSSL did not use Dual_EC_DRBG as the default CSPRNG, and it was discovered in 2013 that a bug made the OpenSSL implementation of Dual_EC_DRBG non-functioning, meaning that no one could have been using it.[37] Bruce Schneier reported in December 2007 that Microsoft added Dual_EC_DRBG support to Windows Vista, though not enabled by default, and Schneier warned against the known potential backdoor.[43] Windows 10 and later silently replace calls to Dual_EC_DRBG with calls to CTR_DRBG based on AES.[44] On September 9, 2013, following the Snowden leak and the New York Times report on the backdoor in Dual_EC_DRBG, the National Institute of Standards and Technology (NIST) ITL announced that, in light of community security concerns, it was reissuing SP 800-90A as a draft standard and re-opening SP 800-90B/C for public comment.
NIST now "strongly recommends" against the use of Dual_EC_DRBG, as specified in the January 2012 version of SP 800-90A.[45][46] The discovery of a backdoor in a NIST standard has been a major embarrassment for the NIST.[47] RSA Security had kept Dual_EC_DRBG as the default CSPRNG in BSAFE even after the wider cryptographic community became aware of the potential backdoor in 2007, but there does not seem to have been a general awareness of BSAFE's usage of Dual_EC_DRBG as a user option in the community. Only after widespread concern about the backdoor was there an effort to find software which used Dual_EC_DRBG, of which BSAFE was by far the most prominent found. After the 2013 revelations, RSA security Chief of Technology Sam Curry provided Ars Technica with a rationale for originally choosing the flawed Dual EC DRBG standard as default over the alternative random number generators.[48] The technical accuracy of the statement was widely criticized by cryptographers, including Matthew Green and Matt Blaze.[28] On December 20, 2013, it was reported by Reuters that RSA had accepted a secret payment of$$ 10 million from the NSA to set the random number generator as the default in two of its encryption products.[2][49] On December 22, 2013, RSA posted a statement to its corporate blog "categorically" denying a secret deal with the NSA to insert a "known flawed random number generator" into its BSA...

...ption that arises periodically and many times recently across a number of threads is that we are all driven by motives such as wealth or power. Now I have no personal experience about being really wealthy. My income has varied between nothing and 350K/yr. The odd thing is that when I made the most $$, I was 1) driven like all get up and typically traded stocks at 4am, followed by a 10-12 hour work day during weekdays, and there was never a day when I did not work, even if that meant simply doing patient rounds on Sunday. My patients thought it was fabulous that they would see the same guy every day of the week. Nursing staff thought it was way hip, too, as they were never put in a position of guessing or consulting another doc. Heck, I developed grandiose delusions I was the MAN. But I was so busy making $$ and consumed with investing same that I had no life. I was completely uninformed (Russ W et al might argue nothing new there) but it was an empty materialistic fantasy world, ungrounded by anything of real importance/value. Likely I am still not a huge contributor to the betterment of mankind, but...

## W. Edwards Deming

https://en.wikipedia.org/wiki/W._Edwards_Deming
...services increased dramatically, and Deming continued consulting for industry throughout the world until his death at the age of 93. Ford Motor Company was one of the first American corporations to seek help from Deming. In 1981, Ford's sales were falling. Between 1979 and 1982, Ford had incurred $3 billion in losses. Ford's newly appointed Corporate Quality Director, Larry Moore, was charged with recruiting Deming to help jump-start a quality movement at Ford.[25] Deming questioned the company's culture and the way its managers operated. To Ford's surprise, Deming talked not about quality, but about management. He told Ford that management actions were responsible for 85% of all problems in developing better cars. In 1986, Ford came out with a profitable line of cars, the Taurus-Sable line. In a letter to Autoweek, Donald Petersen, then Ford chairman, said, "We are moving toward building a quality culture at Ford and the many changes that have been taking place here have their roots directly in Deming's teachings."[26] By 1986, Ford had become the most profitable American auto company. For the first time since the 1920s, its earnings had exceeded those of archrival General Motors (GM). Ford had come to lead the American automobile industry in improvements. Ford's following years' earnings confirmed that its success was not a fluke, for its earnings continued to exceed GM and Chrysler's. In 1982, Deming's book Quality, Productivity, and Competitive Position was published by the MIT Center for Advanced Engineering, and was renamed Out of the Crisis in 1986. In it, he offers a theory of management based on his famous 14 Points for Management. Management's failure to plan for the future brings about loss of market, which brings about loss of jobs. Management must be judged not only by the quarterly dividend, but also by innovative plans to stay in business, protect investment, ensure future dividends, and provide more jobs through improved products and services.
"Long-term commitment to new learning and new philosophy is required of any management that seeks transformation. The timid and the fainthearted, and the people that expect quick results, are doomed to disappointment." In 1982, Deming, along with Paul Hertz and Howard Gitlow of the University of Miami Graduate School of Business in Coral Gables, founded the W. Edwards Deming Institute for the Improvement of Productivity and Quality. In 1983, the institute trained consultants of Ernst and Whinney Management Consultants in the Deming teachings. E&W then founded its Deming Quality Consulting Practice which is still active today. His methods and workshops regarding Total Quality Management have had broad influence. For example, they were used to define how the U.S. Environmental Protection Agency's Underground Storage Tanks program would work.[27] Over the course of his career, Deming received dozens of academic awards, including another, honorary, PhD from Oregon State University. In 1987, he was awarded the National Medal of Technology: "For his forceful promotion of statistical methodology, for his contributions to sampling theory, and for his advocacy to corporations and nations of a general management philosophy that has resulted in improved product quality." In 1988, he received the Distinguished Career in Science award from the National Academy of Sciences.[13] Deming and his staff continued to advise businesses large and small. From 1985 through 1989, Deming served as a consultant to Vernay Laboratories, a rubber manufacturing firm in Yellow Springs, Ohio, with fewer than 1,000 employees. He held several week-long seminars for employees and suppliers of the small company where his infamous example "Workers on the Red Beads" spurred several major changes in Vernay's manufacturing processes. Deming joined the Graduate School of Business at Columbia University in 1988. In 1990, during his last year, he founded the W. 
Edwards Deming Center for Quality, Productivity, and Competitiveness at Columbia Business School to promote operational excellence in business through the development of research, best practices and strategic planning. In 1990, Marshall Industries (NYSE:MI, 1984–1999) CEO Robert Rodin trained with the then 90-year-old Deming and his colleague Nida Backaitis. Marshall Industries' dramatic transformation and growth from $400 million to $1.8 billion in sales was chronicled in Deming's last book The New Economics, a Harvard Case Study, and Rodin's book, Free, Perfect and Now. In 1993, Deming published his final book, The New Economics for Industry, Government, Education, which included the System of Profound Knowled...

## James E. Lewis

https://en.wikipedia.org/wiki/James_E._Lewis
... $25,000. In June 1968, the Baltimore Board of Recreation and Parks approved the location of the statue to be placed at Battle Monument Square. On October 3 of that year, the Baltimore Art Commission also approved of the statue's creation.[22] The choice of location incited a major controversy on the Baltimore political scene. Harry D. Kaufman, a member of the Park Board,[23] criticized the fact that the statue was going to be of an unidentified black male, arguing that it was a tribute to a race as opposed to an individual. He suggested that the statue pay tribute to Crispus Attucks, Harriet Tubman, or Doris Miller instead.[24] Additional arguments were also raised by the General Society of the War of 1812, the Constellation Committee, and the Star-Spangled Banner Flag House.[25] Some concerns were raised about the location of the statue, which was a plaza dedicated to the fallen soldiers of Fort McHenry, which some believed would change the scope and meaning of the site.[23] The work was also criticized for its choice to dress the soldier in modern clothing. Despite the criticism, Lewis refused to meet with his opponents to discuss any changes to the statue or location.[26] The work was completed by famed New York foundry Roman Bronze Works[27] in December 1971.[25] Lewis had requested that the city pay for the work's pedestal,[24] to be made of brick and marble[25] and costing around $500,[18] though the city did not approve this.[24] The final cost of the work was about $30,000.[9] The statue was erected in Battle Monument Square on May 30, 1972 and was covered in a black fabric. Weeks before the official unveiling of the statue, a vandal destroyed the fabric and exposed the...

## How to solve this without resorting to inertial forces

...upon a horizontal force F, both will accelerate (I can, without loss of generality, assume the big block accelerates to the right). I just wanted to find the acceleration of each block with respect to the ground, which is frictionless, as is the interface between the blocks. Homework Equations $$\sum \vec{F} = m \vec{a}$$ The Attempt at a Solution Ok, reposting here as this looks too much like a h/w problem, although it's not :D I managed to solve the problem by first isolating the triangular block and finding its horizontal acceleration. It was crucial to use the constraint of motion along the ground only. Likewise, when I go to the non-inertial frame of reference of the accelerating block to analyze the FBD of the smaller block, I use the constraint of motion along the incline. From there, I can find the magnitude of the normal force between the blocks, BUT only by also using the *fictitious* inertial force that must be included when working in a non-inertial frame of reference. THIS IS WHAT BOTHERS ME. Calling $$n$$ the normal force between the blocks, and $$n_g$$ the normal force between the ground and the big block, the FBD of the triangular block gives me $$\sum F_x = F - n \sin\theta = M a_M$$ $$\sum F_y = n_g - Mg - n \cos\theta = 0$$ The FBD of the small block *once I hop onto the non-inertial frame of reference of the accelerating incline* gives me *see figure* $$\sum F_x = mg \sin\theta - F_{\text{inert}} \cos\theta = m a_m$$...
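One way to answer the thread's question is to stay entirely in the inertial ground frame: write Newton's second law for each body there and add the kinematic constraint that the small block's acceleration relative to the wedge lies along the incline. No fictitious force is needed. Below is a minimal sketch of that approach using sympy; the symbol names (A for the wedge's acceleration, a_x/a_y for the block's, n for the normal force) are my own choices, not the original poster's, and the sign conventions assume the incline descends to the right with F pushing the wedge rightward.

```python
# Ground-frame (inertial) treatment of the wedge-and-block problem:
# Newton's second law for each body plus the surface-contact constraint.
import sympy as sp

M, m, F, g, theta = sp.symbols('M m F g theta', positive=True)
A, ax, ay, n = sp.symbols('A a_x a_y n')  # wedge accel, block accel components, normal force

eqs = [
    sp.Eq(F - n*sp.sin(theta), M*A),      # wedge, horizontal: applied force minus normal reaction
    sp.Eq(n*sp.sin(theta), m*ax),         # block, horizontal: only the normal force acts sideways
    sp.Eq(n*sp.cos(theta) - m*g, m*ay),   # block, vertical: normal force against gravity
    sp.Eq(ay, -(ax - A)*sp.tan(theta)),   # constraint: block's acceleration relative to the wedge
                                          # points along the incline surface
]
sol = sp.solve(eqs, [A, ax, ay, n], dict=True)[0]

# Sanity check against the classic F = 0 wedge problem, whose normal force
# is known to be n = m*M*g*cos(theta) / (M + m*sin(theta)**2):
print(sp.simplify(sol[n].subs(F, 0) - m*M*g*sp.cos(theta)/(M + m*sp.sin(theta)**2)))  # 0
```

Because the constraint is imposed as a relation between the ground-frame accelerations, the fictitious force never appears; it only arises if the constraint is instead expressed in the wedge's accelerating frame.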
$$17 billion in savings to Six Sigma.[5] Other early adopters of Six Sigma include Honeywell and General Electric, where Jack Welch introduced the method.[6]...

## Six Sigma

https://en.wikipedia.org/wiki/Six_Sigma
...ed to determine an appropriate sigma level for each of their most important processes and strive to achieve these. As a result of this goal, it is incumbent on management of the organization to prioritize areas of improvement. "Six Sigma" was registered as a service mark on June 11, 1991. In 2005 Motorola attributed over US$17 billion in savings to Six Sigma.[5] Other early adopters of Six Sigma include Honeywell and General Electric, where Jack Welch introduced the method.[6] By the late 1990s, about two-thirds of the Fortune 500 organizations had begun Six Sigma initiatives with the aim of reducing costs and improving quality.[7] Some practitioners have since combined Six Sigma ideas with lean manufacturing to create a methodology named Lean Six Sigma.[8] The Lean Six Sigma methodology views lean manufacturing, which addresses process flow and waste issues, and Six Sigma, with its focus on variation and design, as complementary disciplines aimed at promoting "business and operational excellence".[8] In 2011, the International Organization for Standardization (ISO) published the first standard, ISO 13053:2011, defining a Six Sigma process.[9] Other standards have been created mostly by universities or companies with first-party certification programs for Six Sigma. Difference from lean management Lean management and Six Sigma are two concepts that share similar methodologies and tools. Both programs are Japanese-influenced, but they are different programs. Lean management focuses on eliminating waste using a set of proven standardized tools and methodologies that target organizational efficiencies while integrating a performance improvement system used by everyone, whereas Six Sigma focuses on eliminating defects and reducing variation. Both systems are driven by data, though Six Sigma is much more dependent on accurate data. Methodologies Six Sigma projects follow two project methodologies, inspired by Deming's Plan–Do–Study–Act Cycle. 
These methodologies, composed of five phases each, bear the acronyms DMAIC and DMADV.[7] DMAIC ("duh-may-ick") is used for projects aimed at improving an existing business process.[7] DMADV ("duh-mad-vee") is used for projects aimed at creating new product or process designs.[7] DMAIC The five steps of DMAIC The DMAIC project methodology has five phases: Define the system, the voice of the customer and their requirements, and the project goals, specifically. Measure key aspects of the current process and collect relevant data; calculate the 'as-is' process capability. Analyze the data to investigate and verify cause-and-effect relationships. Determine what the relationships are, and attempt to ensure that all factors have been considered. Seek out the root cause of the defect under investigation. Improve or optimize the current process based upon data analysis, using techniques such as design of experiments, poka yoke (mistake proofing), and standard work, to create a new, future-state process. Set up pilot runs to establish process capability. Control the future-state process to ensure that any deviations from the target are corrected before they result in defects. Implement control systems such as statistical process control, production boards, and visual workplaces, and continuously monitor the process. This process is repeated until the desired quality level is obtained. Some organizations add a Recognize step at the beginning, which is to recognize the right problem to work on, yielding an RDMAIC methodology.[10] DMADV or DFSS The five steps of DMADV The DMADV project methodology, also known as DFSS ("Design For Six Sigma"),[7] features five phases: Define design goals that are consistent with customer demands and the enterprise strategy. Measure and identify CTQs (characteristics that are Critical To Quality), measure product capabilities and production process capability, and measure risks. 
Analyze to develop and design alternatives. Design an improved alternative, best suited per the analysis in the previous step. Verify the design, set up pilot runs, implement the production process, and hand it over to the process owner(s). Quality management tools and methods Within the individual phases of a DMAIC or DMADV project, Six Sigma utilizes many established quality-management tools that are also used outside Six Sigma. The following table shows an overview of the main methods used. Implementation roles One key innovation of Six Sigma involves the absolute "professionalizing" of quality management functions. Prior to Six Sigma, quality management in practice was largely relegated to the production floor and to statisticians in a separate quality department. Formal Six Sigma programs adopt a kind of elite ranking terminology (similar to some martial arts systems, like judo) to define a hierarchy (and special career path) that includes all business functions and levels. Six Sigma identifies several key roles for its successful implementation.[11] Executive Leadership includes the CEO and other members of top management. They are responsible for setting up a vision for Six Sigma implementation. They also empower the other role holders with the freedom and resources to explore new ideas for breakthrough improvements by transcending departmental barriers and overcoming inherent resistance to change.[12] Champions take responsibility for Six Sigma implementation across the organization in an integrated manner. The Executive Leadership draws them from upper management. Champions also act as mentors to Black Belts. Master Black Belts, identified by Champions, act as in-house coaches on Six Sigma. They devote 100% of their time to Six Sigma. They assist Champions and guide Black Belts and Green Belts. Apart from statistical tasks, they spend their time ensuring consistent application of Six Sigma across various functions and departments. 
Black Belts operate under Master Black Belts to apply the Six Sigma methodology to specific projects. They also devote 100% of their time to Six Sigma. They primarily focus on Six Sigma project execution and special leadership with special tasks, whereas Champions and Master Black Belts focus on identifying projects/functions for Six Sigma. Green Belts are the employees who take up Six Sigma implementation along with their other job responsibilities, operating under the guidance of Black Belts. According to proponents of the system, special training is needed[13] for all of these practitioners to ensure that they follow the methodology and use the data-driven approach correctly. Some organizations use additional belt colours, such as Yellow Belts for employees who have basic training in Six Sigma tools and generally participate in projects, and White Belts for those locally trained in the concepts who do not participate in the project team. Orange Belts are also mentioned for special cases.[14] Certification General Electric and Motorola developed certification programs as part of their Six Sigma implementation, verifying individuals' command of the Six Sigma methods at the relevant skill level (Green Belt, Black Belt, etc.). Following this approach, many organizations in the 1990s started offering Six Sigma certifications to their employees. 
In 2008, Motorola University co-developed, with Vative and the Lean Six Sigma Society of Professionals, a set of comparable certification standards for Lean Certification.[7][15] Criteria for Green Belt and Black Belt certification vary; some companies simply require participation in a course and a Six Sigma project.[15] There is no standard certification body, and different certification services are offered by various quality associations and other providers for a fee.[16][17] The American Society for Quality, for example, requires Black Belt applicants to pass a written exam and to provide a signed affidavit stating that they have completed two projects, or one project combined with three years' practical experience in the body of knowledge.[15][18] Etymology of "six sigma process" The term "six sigma process" comes from the notion that if one has six standard deviations between the process mean and the nearest specification limit, as shown in the graph, practically no items will fail to meet specifications.[3] This is based on the calculation method employed in process capability studies. Capability studies measure the number of standard deviations between the process mean and the nearest specification limit in sigma units, represented by the Greek letter σ (sigma). As the process standard deviation goes up, or the mean of the process moves away from the center of the tolerance, fewer standard deviations will fit between the mean and the nearest specification limit, decreasing the sigma number and increasing the likelihood of items outside specification. One should also note that the calculation of sigma levels for process data is independent of the data being normally distributed. One criticism of Six Sigma is that practitioners using this approach spend a lot of time transforming data from non-normal to normal using transformation techniques. 
Sigma levels can, however, be determined for process data that shows evidence of non-normality.[3] Role of the 1.5 sigma shift Experience has shown that processes usually do not perform as well in the long term as they do in the short term.[3] As a result, the number of sigmas that will fit between the process mean and the nearest specification limit may well drop over time, compared to an initial short-term study.[3] To account for this real-life increase in process variation over time, an empirically based 1.5 sigma shift is introduced into the calculation.[3][19] According to this idea, a process that fits 6 sigma between the process mean and the nearest specification limit in a short-term study will in the long term fit only 4.5 sigma – either because the process mean will move over time, or because the long-term standard deviation of the process will be greater than that observed in the short term, or both.[3] Hence the widely accepted definition of a six sigma process is a process that produces 3.4 defective parts per million opportunities (DPMO). This is based on the fact that a normally distributed process will have 3.4 parts per million outside the limits when the limits are six sigma from the "original" mean of zero and the process mean is then shifted by 1.5 sigma (so that the six sigma limits are no longer symmetrical about the mean).[3] The former six sigma distribution, when under the effect of the 1.5 sigma shift, is commonly referred to as a 4.5 sigma process. The failure rate of a six sigma distribution with the mean shifted 1.5 sigma is not equivalent to the failure rate of a 4.5 sigma process with the mean centered on zero.[3] This allows for the fact that special causes may result in a deterioration in process performance over time, and is designed to prevent underestimation of the defect levels likely to be encountered in real-life operation.[3] The role of the sigma shift is mainly academic. 
The purpose of six sigma is to generate organizational performance improvement. It is up to the organization to determine, based on customer expectations, what the appropriate sigma level of a process is. The sigma value serves as a comparative figure to determine whether a process is improving, deteriorating, stagnant, or non-competitive with others in the same business. Six sigma (3.4 DPMO) is not the goal of all processes. Sigma levels A control chart depicting a process that experienced a 1.5 sigma drift in the process mean toward the upper specification limit starting at midnight. Control charts are used to maintain 6 sigma quality by signaling when quality professionals should investigate a process to find and eliminate special-cause variation. The table below gives long-term DPMO values corresponding to various short-term sigma levels.[20][21] These figures assume that the process mean will shift by 1.5 sigma toward the side with the critical specification limit. In other words, they assume that after the initial study determining the short-term sigma level, the long-term Cpk value will turn out to be 0.5 less than the short-term Cpk value. So, for example, the DPMO figure given for 1 sigma assumes that the long-term process mean will be 0.5 sigma beyond the specification limit (Cpk = −0.17), rather than 1 sigma within it, as it was in the short-term study (Cpk = 0.33). Note that the defect percentages indicate only defects exceeding the specification limit to which the process mean is nearest. Defects beyond the far specification limit are not included in the percentages. 
The formula used here to calculate the DPMO is thus $$\text{DPMO} = 1{,}000{,}000 \cdot \left(1 - \Phi(\text{level} - 1.5)\right)$$ where $$\Phi$$ is the standard normal cumulative distribution function.

| Sigma level | Sigma (with 1.5σ shift) | DPMO | Percent defective | Percentage yield | Short-term Cpk | Long-term Cpk |
|---|---|---|---|---|---|---|
| 1 | −0.5 | 691,462 | 69% | 31% | 0.33 | −0.17 |
| 2 | 0.5 | 308,538 | 31% | 69% | 0.67 | 0.17 |
| 3 | 1.5 | 66,807 | 6.7% | 93.3% | 1.00 | 0.5 |
| 4 | 2.5 | 6,210 | 0.62% | 99.38% | 1.33 | 0.83 |
| 5 | 3.5 | 233 | 0.023% | 99.977% | 1.67 | 1.17 |
| 6 | 4.5 | 3.4 | 0.00034% | 99.99966% | 2.00 | 1.5 |
| 7 | 5.5 | 0.019 | 0.0000019% | 99.9999981% | 2.33 | 1.83 |

Software Application Six Sigma mostly finds application in large organizations.[22] An important factor in the spread of Six Sigma was GE's 1998 announcement of $350 million in savings, a figure that later grew to more than $1 billion.[22] According to industry consultants like Thomas Pyzdek and John Kullmann, companies with fewer than 500 employees are less suited to Six Sigma implementation, or need to adapt the standard approach to make it work for them.[22] Six Sigma, however, contains a large number of tools and techniques that work well in small to mid-size organizations. The fact that an organization is not big enough to be able to afford Black Belts does not diminish its ability to make improvements using this set of tools and techniques. The infrastructure described as necessary to support Six Sigma is a result of the size of the organization rather than a requirement of Six Sigma itself.[22] Although the scope of Six Sigma differs depending on where it is implemented, it can successfully deliver its benefits to different applications. Manufacturing After its first application at Motorola in the late 1980s, other internationally recognized firms have recorded large savings after applying Six Sigma. Examples include Johnson and Johnson, with $600 million of reported savings, and Texas Instruments, which saved over...
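The DPMO formula and the Cpk columns can be reproduced directly; below is a minimal sketch using the standard normal CDF built from math.erf (the function names here are my own, not standard Six Sigma tooling).

```python
# Long-term DPMO and Cpk for a given short-term sigma level,
# with the empirical 1.5 sigma shift described above.
from math import erf, sqrt

def phi(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def dpmo(sigma_level: float, shift: float = 1.5) -> float:
    """Long-term defects per million opportunities for a short-term sigma level."""
    return 1_000_000 * (1.0 - phi(sigma_level - shift))

def short_term_cpk(sigma_level: float) -> float:
    """Cpk = distance to nearest specification limit in sigmas, divided by 3."""
    return sigma_level / 3.0

def long_term_cpk(sigma_level: float, shift: float = 1.5) -> float:
    """Short-term Cpk minus shift/3 (0.5 for the standard 1.5 sigma shift)."""
    return (sigma_level - shift) / 3.0

print(round(dpmo(6), 1))  # 3.4, the classic six sigma figure
```

Evaluating dpmo at levels 1 through 7 reproduces the DPMO column of the table above.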
$100 Dream House. Mattel announced that it would redesign the house in the future to accommodate the doll.[50][51] In 2010, Barbie was also criticized for a children's book called Barbie: I Can Be A Computer Engineer, which portrayed Barbie as a game designer who was not technically sophisticated and needed boys' help to do game programming. The company then promptly responded to criticism of gender-role stereotyping by redesigning a "Computer Engineer Barbie" who was a game programmer rather than a designer.[52] Bad influence concerns In July 1992, Mattel released Teen Talk Barbie, which spoke a number of phrases including "Will we ever have enough clothes?", "I love shopping!", and "Wanna have a pizza party?" Each doll was programmed to say four out of 270 possible phrases, so that no two given dolls were likely to be the same (the number of possible combinations is 270!/(266! 4!) = 216,546,345). One of these 270 phrases was "Math class is tough!". Although only about 1.5% of all the dolls sold said the phrase, it led to criticism from the American Association of University Women. In October 1992, Mattel announced that Teen Talk Barbie would no longer say the phrase, and offered a swap to anyone who owned a doll that did.[53] In 2002, Mattel introduced a line of pregnant Midge (and baby) dolls, but this Happy Family line was quickly pulled from the market due to complaints that she promoted teen pregnancy, even though by that time Barbie's friend Midge was supposed to be a married adult.[54] In September 2003, the Middle Eastern country of Saudi Arabia outlawed the sale of Barbie dolls and franchises, stating that they did not conform to the ideals of Islam. The Committee for the Promotion of Virtue and the Prevention of Vice warned, "Jewish Barbie dolls, with their revealing clothes and shameful postures, accessories and tools are a symbol of decadence to the perverted West. 
Let us beware of her dangers and be careful."[55] The 2003 Saudi ban was temporary.[56] In Muslim-majority nations, there is an alternative doll called Fulla, which was introduced in November 2003 and is equivalent to Barbie, but is designed specifically to represent traditional Islamic values. Fulla is not manufactured by the Mattel Corporation (although Mattel still licenses Fulla dolls and franchises for sale in certain markets), and (as of January 2021) the Barbie brand is still available in other Muslim-majority countries including Egypt and Indonesia.[57] In Iran, the Sara and Dara dolls, which were introduced in March 2002, are available as an alternative to Barbie, even though they have not been as successful.[58] In November 2014, Mattel received criticism over the book I Can Be a Computer Engineer, which depicted Barbie as being inept at computers and requiring that her two male friends complete all of the necessary tasks to restore two laptops after she accidentally infects her and her sister's laptops with a malware-laced USB flash drive.[59] Critics complained that the book was sexist, as other books in the I Can Be... series depicted Barbie as someone who was competent in those jobs and did not require outside assistance from others.[60] Mattel later removed the book from sale on Amazon in response to the criticism.[61] Safety concerns In March 2000, stories appeared in the media claiming that the hard vinyl used in vintage Barbie dolls could leak toxic chemicals, posing a danger to children playing with them. The claim was described as an overreaction by Joseph Prohaska, a professor at the University of Minnesota Duluth. A modern Barbie doll has a body made from ABS plastic, while the head is made from soft PVC.[62][63] In July 2010, Mattel released "Barbie Video Girl", a Barbie doll with a pinhole video camera in its chest, enabling clips of up to 30 minutes to be recorded, viewed, and uploaded to a computer via a USB cable. 
On November 30, 2010, the FBI issued a warning in a private memo that the doll could be used to produce child pornography, although it stated publicly that there was "no reported evidence that the doll had been used in any way other than intended."[64][65] In March 2015, concerns were raised about a version of the doll called "Hello Barbie", which can hold conversations with a child using speech recognition technology. The doll transmits data back to a service called ToyTalk, which according to Forbes, has terms of service and a privacy policy that allow it to “share audio recordings with third party vendors who assist us with speech recognition,” and states that “recordings and photos may also be used for research and development purposes, such as to improve speech recognition technology and artificial intelligence algorithms and create better entertainment experiences.”[66] Role model Barbies In March 2018, in time for International Women's Day, Mattel unveiled the "Barbie Celebrates Role Models" campaign with a line of 17 dolls, informally known as "sheroes", from diverse backgrounds "to showcase examples of extraordinary women".[67][68] Mattel developed this collection in response to mothers concerned about their daughters having positive female role models.[67] Dolls in this collection include Frida Kahlo, Patty Jenkins, Chloe Kim, Nicola Adams, Ibtihaj Muhammad, Bindi Irwin, Amelia Earhart, Misty Copeland, Hélène Darroze, Katherine Johnson, Sara Gama, Martyna Wojciechowska, Gabby Douglas, Guan Xiaotong, Ava DuVernay, Yuan Yuan Tan, Iris Apfel, Ashley Graham and Leyla Piedayesh.[67] In 2020, the company announced a new release of "shero" dolls, including Paralympic champion Madison de Rozario.[69] Collecting Mattel estimates that there are well over 100,000 avid Barbie collectors. Ninety percent are women, at an average age of 40, purchasing more than twenty Barbie dolls each year. Forty-five percent of them spend upwards of $1,000 a year.

## Barbie

https://en.wikipedia.org/wiki/Barbie
...the unsold stock, making it sought after by collectors.[49] In May 1997, Mattel introduced Share a Smile Becky, a doll in a pink wheelchair. Kjersti Johnson, a 17-year-old high school student in Tacoma, Washington with cerebral palsy, pointed out that the doll would not fit into the elevator of Barbie's $100 Dream House. Mattel announced that it would redesign the house in the future to accommodate the doll.[50][51] In 2010, Barbie was also criticized for a children's book called Barbie: I Can Be A Computer Engineer, which portrayed Barbie as a game designer who was not technically sophisticated and needed boys' help to do game programming. The company then promptly responded to criticism of gender-role stereotypes by redesigning a "Computer Engineer Barbie" who was a game programmer rather than a designer.[52] Bad influence concerns In July 1992, Mattel released Teen Talk Barbie, which spoke a number of phrases including "Will we ever have enough clothes?", "I love shopping!", and "Wanna have a pizza party?" Each doll was programmed to say four out of 270 possible phrases, so that no two given dolls were likely to be the same (the number of possible combinations is 270!/(266!4!) = 216,546,345). One of these 270 phrases was "Math class is tough!". Although only about 1.5% of all the dolls sold said the phrase, it led to criticism from the American Association of University Women. In October 1992, Mattel announced that Teen Talk Barbie would no longer say the phrase, and offered a swap to anyone who owned a doll that did.[53] In 2002, Mattel introduced a line of pregnant Midge (and baby) dolls, but this Happy Family line was quickly pulled from the market due to complaints that she promoted teen pregnancy, though by that time, Barbie's friend Midge was supposed to be a married adult.[54] In September 2003, the Middle Eastern country of Saudi Arabia outlawed the sale of Barbie dolls and franchises, stating that they did not conform to the ideals of Islam. 
The Committee for the Promotion of Virtue and the Prevention of Vice warned, "Jewish Barbie dolls, with their revealing clothes and shameful postures, accessories and tools are a symbol of decadence to the perverted West. Let us beware of her dangers and be careful."[55] The 2003 Saudi ban was temporary.[56] In Muslim-majority nations, there is an alternative doll called Fulla, which was introduced in November 2003 and is equivalent to Barbie, but is designed specifically to represent traditional Islamic values. Fulla is not manufactured by the Mattel Corporation (although Mattel still licenses Fulla dolls and franchises for sale in certain markets), and (as of January 2021) the Barbie brand is still available in other Muslim-majority countries including Egypt and Indonesia.[57] In Iran, the Sara and Dara dolls, which were introduced in March 2002, are available as an alternative to Barbie, even though they have not been as successful.[58] In November 2014, Mattel received criticism over the book I Can Be a Computer Engineer, which depicted Barbie as being inept at computers and requiring that her two male friends complete all of the necessary tasks to restore two laptops after she accidentally infects her and her sister's laptop with a malware-laced USB flash drive.[59] Critics complained that the book was sexist, as other books in the I Can Be... series depicted Barbie as someone who was competent in those jobs and did not require outside assistance from others.[60] Mattel later removed the book from sale on Amazon in response to the criticism.[61] Safety concerns In March 2000, stories appeared in the media claiming that the hard vinyl used in vintage Barbie dolls could leak toxic chemicals, causing danger to children playing with them. The claim was described as an overreaction by Joseph Prohaska, a professor at the University of Minnesota Duluth. 
A modern Barbie doll has a body made from ABS plastic, while the head is made from soft PVC.[62][63] In July 2010, Mattel released "Barbie Video Girl", a Barbie doll with a pinhole video camera in its chest, enabling clips of up to 30 minutes to be recorded, viewed, and uploaded to a computer via a USB cable. On November 30, 2010, the FBI issued a warning in a private memo that the doll could be used to produce child pornography, although it stated publicly that there was "no reported evidence that the doll had been used in any way other than intended."[64][65] In March 2015, concerns were raised about a version of the doll called "Hello Barbie", which can hold conversations with a child using speech recognition technology. The doll transmits data back to a service called ToyTalk, which according to Forbes, has terms of service and a privacy policy that allow it to “share audio recordings with third party vendors who assist us with speech recognition,” and states that “recordings and photos may also be used for research and development purposes, such as to improve speech recognition technology and artificial intelligence algorithms and create better entertainment experiences.”[66] Role model Barbies In March 2018, in time for International Women's Day, Mattel unveiled the "Barbie Celebrates Role Models" campaign with a line of 17 dolls, informally known as "sheroes", from diverse backgrounds "to showcase examples of extraordinary women".[67][68] Mattel developed this collection in response to mothers concerned about their daughters having positive female role models.[67] Dolls in this collection include Frida Kahlo, Patty Jenkins, Chloe Kim, Nicola Adams, Ibtihaj Muhammad, Bindi Irwin, Amelia Earhart, Misty Copeland, Hélène Darroze, Katherine Johnson, Sara Gama, Martyna Wojciechowska, Gabby Douglas, Guan Xiaotong, Ava DuVernay, Yuan Yuan Tan, Iris Apfel, Ashley Graham and Leyla Piedayesh.[67] In 2020, the company announced a new release of "shero" dolls, including 
Paralympic champion Madison de Rozario.[69] Collecting Mattel estimates that there are well over 100,000 avid Barbie collectors. Ninety percent are women, at an average age of 40, purchasing more than twenty Barbie dolls each year. Forty-five percent of them spend upwards of $1,000 a year. Vintage dolls from the early years are the most valuable at auction, and while the original was sold for $3.00 in 1959, a mint boxed Barbie from 1959 sold for $3,552.50 on eBay in October 2004.[70] On September 26, 2006, a doll set a world record at auction of £9,000 sterling (US...
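The combination count in the excerpt above is easy to verify: choosing 4 phrases from a pool of 270, order ignored, is exactly the 270!/(266!4!) binomial coefficient quoted. A quick check in Python (standard library only):

```python
from math import comb, factorial

# Ways to choose 4 phrases out of 270, as quoted for Teen Talk Barbie.
n_phrases, n_chosen = 270, 4

by_comb = comb(n_phrases, n_chosen)
by_factorials = factorial(n_phrases) // (
    factorial(n_phrases - n_chosen) * factorial(n_chosen)
)

print(by_comb)                   # 216546345
print(by_comb == by_factorials)  # True
```

Both routes agree with the 216,546,345 figure in the article.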
$10). Marginal cost is not the cost of producing the "next" or "last" unit.[5] The cost of the last unit is the same as the cost of the first unit and every other unit. In the short run, increasing production requires using more of the variable input — conventionally assumed to be labor. Adding more labor to a fixed capital stock reduces the marginal product of labor because of the diminishing marginal returns. This reduction in productivity is not limited to the additional labor needed to produce the marginal unit – the productivity of every unit of labor is reduced. Thus the cost of producing the marginal unit of output has two components: the cost associated with producing the marginal unit and the increase in average costs for all units produced due to the "damage" to the entire productive process. The first component is the per-unit or average cost. The second component is the small increase in cost due to the law of diminishing marginal returns which increases the costs of all units sold. Marginal costs can also be expressed as the cost per unit of labor divided by the marginal product of labor.[6] Denoting variable cost as VC, the constant wage rate as w, and labor usage as L, we have $$MC = \frac{\Delta VC}{\Delta Q}, \qquad \Delta VC = w \Delta L, \qquad MC = \frac{w \Delta L}{\Delta Q} = \frac{w}{MPL}.$$ Here MPL is the ratio of increase in the quantity produced per unit increase in labour, i.e. $$\Delta Q / \Delta L$$, the marginal product of labor. 
The last equality holds because $$\frac{\Delta L}{\Delta Q}$$ is the change in quantity of labor that brings about a one-unit change in output.[7] Since the wage rate is assumed constant, marginal cost and marginal product of labor have an inverse relationship—if the marginal product of labor is decreasing (or increasing), then marginal cost is increasing (decreasing). Average variable cost likewise satisfies $$AVC = \frac{VC}{Q} = \frac{wL}{Q} = \frac{w}{Q/L} = \frac{w}{APL}.$$ Empirical data on marginal cost While neoclassical models broadly assume that marginal cost will increase as production increases, several empirical studies conducted throughout the 20th century have concluded that the marginal cost is either constant or falling for the vast majority of firms.[8] Most recently, former Federal Reserve chair Alan Blinder and colleagues conducted a survey of 200 executives of corporations with sales exceeding $10 million...

## Marginal cost

https://en.wikipedia.org/wiki/Marginal_cost
... $10). Marginal cost is not the cost of producing the "next" or "last" unit.[5] The cost of the last unit is the same as the cost of the first unit and every other unit. In the short run, increasing production requires using more of the variable input — conventionally assumed to be labor. Adding more labor to a fixed capital stock reduces the marginal product of labor because of the diminishing marginal returns. This reduction in productivity is not limited to the additional labor needed to produce the marginal unit – the productivity of every unit of labor is reduced. Thus the cost of producing the marginal unit of output has two components: the cost associated with producing the marginal unit and the increase in average costs for all units produced due to the "damage" to the entire productive process. The first component is the per-unit or average cost. The second component is the small increase in cost due to the law of diminishing marginal returns which increases the costs of all units sold. Marginal costs can also be expressed as the cost per unit of labor divided by the marginal product of labor.[6] Denoting variable cost as VC, the constant wage rate as w, and labor usage as L, we have $$MC = \frac{\Delta VC}{\Delta Q}, \qquad \Delta VC = w \Delta L, \qquad MC = \frac{w \Delta L}{\Delta Q} = \frac{w}{MPL}.$$ Here MPL is the ratio of increase in the quantity produced per unit increase in labour, i.e. $$\Delta Q / \Delta L$$, the marginal product of labor. 
The last equality holds because $$\frac{\Delta L}{\Delta Q}$$ is the change in quantity of labor that brings about a one-unit change in output.[7] Since the wage rate is assumed constant, marginal cost and marginal product of labor have an inverse relationship—if the marginal product of labor is decreasing (or increasing), then marginal cost is increasing (decreasing). Average variable cost likewise satisfies $$AVC = \frac{VC}{Q} = \frac{wL}{Q} = \frac{w}{Q/L} = \frac{w}{APL}.$$ Empirical data on marginal cost While neoclassical models broadly assume that marginal cost will increase as production increases, several empirical studies conducted throughout the 20th century have concluded that the marginal cost is either constant or falling for the vast majority of firms.[8] Most recently, former Federal Reserve chair Alan Blinder and colleagues conducted a survey of 200 executives of corporations with sales exceeding $10 million, in which they were asked, among other questions, about the structure of their marginal cost curves. Strikingly, just 11% of respondents answered that their marginal costs increased as production increased, while 48% answered that they were constant, and 41% answered that they were decr...
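The inverse relationship between MC and MPL in the excerpt can be illustrated numerically. A minimal sketch; the wage and the production schedule are made-up figures for illustration, not from the article:

```python
# Illustrative check of MC = w / MPL under diminishing marginal returns.
# The wage and the production schedule below are assumed numbers.
wage = 20.0                              # w: cost per unit of labor
labor = [0, 1, 2, 3, 4]                  # L
output = [0.0, 10.0, 18.0, 24.0, 28.0]   # Q(L): each extra worker adds less

for i in range(1, len(labor)):
    dL = labor[i] - labor[i - 1]
    dQ = output[i] - output[i - 1]
    mpl = dQ / dL                 # marginal product of labor, dQ/dL
    mc_direct = wage * dL / dQ    # MC = dVC/dQ with dVC = w * dL
    mc_via_mpl = wage / mpl       # MC = w / MPL
    assert abs(mc_direct - mc_via_mpl) < 1e-9
    print(f"L={labor[i]}: MPL={mpl:.2f}, MC={mc_direct:.2f}")
```

As MPL falls from 10 to 4, MC rises from 2.00 to 5.00, matching the inverse relationship described in the text.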
$20,000,000 in electricity over its lifetime, with cooling representing 35% to 45% of the data center's total cost of ownership. Calculations showed that in two years the cost of powering and cooling a server could be equal to the cost of purchasing the server hardware.[84] Research in 2018 showed that a substantial amount of energy could still be conserved by optimizing IT refresh rates and increasing server utilization.[85] In 2011 Facebook, Rackspace and others founded the Open Compute Project (OCP) to develop and publish open standards for greener data center computing technologies. As part of the project Facebook published the designs of its server, which it had built for its first dedicated data center in Prineville. Making servers taller left space for more effective heat sinks and enabled the use of fans that moved more air with less energy. By not buying commercial off-the-shelf servers, energy consumption due to unnecessary expansion slots on the motherboard and unneeded components, such as a graphics card, was also saved.[86] In 2016 Google joined the project and published the designs of its 48V DC shallow data center rack. This design had long been part of Google data centers. By eliminating the multiple transformers usually deployed in data centers, Google had achieved a 30% increase in energy efficiency.[87] In 2017 sales for data center hardware built to OCP designs topped $1.2 billion...

## Data center

https://en.wikipedia.org/wiki/Data_center
...in data centers were designed for more than 25 kW and the typical server was estimated to waste about 30% of the electricity it consumed. The energy demand for information storage systems was also rising. A high availability data center was estimated to have a 1 megawatt (MW) demand and consume $20,000,000 in electricity over its lifetime, with cooling representing 35% to 45% of the data center's total cost of ownership. Calculations showed that in two years the cost of powering and cooling a server could be equal to the cost of purchasing the server hardware.[84] Research in 2018 showed that a substantial amount of energy could still be conserved by optimizing IT refresh rates and increasing server utilization.[85] In 2011 Facebook, Rackspace and others founded the Open Compute Project (OCP) to develop and publish open standards for greener data center computing technologies. As part of the project Facebook published the designs of its server, which it had built for its first dedicated data center in Prineville. Making servers taller left space for more effective heat sinks and enabled the use of fans that moved more air with less energy. By not buying commercial off-the-shelf servers, energy consumption due to unnecessary expansion slots on the motherboard and unneeded components, such as a graphics card, was also saved.[86] In 2016 Google joined the project and published the designs of its 48V DC shallow data center rack. This design had long been part of Google data centers. 
By eliminating the multiple transformers usually deployed in data centers, Google had achieved a 30% increase in energy efficiency.[87] In 2017 sales for data center hardware built to OCP designs topped $1.2 billion and were expected to reach $6 billion by 2021.[86] Power and cooling analysis at CERN (2010) Power is the largest recurring cost to the user of a data center.[88] Cooling it at or below wastes money and energy.[88] Furthermore, overcooling equipment in environments with a high relative...
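The excerpt's claim that power and cooling can rival a server's purchase price within a couple of years follows directly from draw, tariff, and cooling overhead. A hedged sketch; the price, power draw, tariff, and overhead are all assumed figures, not from the article:

```python
# Break-even sketch: months until cumulative power + cooling spend
# equals the server purchase price. All inputs are hypothetical.
server_price = 2000.0     # USD, assumed purchase price
power_draw_kw = 0.5       # kW average draw, assumed
tariff = 0.12             # USD per kWh, assumed
cooling_overhead = 0.8    # cooling adds ~80% on top of IT power, assumed

hours_per_month = 730
monthly_cost = power_draw_kw * hours_per_month * tariff * (1 + cooling_overhead)
months = server_price / monthly_cost
print(f"monthly power+cooling: ${monthly_cost:.2f}, break-even: {months:.1f} months")
```

With these particular assumptions the crossover lands near two years; different tariffs or draws shift it proportionally.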
$200,000. However, most decision-makers are not actually risk-neutral and would not consider these equivalent choices.[13] Volatility In finance, volatility is the degree of variation of a trading price over time, usually measured by the standard deviation of logarithmic returns. Modern portfolio theory measures risk using the variance (or standard deviation) of asset prices. The risk is then: $$\text{R} = \sigma$$ Outcome frequencies Risks of discrete events such as accidents are often measured as outcome frequencies, or expected rates of specific loss events per unit time. When small, frequencies are numerically similar to probabilities, but have dimensions of [1/time] and can sum to more than 1. Typical outcomes expressed this way include:[44] Individual risk – the frequency of a given level of harm to an individual.[45] It often refers to the expected annual probability of death. Where risk criteria refer to the individual risk, the risk assessment must use this metric. Group (or societal risk) – the relationship between the frequency and the number of people suffering harm.[45] Frequencies of property damage or total loss. Frequencies of environmental damage such as oil spills. Relative risk In health, the relative risk is the ratio of the probability of an outcome in an exposed group to the probability of an outcome in an unexposed group. Psychology of risk Fear as intuitive risk assessment People may rely on their fear and hesitation to keep them out of the most profoundly unknown circumstances. Fear is a response to perceived danger. Risk could be said to be the way we collectively measure and share this "true fear"—a fusion of rational doubt, irrational fear, and a set of unquantified biases from our own experience. The field of behavioural finance focuses on human risk-aversion, asymmetric regret, and other ways that human financial behaviour varies from what analysts call "rational". 
Risk in that case is the degree of uncertainty associated with a return on an asset. Recognizing and respecting the irrational influences on human decision making may do much to reduce disasters caused by naive risk assessments that presume rationality but in fact merely fuse many shared biases. Fear, anxiety and risk According to one set of definitions, fear is a fleeting emotion ascribed to a particular object, while anxiety is a trait of fear (this is referring to "trait anxiety", as distinct from how the term "anxiety" is generally used) that lasts longer and is not attributed to a specific stimulus (these particular definitions are not used by all authors cited on this page).[46] Some studies show a link between anxious behaviour and risk (the chance that an outcome will have an unfavorable result).[47] Joseph Forgas introduced valence based research where emotions are grouped as either positive or negative (Lerner and Keltner, 2000). Positive emotions, such as happiness, are believed to have more optimistic risk assessments and negative emotions, such as anger, have pessimistic risk assessments. As an emotion with a negative valence, fear, and therefore anxiety, has long been associated with negative risk perceptions. Under the more recent appraisal tendency framework of Jennifer Lerner et al., which refutes Forgas' notion of valence and promotes the idea that specific emotions have distinctive influences on judgments, fear is still related to pessimistic expectations.[48] Psychologists have demonstrated that increases in anxiety and increases in risk perception are related and people who are habituated to anxiety experience this awareness of risk more intensely than normal individuals.[49] In decision-making, anxiety promotes the use of biases and quick thinking to evaluate risk. This is referred to as affect-as-information according to Clore, 1983. 
However, the accuracy of these risk perceptions when making choices is not known.[50] Consequences of anxiety Experimental studies show that brief surges in anxiety are correlated with surges in general risk perception.[50] Anxiety exists when the presence of threat is perceived (Maner and Schmidt, 2006).[49] As risk perception increases, it stays related to the particular source impacting the mood change as opposed to spreading to unrelated risk factors.[50] This increased awareness of a threat is significantly more emphasised in people who are conditioned to anxiety.[51] For example, anxious individuals who are predisposed to generating reasons for negative results tend to exhibit pessimism.[51] Also, findings suggest that the perception of a lack of control and a lower inclination to participate in risky decision-making (across various behavioural circumstances) is associated with individuals experiencing relatively high levels of trait anxiety.[49] In the previous instance, there is supporting clinical research that links emotional evaluation (of control), the anxiety that is felt and the option of risk avoidance.[49] There are various views presented that anxious/fearful emotions cause people to access involuntary responses and judgments when making decisions that involve risk. Joshua A. Hemmerich et al. probe deeper into anxiety and its impact on choices by exploring "risk-as-feelings", which are quick, automatic, and natural reactions to danger that are based on emotions. This notion is supported by an experiment that engages physicians in a simulated perilous surgical procedure. It was demonstrated that a measurable amount of the participants' anxiety about patient outcomes was related to previous (experimentally created) regret and worry and ultimately caused the physicians to be led by their feelings over any information or guidelines provided during the mock surgery. 
Additionally, their emotional levels, adjusted along with the simulated patient status, suggest that anxiety level and the respective decision made are correlated with the type of bad outcome that was experienced in the earlier part of the experiment.[52] Similarly, another view of anxiety and decision-making is dispositional anxiety where emotional states, or moods, are cognitive and provide information about future pitfalls and rewards (Maner and Schmidt, 2006). When experiencing anxiety, individuals draw from personal judgments referred to as pessimistic outcome appraisals. These emotions promote biases for risk avoidance and promote risk tolerance in decision-making.[51] Dread risk It is common for people to dread some risks but not others: They tend to be very afraid of epidemic diseases, nuclear power plant failures, and plane accidents but are relatively unconcerned about some highly frequent and deadly events, such as traffic crashes, household accidents, and medical errors. One key distinction of dreadful risks seems to be their potential for catastrophic consequences,[53] threatening to kill a large number of people within a short period of time.[54] For example, immediately after the 11 September attacks, many Americans were afraid to fly and took their car instead, a decision that led to a significant increase in the number of fatal crashes in the time period following the 9/11 event compared with the same time period before the attacks.[55][56] Different hypotheses have been proposed to explain why people fear dread risks. First, the psychometric paradigm[53] suggests that high lack of control, high catastrophic potential, and severe consequences account for the increased risk perception and anxiety associated with dread risks. 
Second, because people estimate the frequency of a risk by recalling instances of its occurrence from their social circle or the media, they may overvalue relatively rare but dramatic risks because of their overpresence and undervalue frequent, less dramatic risks.[56] Third, according to the preparedness hypothesis, people are prone to fear events that have been particularly threatening to survival in human evolutionary history.[57] Given that in most of human evolutionary history people lived in relatively small groups, rarely exceeding 100 people,[58] a dread risk, which kills many people at once, could potentially wipe out one's whole group. Indeed, research found[59] that people's fear peaks for risks killing around 100 people but does not increase if larger groups are killed. Fourth, fearing dread risks can be an ecologically rational strategy.[60] Besides killing a large number of people at a single point in time, dread risks reduce the number of children and young adults who would have potentially produced offspring. Accordingly, people are more concerned about risks killing younger, and hence more fertile, groups.[61] Anxiety and judgmental accuracy The relationship between higher levels of risk perception and "judgmental accuracy" in anxious individuals remains unclear (Joseph I. Constans, 2001). There is a chance that "judgmental accuracy" is correlated with heightened anxiety. 
Constans conducted a study to examine how worry propensity (and current mood and trait anxiety) might influence college students' estimation of their performance on an upcoming exam, and the study found that worry propensity predicted subjective risk bias (errors in their risk assessments), even after variance attributable to current mood and trait anxiety had been removed.[50] Another experiment suggests that trait anxiety is associated with pessimistic risk appraisals (heightened perceptions of the probability and degree of suffering associated with a negative experience), while controlling for depression.[49] Human factors One of the growing areas of focus in risk management is the field of human factors where behavioural and organizational psychology underpin our understanding of risk-based decision making. This field considers questions such as "how do we make risk-based decisions?", "why are we irrationally more scared of sharks and terrorists than we are of motor vehicles and medications?" In decision theory, regret (and anticipation of regret) can play a significant part in decision-making, distinct from risk aversion[62][63] (preferring the status quo in case one becomes worse off). Framing[64] is a fundamental problem with all forms of risk assessment. In particular, because of bounded rationality (our brains get overloaded, so we take mental shortcuts), the risk of extreme events is discounted because the probability is too low to evaluate intuitively. As an example, one of the leading causes of death is road accidents caused by drunk driving – partly because any given driver frames the problem by largely or totally ignoring the risk of a serious or fatal accident. For instance, an extremely disturbing event (an attack by hijacking, or moral hazards) may be ignored in analysis despite the fact it has occurred and has a nonzero probability. 
Or, an event that everyone agrees is inevitable may be ruled out of analysis due to greed or an unwillingness to admit that it is believed to be inevitable. These human tendencies for error and wishful thinking often affect even the most rigorous applications of the scientific method and are a major concern of the philosophy of science. All decision-making under uncertainty must consider cognitive bias, cultural bias, and notational bias: No group of people assessing risk is immune to "groupthink": acceptance of obviously wrong answers simply because it is socially painful to disagree, where there are conflicts of interest. Framing involves other information that affects the outcome of a risky decision. The right prefrontal cortex has been shown to take a more global perspective[65] while greater left prefrontal activity relates to local or focal processing.[66] From the Theory of Leaky Modules[67] McElroy and Seta proposed that they could predictably alter the framing effect by the selective manipulation of regional prefrontal activity with finger tapping or monaural listening.[68] The result was as expected. Rightward tapping or listening had the effect of narrowing attention such that the frame was ignored. This is a practical way of manipulating regional cortical activation to affect risky decisions, especially because directed tapping or listening is easily done. Psychology of risk taking A growing area of research has been to examine various psychological aspects of risk taking. Researchers typically run randomised experiments with a treatment and control group to ascertain the effect of different psychological factors that may be associated with risk taking. Thus, positive and negative feedback about past risk taking can affect future risk taking. 
In an experiment, people who were led to believe they were very competent at decision making saw more opportunities in a risky choice and took more risks, while those led to believe they were not very competent saw more threats and took fewer risks.[69] Other considerations Risk and uncertainty In his seminal work Risk, Uncertainty, and Profit, Frank Knight (1921) established the distinction between risk and uncertainty. Thus, Knightian uncertainty is immeasurable, not possible to calculate, while in the Knightian sense risk is measurable. Another distinction between risk and uncertainty is proposed by Douglas Hubbard:[70][13] Uncertainty: The lack of complete certainty, that is, the existence of more than one possibility. The "true" outcome/state/result/value is not known. Measurement of uncertainty: A set of probabilities assigned to a set of possibilities. Example: "There is a 60% chance this market will double in five years" Risk: A state of uncertainty where some of the possibilities involve a loss, catastrophe, or other undesirable outcome. Measurement of risk: A set of possibilities each with quantified probabilities and quantified losses. Example: "There is a 40% chance the proposed oil well will be dry with a loss of $...

## Risk

https://en.wikipedia.org/wiki/Risk
... $200,000. However, most decision-makers are not actually risk-neutral and would not consider these equivalent choices.[13] Volatility In finance, volatility is the degree of variation of a trading price over time, usually measured by the standard deviation of logarithmic returns. Modern portfolio theory measures risk using the variance (or standard deviation) of asset prices. The risk is then: $$\text{R} = \sigma$$ Outcome frequencies Risks of discrete events such as accidents are often measured as outcome frequencies, or expected rates of specific loss events per unit time. When small, frequencies are numerically similar to probabilities, but have dimensions of [1/time] and can sum to more than 1. Typical outcomes expressed this way include:[44] Individual risk – the frequency of a given level of harm to an individual.[45] It often refers to the expected annual probability of death. Where risk criteria refer to the individual risk, the risk assessment must use this metric. Group (or societal risk) – the relationship between the frequency and the number of people suffering harm.[45] Frequencies of property damage or total loss. Frequencies of environmental damage such as oil spills. Relative risk In health, the relative risk is the ratio of the probability of an outcome in an exposed group to the probability of an outcome in an unexposed group. Psychology of risk Fear as intuitive risk assessment People may rely on their fear and hesitation to keep them out of the most profoundly unknown circumstances. Fear is a response to perceived danger. Risk could be said to be the way we collectively measure and share this "true fear"—a fusion of rational doubt, irrational fear, and a set of unquantified biases from our own experience. The field of behavioural finance focuses on human risk-aversion, asymmetric regret, and other ways that human financial behaviour varies from what analysts call "rational". 
Risk in that case is the degree of uncertainty associated with a return on an asset. Recognizing and respecting the irrational influences on human decision making may do much to reduce disasters caused by naive risk assessments that presume rationality but in fact merely fuse many shared biases.

### Fear, anxiety and risk

According to one set of definitions, fear is a fleeting emotion ascribed to a particular object, while anxiety is a trait of fear (referring to "trait anxiety", as distinct from how the term "anxiety" is generally used) that lasts longer and is not attributed to a specific stimulus (these particular definitions are not used by all authors cited on this page).[46] Some studies show a link between anxious behaviour and risk (the chance that an outcome will have an unfavorable result).[47]

Joseph Forgas introduced valence-based research, in which emotions are grouped as either positive or negative (Lerner and Keltner, 2000). Positive emotions, such as happiness, are believed to produce more optimistic risk assessments, while negative emotions, such as anger, produce pessimistic ones. As an emotion with a negative valence, fear, and therefore anxiety, has long been associated with negative risk perceptions. Under the more recent appraisal-tendency framework of Jennifer Lerner et al., which rejects Forgas's notion of valence and holds that specific emotions have distinctive influences on judgments, fear is still related to pessimistic expectations.[48]

Psychologists have demonstrated that increases in anxiety and increases in risk perception are related, and that people who are habituated to anxiety experience this awareness of risk more intensely than other individuals.[49] In decision-making, anxiety promotes the use of biases and quick thinking to evaluate risk, an effect referred to as affect-as-information (Clore, 1983).
However, the accuracy of these risk perceptions when making choices is not known.[50]

### Consequences of anxiety

Experimental studies show that brief surges in anxiety are correlated with surges in general risk perception.[50] Anxiety exists when the presence of a threat is perceived (Maner and Schmidt, 2006).[49] As risk perception increases, it stays tied to the particular source of the mood change rather than spreading to unrelated risk factors.[50] This heightened awareness of a threat is significantly more pronounced in people who are conditioned to anxiety.[51] For example, anxious individuals who are predisposed to generating reasons for negative results tend to exhibit pessimism.[51] Findings also suggest that the perception of a lack of control, and a lower inclination to participate in risky decision-making (across various behavioural circumstances), is associated with individuals experiencing relatively high levels of trait anxiety.[49] In the latter case, there is supporting clinical research linking emotional appraisal (of control), the anxiety that is felt, and the option of risk avoidance.[49]

There are various views that anxious or fearful emotions cause people to fall back on involuntary responses and judgments when making decisions that involve risk. Joshua A. Hemmerich et al. probe deeper into anxiety and its impact on choices by exploring "risk-as-feelings": quick, automatic, and natural reactions to danger that are based on emotions. This notion is supported by an experiment that engaged physicians in a simulated perilous surgical procedure. A measurable amount of the participants' anxiety about patient outcomes was related to previous (experimentally created) regret and worry, and ultimately led the physicians to be guided by their feelings over any information or guidelines provided during the mock surgery.
Additionally, their emotional levels, which tracked the simulated patient's status, suggest that anxiety level and the respective decision made are correlated with the type of bad outcome experienced in the earlier part of the experiment.[52] A related view of anxiety and decision-making is dispositional anxiety, in which emotional states, or moods, are cognitive and provide information about future pitfalls and rewards (Maner and Schmidt, 2006). When experiencing anxiety, individuals draw on personal judgments referred to as pessimistic outcome appraisals. These emotions promote biases for risk avoidance and promote risk tolerance in decision-making.[51]

### Dread risk

It is common for people to dread some risks but not others: they tend to be very afraid of epidemic diseases, nuclear power plant failures, and plane accidents, but are relatively unconcerned about some highly frequent and deadly events, such as traffic crashes, household accidents, and medical errors. One key distinction of dreadful risks seems to be their potential for catastrophic consequences,[53] threatening to kill a large number of people within a short period of time.[54] For example, immediately after the 11 September attacks, many Americans were afraid to fly and took their car instead, a decision that led to a significant increase in the number of fatal crashes in the period following the attacks compared with the same period before them.[55][56]

Different hypotheses have been proposed to explain why people fear dread risks. First, the psychometric paradigm[53] suggests that high lack of control, high catastrophic potential, and severe consequences account for the increased risk perception and anxiety associated with dread risks.
Second, because people estimate the frequency of a risk by recalling instances of its occurrence from their social circle or the media, they may overvalue relatively rare but dramatic risks because of their overrepresentation, and undervalue frequent, less dramatic risks.[56] Third, according to the preparedness hypothesis, people are prone to fear events that have been particularly threatening to survival in human evolutionary history.[57] Given that for most of human evolutionary history people lived in relatively small groups, rarely exceeding 100 people,[58] a dread risk, which kills many people at once, could potentially wipe out one's whole group. Indeed, research found[59] that people's fear peaks for risks killing around 100 people but does not increase if larger groups are killed. Fourth, fearing dread risks can be an ecologically rational strategy.[60] Besides killing a large number of people at a single point in time, dread risks reduce the number of children and young adults who would have potentially produced offspring. Accordingly, people are more concerned about risks that kill younger, and hence more fertile, groups.[61]

### Anxiety and judgmental accuracy

The relationship between higher levels of risk perception and "judgmental accuracy" in anxious individuals remains unclear (Joseph I. Constans, 2001). There is a chance that "judgmental accuracy" is correlated with heightened anxiety.
Constans conducted a study to examine how worry propensity (and current mood and trait anxiety) might influence college students' estimates of their performance on an upcoming exam, and found that worry propensity predicted subjective risk bias (errors in their risk assessments) even after variance attributable to current mood and trait anxiety had been removed.[50] Another experiment suggests that trait anxiety is associated with pessimistic risk appraisals (heightened perceptions of the probability and degree of suffering associated with a negative experience), while controlling for depression.[49]

### Human factors

One of the growing areas of focus in risk management is the field of human factors, where behavioural and organizational psychology underpin our understanding of risk-based decision making. This field considers questions such as "how do we make risk-based decisions?" and "why are we irrationally more scared of sharks and terrorists than we are of motor vehicles and medications?"

In decision theory, regret (and anticipation of regret) can play a significant part in decision-making, distinct from risk aversion[62][63] (preferring the status quo in case one becomes worse off). Framing[64] is a fundamental problem with all forms of risk assessment. In particular, because of bounded rationality (our brains get overloaded, so we take mental shortcuts), the risk of extreme events is discounted because the probability is too low to evaluate intuitively. As an example, one of the leading causes of death is road accidents caused by drunk driving, partly because any given driver frames the problem by largely or totally ignoring the risk of a serious or fatal accident. For instance, an extremely disturbing event (an attack by hijacking, or moral hazards) may be ignored in analysis despite the fact that it has occurred and has a nonzero probability.
Or, an event that everyone agrees is inevitable may be ruled out of analysis due to greed or an unwillingness to admit that it is believed to be inevitable. These human tendencies toward error and wishful thinking often affect even the most rigorous applications of the scientific method and are a major concern of the philosophy of science. All decision-making under uncertainty must consider cognitive bias, cultural bias, and notational bias: no group of people assessing risk is immune to "groupthink", the acceptance of obviously wrong answers simply because it is socially painful to disagree or because there are conflicts of interest.

Framing involves other information that affects the outcome of a risky decision. The right prefrontal cortex has been shown to take a more global perspective,[65] while greater left prefrontal activity relates to local or focal processing.[66] From the theory of leaky modules,[67] McElroy and Seta proposed that they could predictably alter the framing effect by selectively manipulating regional prefrontal activity with finger tapping or monaural listening.[68] The result was as expected: rightward tapping or listening narrowed attention such that the frame was ignored. This is a practical way of manipulating regional cortical activation to affect risky decisions, especially because directed tapping or listening is easily done.

### Psychology of risk taking

A growing area of research examines the various psychological aspects of risk taking. Researchers typically run randomised experiments with a treatment and control group to ascertain the effect of different psychological factors that may be associated with risk taking. For example, positive and negative feedback about past risk taking can affect future risk taking.
In an experiment, people who were led to believe they were very competent at decision making saw more opportunities in a risky choice and took more risks, while those led to believe they were not very competent saw more threats and took fewer risks.[69]

### Other considerations

### Risk and uncertainty

In his seminal work Risk, Uncertainty, and Profit, Frank Knight (1921) established the distinction between risk and uncertainty: Knightian uncertainty is immeasurable and cannot be calculated, while Knightian risk is measurable. Another distinction between risk and uncertainty is proposed by Douglas Hubbard:[70][13]

- Uncertainty: The lack of complete certainty, that is, the existence of more than one possibility. The "true" outcome/state/result/value is not known.
- Measurement of uncertainty: A set of probabilities assigned to a set of possibilities. Example: "There is a 60% chance this market will double in five years."
- Risk: A state of uncertainty where some of the possibilities involve a loss, catastrophe, or other undesirable outcome.
- Measurement of risk: A set of possibilities, each with quantified probabilities and quantified losses. Example: "There is a 40% chance the proposed oil well will be dry, with a loss of $12 million in exploratory drilling costs."

In this sense, one may have uncertainty without risk but not risk without uncertainty. We can be uncertain about the winner of a contest, but unless we have some personal stake in it, we have no risk. If we bet money on the outcome of the contest, then we...
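Hubbard's measurement of risk, a set of possibilities each with quantified probabilities and quantified losses, reduces to a simple expected-loss calculation. The sketch below uses the oil-well figures from the example above; treating the producing-well branch as having zero loss is an assumption made here for illustration:

```python
def expected_loss(possibilities):
    """Expected loss over a set of (probability, loss) pairs.

    Each pair quantifies one possible outcome, following Hubbard's
    notion of risk as quantified probabilities with quantified losses.
    """
    return sum(p * loss for p, loss in possibilities)

# 40% chance the well is dry, losing $12M in exploratory drilling costs;
# the remaining 60% (a producing well) is assumed lossless in this sketch.
well_risk = [(0.40, 12_000_000), (0.60, 0)]
expected_loss(well_risk)  # about $4.8M of expected loss
```

The same representation also separates uncertainty from risk: a set of possibilities where every loss is zero is pure uncertainty with no risk, consistent with the contest example in the text.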