
When was AUC first attested?


Ancient Romans generally identified years by the names of their consuls. When was AUC, ab urbe condita, or any of its various abbreviations first used? It was very popular in the Renaissance, but it may have been used in classical times (though I do not know of any cases).

Smart (v.)

Old English smeortan "be painful," from Proto-Germanic *smarta- (source also of Middle Dutch smerten, Dutch smarten, Old High German smerzan, German schmerzen "to pain," originally "to bite"), from PIE *smerd- "pain," which is perhaps an extension of the root *mer- "to rub away; to harm." Related: Smarted; smarting.

late Old English smeart "painful, severe, stinging; causing a sharp pain," related to smeortan (see smart (v.)). Meaning "executed with force and vigor" is from c. 1300. Meaning "quick, active, clever" is attested from c. 1300, from the notion of "cutting" wit, words, etc., or else "keen in bargaining." Meaning "trim in attire" first attested 1718, "ascending from the kitchen to the drawing-room c. 1880" [Weekley]. For sense evolution, compare sharp (adj.).

In reference to devices, the sense of "behaving as though guided by intelligence" (as in smart bomb ) first attested 1972. Smarts "good sense, intelligence," is first recorded 1968 (Middle English had ingeny "intellectual capacity, cleverness" (early 15c.)). Smart cookie is from 1948.

"sharp pain," c. 1200, from smart (adj.). Cognate with Middle Dutch smerte, Dutch smart, Old High German smerzo, German Schmerz "pain."

Nigger (n.)

1786, earlier neger (1568, Scottish and northern England dialect), negar, negur , from French nègre , from Spanish negro (see Negro). From the earliest usage it was "the term that carries with it all the obloquy and contempt and rejection which whites have inflicted on blacks" [cited in Gowers, 1965, probably Harold R. Isaacs]. But as black African inferiority was at one time a near universal assumption in English-speaking lands, the word in some cases could be used without deliberate insult. More sympathetic writers late 18c. and early 19c. seem to have used black (n.) and, after the American Civil War, colored person .

It was also applied by English colonists to the dark-skinned native peoples in India, Australia, Polynesia.

The reclamation of the word as a neutral or positive term in black culture (not universally regarded as a worthwhile enterprise), often with a suggestion of "soul" or "style," is attested first in the U.S. South, later (1968) in the Northern, urban-based Black Power movement. The variant nigga , attested from 1827 (as niggah from 1835), is found usually in situations where blacks use the word. Also compare nigra .

Used in combinations (such as nigger-brown) since the 1840s for various dark brown or black hues or objects; euphemistic substitutions (such as Zulu) began to appear in these senses c. 1917. Brazil nuts were called nigger toes by 1896. Nigger stick "prison guard's baton" is attested by 1971. To work like a nigger "work very hard" is by 1836.

The Anno Domini year numbering was developed by a monk named Dionysius Exiguus in Rome in 525, as an outcome of his work on calculating the date of Easter. In his Easter table the year AD 532 was equated with regnal year 248 of Emperor Diocletian. The table counted years from the presumed birth of Christ rather than from the accession of the emperor Diocletian on November 20, 284, or as stated by Dionysius: "sed magis elegimus ab incarnatione Domini nostri Jesu Christi annorum tempora praenotare." [1] It is assumed Dionysius Exiguus intended either 1 AD or 1 BC to be the year of Christ's birth (a "year zero" does not exist in this calendar). It was later calculated (from the historical record of the succession of Roman consuls) that the year 1 AD corresponds to the Roman year DCCLIV ab urbe condita, based on Varro's epoch. This, however, meant that the year did not correspond with the lifetimes of historical figures reputed to be alive at, or otherwise mentioned in connection with, the Christian incarnation, e.g. Herod the Great or Quirinius. [2]

1 AUC = 753 BC
2 AUC = 752 BC
3 AUC = 751 BC
...
749 AUC = 5 BC
750 AUC = 4 BC (death of Herod the Great)
751 AUC = 3 BC
752 AUC = 2 BC
753 AUC = 1 BC
754 AUC = AD 1
755 AUC = AD 2
759 AUC = AD 6 (Quirinius becomes governor of Syria)
...
2753 AUC = AD 2000
2765 AUC = AD 2012
2773 AUC = AD 2020
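The concordance above is simple arithmetic once the missing year zero is accounted for: 753 AUC = 1 BC and 754 AUC = AD 1, with no year in between. A small Python sketch (the function name is mine, for illustration):

```python
def auc_to_western(auc_year):
    """Convert a year ab urbe condita (Varronian epoch) to BC/AD.

    There is no year zero: 753 AUC = 1 BC and 754 AUC = AD 1.
    """
    if auc_year >= 754:
        return f"{auc_year - 753} AD"
    return f"{754 - auc_year} BC"

print(auc_to_western(750))   # 4 BC (death of Herod the Great)
print(auc_to_western(754))   # 1 AD
print(auc_to_western(2773))  # 2020 AD
```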

Outliers to the AUC

The outcome of this program will be to analyze the ordering practices of the physicians and determine any outliers. PAMA calls for identification on an annual basis of no more than five percent of the total number of ordering physicians who are outliers. The use of two years of data is required for this analysis. Data collected during the 2020 education and testing period will not be used when identifying outliers.

Outliers will be determined based on low adherence to applicable AUC or by comparison to other ordering physicians. Physicians who are found to be outliers will be required to complete prior authorizations for advanced diagnostic imaging services.

The following priority clinical areas will be the focus of the analysis of outliers:

The ABCs of Pharmacokinetics

Pharmacokinetics (PK) is talked about a lot in the HIV community. PK is the study of what the human body does to a drug in order to get it out of the body. The main ways the human body handles drugs are listed below. These are all a part of PK.

Step 1. Drug absorption: This is how the drug enters the blood -- usually from tablets or capsules in the stomach and intestines. For some drugs, the amount of acid in the stomach, or the amount of food in the stomach, really changes the amount of drug that is absorbed. This is the reason that some drugs have "food requirements", or why some drugs have warnings not to take antacids along with the drug. (see Figure 1: Drug Metabolism Pathways):

Step 2. Drug distribution: This is how the drug travels in the bloodstream and how it goes into and comes out of other areas of the body. Did you know that some areas of the body, like the brain and reproductive organs, are specially protected from chemicals (including drugs)? It is hard to measure drug levels in the brain and reproductive organs in people.

One way that drug distribution is studied in people is by finding out what percentage of the drug in the blood is stuck to proteins (called protein binding). This is important because only drug that is free of proteins can travel in and out of other areas of the body to be effective. Protein binding is often studied when a drug is being developed by a drug company. However, protein binding is not routinely studied after that because knowing the total blood concentration (both protein-bound plus protein-free) is generally good enough.

Step 3. Drug metabolism: This is how the body chemically changes a drug -- usually in the intestines and liver. Metabolism involves breaking a drug down or adding a chemical that makes it easier to pass it into urine or stool. A lot of drug-drug interactions happen because one drug interferes with the metabolism of another drug (called inhibition). Inhibition causes higher drug levels. On the other hand, a drug can also speed up the metabolism of another drug (called induction). Induction causes lower drug levels.

The CYP-450 (pronounced "sip") enzyme system is a well-known group of human enzymes that metabolize drugs and chemicals in the body. CYP-450 enzymes are mostly in the intestines and liver.

The CYP-450 enzymes are broken into three families (CYP1, CYP2 and CYP3) (see Figure 2: Antiretrovirals and CYP-450 Isoenzymes). When doctors and pharmacists talk about the CYP-450 system, they often just refer to the system as CYP and drop the 450 part. Within the CYP-450 system, though, there are different enzyme families. To distinguish one family from another, a letter and number are added to CYP (again, dropping the 450 numbering). Some examples of this are CYP1A2, CYP2D6, CYP3A4, etc. (Note how the 450 is dropped, but the CYP remains.)

Pyramid depicts the various CYP-450 enzymes in the body and the drug interactions with antiretrovirals. CYP3A4, shown at the top of the pyramid and the largest single part of the pyramid, is very important. Arrows point to the various antiretrovirals and the general effect(s) that medicine has on that enzyme. Note that a drug can be listed as both an inducer and an inhibitor, and with multiple enzymes.

Each CYP has a different ability to metabolize a given chemical or drug. For example, CYP3A4 is probably the most important drug metabolizing enzyme because it metabolizes the most drugs, including protease inhibitors.

Norvir strongly inhibits CYP3A4 and causes most of the other protease inhibitors to build up in the blood. This is called Norvir boosting. For the protease inhibitors that are boosted by Norvir, the higher blood levels may help the "boosted" drug work better. But, for other drugs that are metabolized by CYP3A4, like cholesterol drugs or erectile dysfunction drugs, Norvir and protease inhibitors may cause undesirable increases in blood concentrations (see Protease Inhibitor article).

Step 4. Drug elimination: This is how the body gets the drug out -- usually by passing the drug into the urine (via the kidneys) or stool (via the liver). Sometimes people have some kidney or liver illness. In these people, the blood level of some drugs may build to very high levels if the drug dose is not reduced (see Figure 1: Drug Metabolism Pathways).

PK Definitions

There are certain terms and tests that researchers or doctors use when they study PK. The following is a summary of these PK measurements and what they mean. Please refer to Figures 3 and 4 for a picture of what all these PK measurements represent.

Above are blood levels (Y-axis) of a drug over time (X-axis) after a patient takes a single dose. In this representation, the patient took the dose at time 0 and would be due for another dose at time 12 (hours). Since the time 0 level is about equal to the time 12 level, the patient is at steady state. For AUC measurements, blood levels are usually collected every hour or so. Figure No. 4 below is another way of looking at these same concepts.

AUC (area-under-the-curve): This is the overall amount of drug in the bloodstream after a dose. AUC studies are often used when researchers are looking for drug-drug or drug-food interactions. The way to get an AUC involves collecting many blood samples (usually every one or two hours) right after a person takes a dose up until the next dose is due. In each blood sample, the concentration of the drug is measured with a machine (discussed later). Then all the drug concentrations are put onto a graph based on the time after the dose that they were collected. A curve is made by connecting the points on the graph. The AUC for that drug is then calculated as the area under this drug concentration curve. An AUC study contains a lot of information about PK. It is probably the best way to understand how people handle a drug (PK).
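The "area under this drug concentration curve" is usually computed with the trapezoidal rule: each pair of adjacent blood samples forms a trapezoid, and the trapezoid areas are summed. A minimal sketch, with made-up sample values over a 12-hour dosing interval:

```python
def auc_trapezoidal(times, concentrations):
    """Area under the concentration-time curve by the trapezoidal rule.

    times: sampling times (hours), in ascending order
    concentrations: measured drug concentrations (e.g. mg/L) at those times
    Returns AUC in concentration x time units (e.g. mg*h/L).
    """
    auc = 0.0
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        auc += dt * (concentrations[i] + concentrations[i - 1]) / 2
    return auc

# Hypothetical samples drawn every 2 hours after a dose
times = [0, 2, 4, 6, 8, 10, 12]
conc  = [1.0, 4.0, 3.0, 2.2, 1.6, 1.2, 1.0]
print(auc_trapezoidal(times, conc))  # ~26.0 (mg*h/L)
```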

Cmax (maximum concentration): This is the highest concentration of drug in the blood that is measured after a dose. Cmax usually happens within a few hours after the dose is taken. The time that Cmax happens is referred to as Tmax. For some antiretroviral drugs, a high Cmax is thought to increase the risk of side effects from the drug.

Cmin or trough (pronounced "troff") (minimum concentration): This is the lowest concentration of the drug in the blood that is measured after a dose. It happens right before a patient takes the next usual dose. It is not known for certain, but many people in the HIV community believe that keeping the trough concentration (Cmin) above a certain level is especially important for anti-HIV activity.

Half-life (t ½): This is the amount of time it takes for the drug concentration in the blood to decline by half. The half-life is among the most important PK measurements for how often a drug has to be dosed (once-a-day or twice-a-day, etc).

Steady-state: This means that a person has been on a drug for enough time (usually one to two weeks) so that the drug concentration is not building up in the bloodstream anymore. The time it takes to get to steady-state depends on the half-life of the drug. A drug gets to steady state in about five half-lives.

As an illustration, before a patient reaches steady-state, each additional dose may be building the drug up in the body so each dose would be giving a higher Cmax, Cmin, and AUC. But, at steady-state, every dose would give the same Cmax, Cmin, and AUC in the patient because it is not building up any more.
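The approach to steady state can be illustrated numerically. The numbers below are hypothetical: a drug whose half-life equals its 12-hour dosing interval, with each dose adding 100 arbitrary units at its peak.

```python
# Sketch: accumulation of trough levels under repeated dosing with
# first-order elimination (hypothetical numbers).
t_half = 12.0     # half-life, hours
tau = 12.0        # dosing interval, hours
dose_peak = 100.0 # amount each dose adds, arbitrary units

fraction_left = 0.5 ** (tau / t_half)  # fraction surviving one interval

level = 0.0
for dose in range(1, 11):
    level = (level + dose_peak) * fraction_left  # trough after this dose
    print(f"dose {dose:2d}: trough = {level:.1f}")
# The trough climbs quickly at first, then flattens: ~97% of the plateau
# after 5 doses (= 5 half-lives here), matching the rule of thumb above.
```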

Adherence: Remarkably, antiretroviral regimens lose effectiveness even with a small drop from perfect (or near-perfect) adherence. For example, going from 95-100% adherence down to 90-95% adherence with protease inhibitors resulted in a drop in effectiveness (viral load below 400) from 81% to 64%. It seems that the usual drug levels are not much higher than what's needed for sustained efficacy. Additionally, the half-lives of the agents must have been relatively short, such that drug exposure fell below the level associated with a high probability of efficacy after a missed dose. Obviously, taking as close to 100% of antiretroviral doses as possible is critically important.

Once-a-day dosing: Once-daily combination antiretroviral therapy is a newer concept targeted at improving adherence. Several once-daily regimens are now available in which all drugs have similar dietary requirements, so that the whole regimen can be taken at the same time (see Figure 7: Options for Once-daily Dosing). It should be noted that only approved once-daily combinations should be used at this time (such as Truvada plus Sustiva as initial therapy). Some other antiretrovirals are currently approved for twice-a-day dosing, but they are being studied as once-a-day drugs. These "investigational" regimens should only be used in very controlled settings (like in a study). This is because it is not yet known whether "investigational" drugs provide the right amount of drug exposure for effective and safe once-daily dosing (especially if a dose is missed). Which is better -- once-a-day or twice-a-day dosing? The conservative answer is: both. In studies done to date comparing once- to twice-a-day dosing, the two come out equal in the end.

Pharmacodynamics (PD): PD is just a fancy term for drug efficacy and toxicity. PD refers to what the drugs do to the human body. For example, HIV drugs cause HIV viral load to decline and CD4 cells to increase. Also, drugs sometimes cause certain side effects and toxicity in the human body.

What's PK Got to Do With It?

PK is studied a lot in HIV and it is important for many reasons.

First of all, the PK of many HIV drugs is really changed by certain things. For example, the blood levels of HIV drugs can be increased or lowered by not following the food requirements with dosing, taking antacids with the drugs, or taking certain other drugs or herbals that cause big inhibition or induction interactions (see metabolism above). It is important to find the dose requirements out so that patients know how best to take the drugs.

Secondly, every person who takes HIV drugs is a bit different in the way their body handles these drugs (absorption, distribution, metabolism, and/or elimination). This means that a patient can have high or low blood levels after taking the same dose just because of the way they handle the drug.

Finally, all of this matters because the levels of drugs in your body affect how well the drug works against the virus or whether the drug might cause side effects. In the case of high levels there could be more side effects. Poor efficacy against HIV could result from low levels. In some special cases, your doctor may think that it might be a good idea to measure the blood levels of your drugs. Based on the result, your doctor may adjust your doses and then re-check your blood levels of drug to try and get them right where they want them. This is called "therapeutic drug monitoring" (TDM).

Measuring Drug Levels

Determining your drug levels from blood samples is usually only done in specialty labs. These labs use machine tests called "high performance liquid chromatography (HPLC or LC)" and sometimes "mass spectrometry (MS)". This is generally how it works: Your blood is collected in a tube. The tube is spun very fast in a centrifuge to get the red blood cells to sink to the bottom of the tube leaving the plasma on the top. This is done because the drug level is actually measured in the plasma.

Once at the lab, the drug needs to be purified from the plasma because the plasma is also full of a lot of other things besides the drug (sort of like filtering the drug out). This "filtering" step usually leaves a liquid with the purified drug in it. This purified drug portion is then put into an HPLC machine that filters the drug to make it even more purified and then pumps the drug to a detector.

There are a lot of different kinds of detectors. The common ones for HIV drugs are a mass spectrometer (MS) and an ultraviolet light absorbance detector (UV). An MS detects drugs according to how heavy they are (and also the positive or negative charge of the drug). The detector gives a signal based on how much drug is there. The signal is compared with the signals that the detector gives for known amounts of drug that are also put into the machine (called a standard curve). This gives the drug level in the patient.
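The standard-curve step is, at heart, a linear calibration: fit detector signal versus concentration for the known standards, then invert the fitted line for the patient's signal. A sketch with invented standard values (real assays also check linearity and quality controls):

```python
def fit_standard_curve(concs, signals):
    """Least-squares line: signal = slope * conc + intercept."""
    n = len(concs)
    mx = sum(concs) / n
    my = sum(signals) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(concs, signals))
             / sum((x - mx) ** 2 for x in concs))
    intercept = my - slope * mx
    return slope, intercept

# Hypothetical standards: known concentrations (mg/L) vs. detector signal
standards_conc   = [0.5, 1.0, 2.0, 4.0, 8.0]
standards_signal = [12.0, 24.0, 48.0, 96.0, 192.0]

slope, intercept = fit_standard_curve(standards_conc, standards_signal)

# Invert the line to read the patient's concentration off their signal
patient_signal = 60.0
patient_conc = (patient_signal - intercept) / slope
print(f"{patient_conc:.2f} mg/L")  # 2.50 mg/L
```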

Important Things About PK and TDM

One important point is that TDM is not really useful for nukes in most cases. This is because nukes must have three phosphate groups attached while inside cells in order to become active against HIV (the active forms are called triphosphates). Therefore, the best way to do TDM for nukes would be to measure the nuke-triphosphates that are in cells, not the plasma level of the nuke. But this is very hard to do, so TDM for nukes is not usually done.

Since nuke-triphosphates inside cells are really important for anti-HIV activity, it is important for researchers to measure the half-life of the triphosphate in patients to understand whether the nuke can be given once a day, twice a day, and so on. For many nukes, the half-life of the triphosphate in cells is quite a bit longer than the half-life in plasma, so the nuke can be given once or twice a day (see Figure 5: Plasma and Intracellular Half-lives of Select NRTIs).

This figure depicts the half-lives of select nukes. Note how the half-life in the plasma is always much shorter than the half-life within the cell. Two examples of this are shown with Viread and Emtriva.

As an example, Ziagen (abacavir) has a fast half-life (about 1.5 hours) in plasma, but the half-life of the triphosphate in cells is about 20 hours. So, abacavir can be given once a day.
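The once-a-day argument is just half-life arithmetic: the fraction of drug remaining after time t is 0.5^(t / t½). Using the half-lives quoted above for abacavir over a 24-hour once-daily interval:

```python
# Fraction of drug remaining after t hours, C(t)/C(0) = 0.5 ** (t / t_half)
def fraction_remaining(t_hours, t_half):
    return 0.5 ** (t_hours / t_half)

# Abacavir half-lives as quoted above: ~1.5 h in plasma,
# ~20 h for the triphosphate inside cells
print(fraction_remaining(24, 1.5))   # plasma: ~1.5e-5 (essentially gone)
print(fraction_remaining(24, 20.0))  # intracellular triphosphate: ~0.44
```

The plasma drug is essentially gone long before the next dose, but nearly half of the active triphosphate persists inside cells, which is why once-daily dosing still works.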

On the other hand, PIs and NNRTIs are not chemically changed in cells to become active, so the plasma levels can be used for TDM. But, TDM is not routinely used in the U.S. for several reasons. First, TDM has not really been studied much in patients, so doctors are not yet sure about TDM in all their patients.

Secondly, it is not yet clear exactly how to use the information TDM provides. There are some questions that are still unanswered regarding TDM, including:

  1. What are the target levels for efficacy in patients with resistant viruses? Right now, levels that are recommended in treatment guidelines are only for viruses that are not resistant. If a person has a resistant virus, precisely how much of the drug they should have in their body is unknown.
  2. How is it best to adjust doses to meet targets -- for example, should Norvir boosting be the main way to increase levels for PIs?
  3. Are Cmax levels useful for reducing toxicity?
  4. Is an expert needed to do TDM?
  5. And, should laboratories be required to pass the same quality assurance test to get official approval to do the levels?
This table provides suggested trough (Cmin) concentrations for people taking these drugs who do not have a resistant virus.

Although TDM may not be used routinely in all patients, there are some situations where TDM may be useful. These include: childhood, obesity, very small body size, elderly, pregnancy, liver or kidney diseases, and drug-drug interactions. Also, TDM may be used in patients with an unexpected adverse effect or poor efficacy. For these occasions, as mentioned above, there are suggested target levels for PIs and NNRTIs in situations where there is no drug resistance (see Figure 6: Suggested Minimum Target Trough Concentrations for Persons with Wild-type HIV-1).

Finally, if TDM is to be undertaken, there are some very important things to do. First, if the level is for efficacy, it is very important to get the level as close to the trough as possible. This is the best way to interpret the level.

If the level is for toxicity, and a Cmax is desired, it would be best to watch the dose being taken and to obtain the level thereafter. In general, it is very important to realize that the TDM test completely depends on accurately recording when the patient last took their dose and accurately recording when the blood was collected. Other drugs that might have been taken with the dose should also be recorded. Since the current state of TDM for HIV is in the development phase, it would be best to obtain expert advice if undertaking TDM.


The White Coat Ceremony for incoming medical students began in 1993 when Dr. Arnold P. Gold instituted it at the Columbia University College of Physicians and Surgeons in New York. Dr. Gold initiated the practice because he believed that medical students should recognize the profession’s standards and responsibilities before they began formal training. He said students should declare their commitment and accept their obligation to the profession before starting medical school.

Now, the White Coat Ceremony takes place in 20 countries, including at nearly all AAMC-accredited U.S. medical schools. The Gold Foundation funds the program through grants.

At the AUC White Coat Ceremony, incoming students typically receive their first white coats from AUC faculty as family and friends cheer them on. The entire incoming class then recites the Oath of Physicians, a modern version of the Hippocratic Oath. During the COVID-19 pandemic, AUC is holding virtual ceremonies.

Most medical students wear short, hip-length white coats until they enter residency, when coats reach down to the knee. The contrast between the short white coats worn by medical students and the full-length ones worn by most physicians is a long-standing tradition and a way for patients to identify the role of each care provider, according to Dr. James Feinstein, author of “Short White Coat.”


A work solely intended for masturbatory purposes is generally not regarded as erotic art, although exceptions exist.

Erotica and pornography are excellent tools to study the rise of new media and new technologies. Printing technology gave rise to erotic fiction and erotic engravings, photography begot erotic photography, film begot erotic film, VCR technology liberated the pornographic film from seedy theatres, the internet thrives on erotic imagery and dating services. Examples abound. Colin Wilson, for example, traces the development of the novel in relation to the human imagination and erotic fiction in his The Misfits: A Study of Sexual Outsiders.

One more way of looking at erotica and pornography (most of the time the terms are interchangeable) is the sexual act becoming aware of itself: nature turning into culture, sex becoming self-referential.

Since pornography and erotica are genres that provoke physical reactions, what I call "body genres", they are generally held to be "low" cultural manifestations. However, I like to believe that this list of writers, visual artists, filmmakers, photographers and publishers prove that works of high quality can be found in these badly regarded and maligned "low" genres.

Erotica also carries the connotation of softcore, whereas pornography carries the connotation of hardcore. [May 2006]

The evolutionary history of human populations in Europe

I review the evolutionary history of human populations in Europe with an emphasis on what has been learned in recent years through the study of ancient DNA. Human populations in Europe ∼430-39kya (archaic Europeans) included Neandertals and their ancestors, who were genetically differentiated from other archaic Eurasians (such as the Denisovans of Siberia), as well as modern humans. Modern humans arrived in Europe by ∼45kya, and are first genetically attested by ∼39kya, when they were still mixing with Neandertals. The first Europeans who were recognizably genetically related to modern ones appeared in the genetic record shortly thereafter at ∼37kya. At ∼15kya a largely homogeneous set of hunter-gatherers became dominant in most of Europe, but with some admixture from Siberian hunter-gatherers in the eastern part of the continent. These hunter-gatherers were joined by migrants from the Near East beginning at ∼8-9kya: Anatolian farmers settled most of mainland Europe, and migrants from the Caucasus reached eastern Europe, forming steppe populations. After ∼5kya there was migration from the steppe into mainland Europe and vice versa. Present-day Europeans (ignoring the long-distance migrations of the modern era) are largely the product of this Bronze Age collision of steppe pastoralists with Neolithic farmers.

4. Is the random model the worst possible model?

Not really. A random model is a classifier that predicts an observation as class YES or NO at random. In this case, we will get 50% of predictions correct. The AUC would be 0.5, and TPR would equal FPR at every threshold. But clearly, such a model would not be of much value to us.

The worst possible model would be one that predicts all the class YES observations as class NO and all the class NO observations as class YES. The AUC of such a model would be 0.
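Both claims (AUC 0.5 for a random model, AUC 0 for a perfectly inverted one) follow from the rank interpretation of AUC: the probability that a randomly chosen positive receives a higher score than a randomly chosen negative, with ties counted half. A small sketch:

```python
def roc_auc(labels, scores):
    """AUC as the probability a random positive outscores a random
    negative (ties count half) -- the Mann-Whitney interpretation."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
print(roc_auc(labels, [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]))  # 1.0 (perfect)
print(roc_auc(labels, [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]))  # 0.0 (perfectly inverted)
print(roc_auc(labels, [0.5, 0.5, 0.5, 0.5, 0.5, 0.5]))  # 0.5 (no information)
```

Note that an AUC-0 model is, perversely, informative: flipping its predictions yields a perfect classifier, whereas the AUC-0.5 random model carries no usable signal at all.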

In Conclusion

• Whenever you are building a classification model that predicts probability of an observation belonging to a class, plot its ROC curve to visualise its performance. Do not go with the default threshold of 0.5. Use the ROC curve and domain knowledge to figure out the best threshold for a model.

• When the class distribution in the data is imbalanced or cost of false positives and false negatives is different, ROC curves are very helpful. They help us visualize a trade-off between TPR and FPR and thus help us to arrive at a threshold that minimizes the mis-classification cost.

• The shape of ROC curves contains a lot of information about the predictive power of the model.

• The ROC curves of different models can be compared directly, either overall or at particular thresholds.

• The area under the curve (AUC) can be used as a summary of the model skill and can be used to compare two models.

I hope you’ll be able to make better use of ROC curves now. Looking forward to hearing from you :)
