Wednesday, July 31, 2019

Essay About That Was Then, This Is Now

In S.E. Hinton's That Was Then, This Is Now, Bryon Douglas must go through a personal journey: his friend Mark is a reckless person who endangers Bryon and forces him to make a life-changing decision in order to fully mature. I once had to go on a profound personal journey of my own to adapt to middle school.

In the "then" period, Bryon was influenced by Mark. For example, on page 23, Bryon and Mark have the following conversation: "Still in the mood for a little action?" "Sure." By "action," Mark meant fighting. This shows how Mark influenced Bryon. He was probably so influenced because Mark was his best friend from childhood. Mark grew into a manipulative and disarming teen. Bryon went along with illegal and irresponsible actions, but he felt bad about them; Mark, however, had no regrets.

In Bryon's current lifestyle, he is a calm person who stands up for what is right and legal. He is no longer friends with Mark, because Mark is in prison after Bryon called the police on him for selling drugs. On page 154 Bryon narrates, "I ended up with straight As that semester…" After his transformation, he had better judgment. I, too, developed and changed in order to survive middle school by pacing my work. I became more earnest about how much time I had to complete something and when I should start.

In conclusion, Bryon matured because he abandoned Mark and made a personal exploration of who he really is; I made a rough choice for the better to adapt to middle school. Bryon made his personal journey by calling the police on Mark as an act of self-preservation: he saw Mark doing something truly dangerous and realized he did not want to be Mark's friend. I knew it was a bad idea to put things off to the last minute, even though it pained me to give up my free time. Change is inevitable.

Power and Control Comparison Essay

Examine the way Shakespeare presents power in the character and actions of Lady Macbeth. In this controlled assessment I will examine the power, actions and emotions of Lady Macbeth throughout the play. 'Macbeth' by William Shakespeare is a very unusual play; the characters defy your expectations, especially Lady Macbeth. Shakespeare drew a clear contrast between Lady Macbeth and Macbeth, which is a very challenging yet effective technique. Lady Macbeth is a very different character; her personality shines throughout the play. She wasn't your usual woman of that time period: most women then were very obedient, shy and quiet, and obedient towards their fathers and husbands. Lady Macbeth was the complete opposite; she was the dominant person in her and Macbeth's relationship. Lady Macbeth was very controlling of Macbeth. We know this because she tells him, "Infirm of purpose." This quote suggests that Lady Macbeth was trying to control Macbeth and show that she had power over him. Lady Macbeth aimed to be the most encouraging yet powerful partner, although she ended up acting malicious and self-centred. There is a clear contrast between the characters Lady Macbeth and Macbeth; it makes you think about why she was so powerful and why Macbeth allowed her to overrule his life. Macbeth seems very conscious of Lady Macbeth and her behaviour towards him and others. Macbeth comes across to the audience as quite weak and unstable; we don't know if it's because of the way he is treated by Lady Macbeth, or if that's just his nature. Then again, Shakespeare may have written the play to insinuate that Lady Macbeth treats Macbeth the way she does because he allows it, being generally just an easy-going character. 
On the other hand, Lady Macbeth gets tired of Macbeth being too easy-going towards her and their surroundings. We know this because of "But screw your courage to the sticking-place": basically, Lady Macbeth is telling him to man up and start acting braver and more courageous, like a 'real' man. The audience in the gallery must each have had a different experience watching Shakespeare's plays, as there were different sections of the gallery. They were probably quite shocked because of how different the story line was. Also, back then all of the characters would have been played by males, which must have made it harder for the audience to feel the emotions of Lady Macbeth and understand how Macbeth really felt when his own wife treated him like her slave. We don't know if 'Macbeth' was based on a true story, but Shakespeare wrote it very well; it sounded very real, and we can easily tell what type of person Lady Macbeth was and how she was treating Macbeth. Lady Macbeth is a very stubborn character; she likes to have everything her way, with no one telling her otherwise. She was a foolish, selfish woman who wanted everything for her own wealth and fortune, but she would never do it with her own hands, so she wanted Macbeth to kill the king (King Duncan) so that she could be queen and Macbeth would be king. Lady Macbeth didn't care that she was killing someone; she just wanted the king dead, and thought no one would notice or find out it was Macbeth. Macbeth was very hesitant to kill Duncan; as much as he would love to be king, and he was next in line to the throne, he just wasn't sure if he was the right person for the job. Lady Macbeth wasn't pleased with Macbeth trying to back out of the situation. We know this because of "When you durst do it, then you were a man": Lady Macbeth is basically telling Macbeth to stop worrying and waiting around, and to just kill the king. 
Lady Macbeth was very blunt and truthful to Macbeth, which was sometimes perceived as nasty and hurtful. How does the poem present power and control? In this essay I will be examining how Sassoon presents power and control, in comparison to 'My Last Duchess' and 'Base Details'. Robert Browning shows power and control in his poem 'My Last Duchess'. We know this because of this quote from the poem: 'That's my last duchess painted on the wall'. This quotation represents power and control by the way it has been written: 'that's my' implies that he owned the duchess, that she was his property, more an object to show off his power towards women. 'My last duchess' suggests there has been more than one duchess. He seems to take a lot of pride in being more or less a ladies' man. As a duke, and a very wealthy man, he is wary of others and wants people to know his authority. Being a duke, he feels as though he can control others, mainly his wife, the duchess. Browning shows another side of the duke, which is quite unexpected, especially to the readers. The duke seems very strong and powerful, yet a streak of jealousy shines through in this quotation: 'she thanked men'. This, to me, implies jealousy, as the duke is pointing out that she talks to other men, which he clearly isn't impressed with. This shows mainly control: the duke treats his wife like an object, and she can't talk to any men other than her husband. Because of the duke's controlling outlook on life, he can't bear to see his wife talking to other men, because I think he fears he could lose her and his pride. In this quotation, Browning portrays the duke living up to his high authority: 'I gave commands'. This quotation sums up the duke's attitude towards others, and is probably the best example of power and control in this poem. Back in the 16th century, men had most of the control, so it wasn't as shocking then as it is now. 
The duke looks down on others and expects people to bow down to him. Browning has portrayed power in the duke well, as we can clearly see that the duke doesn't agree with others getting attention or ignoring his authority, so he makes his importance loud and clear. Browning makes the readers feel sympathetic towards the duchess in this quotation: 'She had a heart - how shall I say - too soon made glad, too easily impressed'. This quotation tells us more about the duchess: she is a woman with a heart of gold who wouldn't want to hurt anyone or get on the wrong side of anyone. Knowing this about the duchess makes us think more about the way she is treated by the duke; since she doesn't displease anyone, the duke must simply get away with murder. The duchess obviously loves the duke, and is impressed by the smallest of compliments. In this case their relationship clearly shows that love is blind. 'My Last Duchess' and 'Macbeth' are very similar, in that they both examine different structures of power and control. Browning shows power between a couple: the duke has all of the power and control in the relationship. We know this because of the earlier quotation 'I gave commands', which is a very powerful quotation; it makes the readers feel sympathetic towards the duchess and others surrounding the duke. This quotation also shows that the relationship is built on power and control: the duchess won't do anything to aggravate the duke, as she is probably scared of the outcome. Siegfried Sassoon examines the theme of power and control in his poem 'Base Details'. The quotation 'And speed glum heroes up the line to death' represents power and control in that the majors couldn't care less about the difficult conditions the soldiers have to face. It's clear to me that, although the soldiers have earned full authority, the majors think they are a cut above the soldiers, taking full authority over them and treating them like dirt. 
By using the specific word 'death', Sassoon makes us worry that the soldiers, too young or too old, will eventually be left behind to die. This is the harsh reality of WW1, as there would have been many weak soldiers left in harsh conditions to die painfully; there weren't as many doctors and nurses on the scene as there are today. 'Base Details' shares similar themes with 'Macbeth'. In the quote 'poor young chap... I'd say... I used to know his father well', the majors show they don't care about the man: they can easily say 'poor young chap' but didn't seem to help him in any way, shape or form, and by saying 'I used to know his father well' they show how inconsiderate of others they are. It is very selfish that the youngest men are put on the front line by the majors; it seems the majors think the young men aren't experienced enough to do anything else, so they are put in the most life-threatening position.

Tuesday, July 30, 2019

Feminism in a Doll House Essay

Feminism in A Doll House. In Henrik Ibsen's A Doll House, Nora Helmer is a prime example of a woman's role in the 19th century, that being that she was more for show than anything else. Nora's husband, Torvald, treats his wife like a living doll and uses pet names for her rather than her actual name, further establishing her position as nothing more than a toy for Torvald. Nora's purpose in her own home is to be subservient in a mental capacity, as her husband often regards her more as a child than an adult, punishing her for simple, silly matters such as eating sweets. This treatment, however, is not new for Nora, as it is revealed that her father treated her quite similarly. When the play opens, Nora has just returned from Christmas shopping and we are given a description of her home: "A comfortable room, tastefully but not expensively furnished" (Doll act I). Further explanation reveals details which tell the audience that the financial situation for Nora and Torvald is good. As with most things, the Helmer home is nothing more than a facade for Nora. One author says, "[T]he house is a mere container, or doll's house, for Nora, who spends her time entertaining or nervously accommodating (as her nickname "the squirrel" implies) her demanding husband — rather than decorating, designing, or even "taking charge of" her own life" (Connie Pedoto). It's from this that the reader first gets the idea that appearance means a lot to the Helmer family. Early on it is made very evident, through Nora's conversation with her husband, that she is meant to be the face of their marriage. Ibsen introduces the fact that Nora is not allowed sweets, something that seems strange in this day and age but was not uncommon in the 19th century. It shows the power a husband had over his wife at that time, as well as the submissive behavior women adopted in order to have a proper marriage. 
Ibsen also reveals that Nora and Torvald's children have been raised by a nanny their whole lives, further establishing Nora's role as a trophy. Because Nora has been absent in the role of raising her children, it is safe to say that she did not have the ability to be a proper mother; not because she did not love her children, but because she never had the chance. When she shows Torvald all of the wonderful toys she has purchased for their children, it is obvious that her excitement stems from the fact that it is the only thing she can do to show her love for them. This can be compared to Nora's belief that money fixes everything, shown when the author writes: "Yes, yes, it will. But come here and let me show you what I have bought. And all so cheap! Look, here is a new suit for Ivar, and a sword; and a horse and a trumpet for Bob; and a doll and dolly's bedstead for Emmy, – they are very plain, but anyway she will soon break them in pieces. And here are dress-lengths and handkerchiefs for the maids; old Anne ought really to have something better" (Doll act I). Though the Helmers have not always had money to spend in such a manner, it is obvious that Nora has taken this newfound fortune as a way to express her love and gratitude to those around her through gifts rather than words or physical affection. When Mrs. Linde is introduced, the readers are given a different view of women in this society. Nora must hide every crime she has committed, whether that be the forgery of her father's signature or sneaking a macaroon, because she is married, while Mrs. Linde no longer has a husband to answer to. Mrs. Linde is treated quite differently than Nora: as a widow, the expectations placed on married women no longer apply to her. She is given the opportunity to work at a bank in the position that was once held by Krogstad. 
This is surprising, as this was the peak of women's suffrage, and women were not usually allowed to hold positions that a man did, let alone replace one. When Nora sees the freedom her childhood friend has acquired upon losing her husband, the resolve she had in keeping her secret about the forgery begins to wane, though the audience does not see this until much later. Torvald's treatment of Nora may seem harsh in comparison to the relationship between husband and wife these days, but at the time it was very common. This is why Nora plays along and enjoys the little games her husband plays with her. At the end of the play Nora's misdeeds come to light and she is forced to admit what she has done. Torvald reacts as expected and verbally abuses her before deciding that the matter should be forgotten: all is forgiven and they will go back to their normal lives. It is at this point that Nora realizes that the life and marriage she has been fighting so hard to protect by keeping her secret is beyond saving. Though this is not the first time she has defied her husband, it is the first time she has done so without trying to conceal the act. When she decides to leave, it is obviously a shock to Torvald, who has always believed his wife to be obedient, especially when he gives her the chance to forget all about it. It is also shocking for the audience, as up to this point Nora has made it quite clear that she believes she would die without the financial support of her husband. Nora was a typical wife in the 19th century. Nothing she did was uncommon, and the fact that she came to see the truth about her marriage speaks volumes about the times and the political issues surrounding women in those times. Joan Templeton, author of Ibsen's Women, says "Nora's doll house and exit from it have long been principal international symbols for women's issues" (111). 
At the beginning of the play the audience would never expect Nora to make such a bold choice as to leave her husband and children, but as other characters are introduced, their help or, in Krogstad's case, blackmail leads her to the decision that she and her family would be better off without her, as she has realized her true role in her marriage: she doesn't have one. She is nothing more than a living doll in her own home, and it took her husband discovering the truth, that his wife is not as obedient as he believes, to bring her to this conclusion. Mrs. Linde is the opposite of Nora. She shows the freedoms of a woman who is not married. Though times are hard for her, as she was left with no money, it is obvious that she prefers it that way. For Mrs. Linde, marriage was financial security, but now that that has been taken from her, she takes it upon herself to find a job, and uses the friendship she has with Nora, who is still married and subservient to her husband, in the hopes that Nora will be able to coax Torvald into giving Mrs. Linde a job. Nora does this as a favor to a friend, but when Krogstad threatens to reveal the truth about the forgery, Nora is quick to beg Torvald not to give Mrs. Linde the position that formerly belonged to Krogstad. She does this because, as a woman, she knows men to be the more dominant sex and fully expects him to go through with it. It is surprising to see Torvald deny Nora's request, not because she is his wife, but because Mrs. Linde is a woman, and in the 19th century it was not very common for a woman to be chosen for a job over a man. Feminism is a large part of Henrik Ibsen's A Doll House, which perfectly portrays the role of women in the 19th century. Through Nora's journey of self-discovery she realizes that her father treated her like a delicate china doll just as her husband does now that she is a full-grown adult, and at the conclusion of the play she takes it upon herself to break from that cycle. Like Mrs. Linde, Nora takes this opportunity to become her own person and frees herself from her controlling marriage.

Monday, July 29, 2019

Assignment1+2 Essay Example | Topics and Well Written Essays - 1500 words

Leadership basically refers to people who have the capacity to bring about changes in other people. Organizations are characterized by the unique culture that is inherent in their overall working. The fast-changing pace of technology can be observed in the study of organizations and the changing role of leadership within them. The role of leaders, managers and administrators becomes more challenging when new strategy and policy decisions are introduced. Good leadership ensures effective communication with the employees and a shared vision of the new strategy, promoting better understanding among the employees and easier adaptability to change. Hence, knowledge of core psychoanalytical concepts becomes an important tool for leadership in understanding the organizational behavior that significantly impacts group dynamics. The understanding of psychoanalytical concepts provides invaluable information regarding the socio-psychological factors that adversely affect the performance outcomes of the workforce. The unconscious and repression, transference, envy and rivalry are a few major concepts that considerably influence human nature and consequently the behavior of the organization. Freud, the eminent psychoanalyst, contributed extensively towards the understanding of these core psychoanalytical ingredients so that one is better able to analyze human behavior within the prescribed social norms. Social scientists have corroborated that psychoanalytical perspectives are important tools for understanding the hidden dynamics of human relationships, especially with regard to corporate culture, social defenses, leadership imperatives, motivation and other paradigms associated with organizational behavior (Levinson, 2002; Gabriel, 1999). This understanding equips the leadership with the necessary

Sunday, July 28, 2019

Hazard and Vulnerability Analysis SLP 2 Essay

The following hazard and vulnerability table is reconstructed from the flattened source excerpt, which is truncated at both ends. Columns give events, killed, affected, and damage (US$000); the figures in brackets are the rank assigned to each column total, followed by the average rank.

[hazard type truncated]
  ave. per event: 809.6 killed, 3,164.9 affected, 56,250 damage
  Total: 55 events [2]; 4,345 killed/event [1]; 24,319.1 affected/event [2]; 3,207,790.4 damage [1]; average rank 1.5
Epidemic
  Bacterial infectious diseases: 2 events, 1 killed, 534 affected (ave. per event: 0.5 killed, 267 affected)
  Viral infectious diseases: 1 event, 2,000,000 affected (ave. per event: 2,000,000 affected)
  Total: 3 events [4]; 0.5 killed/event [4]; 2,000,267 affected/event [1]; no damage recorded [4]; average rank 3.25
Extreme temperature
  Heat wave: 3 events, 138 killed, 18,300 affected (ave. per event: 46 killed, 6,100 affected)
  Total: 3 events [4]; 46 killed/event [3]; 6,100 affected/event [3]; no damage recorded [4]; average rank 2.5
Flood
  Unspecified: 31 events, 12,814 killed, 7,015,269 affected, 268,300 damage (ave. per event: 413.4 killed, 226,299 affected, 8,654.8 damage)
  Flash flood: 1 event, 21 killed, 25,807 affected, 1,950,000 damage
  General flood: 12 events, 197 killed, 99,266 affected, 1,814,000 damage (ave. per event: 16.4 killed, 8,272.2 affected, 151,166.7 damage)
  Storm surge/coastal flood: 2 events, 34 killed, 384,143 affected, 7,440,000 damage (ave. per event: 17 killed, 192,072 affected, 3,720,000 damage)
  Total: 46 events [2]; 467.8 killed/event [2]; 452,450 affected/event [2]; 5,829,821.5 damage [1]; average rank 1.75
Mass movement (wet)
  Avalanche: 1 event, 13 killed
  Landslide: 20 events, 989 killed, 25,706 affected, 210,000 damage (ave. per event: 49.5 killed, 1,285.3 affected, 10,500 damage)
  Total: 21 events [3]; 62.5 killed/event [3]; 1,285.3 affected/event [3]; 10,500 damage [2]; average rank 2.75
Storm
  Unspecified: 24 events, 1,890 killed, 192,814 affected, 453,500 damage (ave. per event: 78.8 killed, 8,033.9 affected, 18,895.8 damage)
  Local storm: 6 events, 27 killed, 100,499 affected, 363,000 damage [entry truncated]

Saturday, July 27, 2019

Reflecting Imaging Essay

Cameras form the central part of understanding image formation processes. The analogue camera marks the start of modern imagery, with its components remaining the same over the years. The principle of using light reflected from the object still forms an integral part of image formation. Some cameras, such as those used at night for infrared radiation detection, are among the few types that do not use reflection from the target object. The similarity between human eyes and digital cameras is very clear. Capturing, storage and display of images have improved over time, with current images stored in data form and displayed on the camera. In the article, digitisation and the processes involved, such as quantisation and sampling, are explained, along with the terms and problems associated with digitisation. Imaging is a form of visual representation of an object, or reproduction of the same visual object. Reflection is the property of waves changing direction by bouncing off the surface of an object they cannot pass through or be absorbed by. The bouncing of light is the most common case, and it is important in image formation, even for human eyes. Reflected-light imaging is the process of forming images when electro-optical waves bounce from the target to the camera or recording device, whereas transmission imaging involves waves passing through the object before reaching the camera or recording device. In this imaging technique, the images produced depend on the absorption and reflective properties of the object at the specific wavelength of the incident light. Classification leads to an image being digital or analogue based on its properties. A digital image is a visual illustration of an object in an electronic form that can be manipulated and stored by electronic devices, as a 2D image I[r, c]. These kinds of images are tangible but have
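The digitisation steps the article describes, sampling a continuous scene onto a discrete r x c grid and then quantising each sample to an integer grey level, can be sketched in a few lines. This is a minimal illustration in Python/NumPy; the scene function and grid size are assumptions for demonstration, not taken from the article:

```python
import numpy as np

# A stand-in for continuous scene brightness as a function of (x, y),
# returning values in the range [0, 1]. Purely illustrative.
def brightness(x, y):
    return 0.5 + 0.5 * np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)

# Sampling: evaluate the scene on a discrete grid of rows x cols points,
# producing the 2D array I[r, c] the article refers to.
rows, cols = 4, 4
ys, xs = np.meshgrid(np.linspace(0, 1, rows),
                     np.linspace(0, 1, cols),
                     indexing="ij")
samples = brightness(xs, ys)

# Quantisation: map each real-valued sample to one of 256 integer grey
# levels, the usual 8-bit representation of a digital image.
I = np.round(samples * 255).astype(np.uint8)
print(I.shape)  # (4, 4)
```

Increasing `rows`/`cols` refines the sampling, while increasing the number of grey levels refines the quantisation; both choices trade storage against fidelity.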

Friday, July 26, 2019

Human Sexuality Essay

Whether it is positive or negative, the power of research and its importance are undeniable and relevant. For any researcher, the first question is what long-term impact the individual's work will have on the ability for greater understanding. Articles and research papers are oftentimes written by those who seek to research issues and then wish to deliver what they find to the rest of the world. Others may wish to produce articles assessing the impact of the work of others, as is the case with John Bancroft. In his work 'Alfred C. Kinsey and the Politics of Sex Research', Bancroft looks at the work done by Kinsey and the response of others to the research itself. In regard to Kinsey, "It was evident from his own research, and has been confirmed in various ways since, that major changes in sexual behavior had been underway through much of the first half of the 20th century" (Bancroft, p.2). This statement in itself gives credence to the validity of Kinsey's work and shows the need for greater study and debate. The article discusses the attention paid to contraception, and how the debate would consider, for instance, the use of contraception and its ability to control the size of average families. From a social standpoint, there were those who felt it was important to have the opportunity to be aware of such issues, while others thought that an issue like sexuality should be kept in the confines of the home, in the bedroom between a man and a woman. While it would have been easy to single out Kinsey for his research, it is important to note that "He was not the first to report results of sex surveys in the US" (Bancroft, p.3). With that in mind, it is fair to classify Kinsey as one of many who put together studies on the matter at hand. 
In regard to assessing Kinsey's impact, Bancroft asserts, "But one clear part of Kinsey's legacy is that sex became less mysterious" (Bancroft, p.4). When certain things have not been discussed in any great detail before, they can oftentimes seem unknown, even dangerous, to some. Kinsey's work took away the disguise and uncertainty about regular human sexual activity and revealed it to be nothing more than typical behavior among healthy human beings. His greatest desire was to reveal the distinct natures of male and female human beings and how each approaches sexual intercourse. Sometimes, after research has been performed, the results lead to the production of revised guidelines. This was true of Kinsey's work. Based on his findings, "The American Law Institute, after much debate, published its revised Model Penal Code in 1995. This was clearly influenced by Kinsey's findings" (Bancroft, p.4). The author further details that, as a result of Kinsey's work, the revised Model Penal Code made it so that such things as being a homosexual, living together when not married, and the sexual activity of two willing participants were no longer seen as crimes. With such a revision taking place, many would consider Kinsey's influence in the area of sexual activity to be considerable and not to be

Thursday, July 25, 2019

Common Law Duty of Care and the Liability of Employers for References Essay

At the same time, the university has a duty of care even for the students it does not know in person. Duty is the first element of negligence, and the claimant must establish that a duty of care was breached. In this case, negligence can be the failure of the university to act in the reasonable way it would have acted in any circumstances. The university has the duty to take liability in case of any violation of the common law duty of care. The law can apply between the university and students or employees with no direct relationship, and can be formalized as a social contract; this is the responsibility the university holds to society. The duty of care arises in cases where an employee is harmed mentally, physically, or economically. A duty of care is a duty of taking care to avoid actions which one can foresee would cause injury to persons who are close and directly affected by the act (Climenson 2010, 30). The university is expected to carry out its responsibility, and that to its employees, with care. This duty of care can be breached through an individual action or through the university's failure to act in the activities of the institution. A duty of care exists when there is an existing relationship between two parties; the university and the students have a relationship built on trust. University students and employees should be qualified when they leave the institution to face potential employers. A breach of conduct comes into perspective when the standard of care that ought to have been taken falls below the standards set. This can be negligence of duty if the students are not given the proper training expected (Efstathios 2006, 49). It is the duty of the university to make sure that the organization's activities are met and the standards of ethical practice are followed. 
The university should make sure that there is a strategy that is reviewed and updated so that necessary actions are taken in the likelihood of a risk. The institution should be aware of the duty of care it takes, collectively or individually, in relation to the employer, and of the care owed to the assets and reputation of the employees. The university must act in the interest of the institution, not for personal interest or that of another organization. When the university works and acts conscientiously in carrying out its duties this way, it limits its liability in cases of any loss, harm or damage caused through a breach of the duty of care (Ian 2007, p 37). An employer's duty toward their employees, in this aspect the university's duty to its current students and employees, is to provide and maintain a safe environment for learning and work, a safe system of both learning and working, and competent fellow employees. The university can delegate duties or functions to nominated employers but cannot delegate legal responsibility (James 2006, p 75). Employers can be liable for

Food and Culture Individual Country Project Lab Report

The Mexican-American War took place between Mexico and the US between 1846 and 1848, in which the US was attempting to take control over independent Texas. In the end, the US army defeated the Mexicans, leading to the signing of a peace treaty covering Texas, New Mexico and California, in which Mexico lost almost half of its land. The events that took place in Mexico from independence onward fostered the economic, political and social assimilation of different social groups within the nation and made the state and nation building stronger. The most significant civil war in Mexican history is the Mexican Revolution, which took place in 1910. The war led to an estimated loss of about one million Mexican lives. It ultimately ended with the formation of the new constitution at the beginning of 1917, but it still took a few decades for peace to finally settle in the nation. The reconstruction after the revolution affected all aspects of society and gave a totally new significance to the nation. Put simply, Mexican culture stands out from other cultures. The differences and variations that one can find in Mexico can be incomprehensible. Mexican people are generally renowned for their artistic and creative nature. In addition, they take great pride in culinary matters. It is not strange to find people in a hot debate about food; it is what defines them as a culture. The dances are also unique to the nation, although most modern societies are adopting them and changing them in one way or another (Sanchez, 28). Unlike its neighbors, the dominant language in Mexico is Spanish, which can be said to be a consequence of colonization by Spain. Mexico possesses a comprehensive and refined culinary culture, with a vast variety of local dishes. 
However, there are three main staples that constitute the heart of most Mexican food: beans, corn and hot peppers, or as commonly

Wednesday, July 24, 2019

Importance of Strategic Planning and Management in the Business Essay - 1

Importance of Strategic Planning and Management in the Business Environment Paper - Essay Example develop better avenues in the quest to find competitive offerings in line with the products I would be making for my clientele. Perhaps it would be wise to use localized data so that the name chosen for the bakery matches the customers' desires. More important is the fact that the bakery could deliver the goods when it comes to building an association with quality, taste and superior service, first of all among its local customers. Hence this bakery would have a strategic plan in place, and the four functions of management would be implemented within it so that the business could succeed. As far as the strategic planning and management of this business are concerned, the bakery must build on the mechanisms that have been employed from the very beginning. This bakery might be new to the business, but its input should be given significance. If I want to do something different from the other bakeries in the business, I must have room to apply the strengths that I have learned or acquired over time. I understand that the selected target market matters more than anything else, and looking after its needs would be deemed quintessential from my bakery's perspective. What is needed now is to ensure that research mechanisms are not only addressed properly but also incorporated into the working levels of the bakery itself. This would greatly benefit a bakery that is on an upsurge with a new vision in the form of my management and planning skills. I would adopt a strategic plan, as this is something on which I can rest the initiatives I have already taken.
This strategic plan would give me a vision of what my course of action will be and how I can maneuver my troops in the coming days. It will give me a better understanding of the resources that are available at my

Tuesday, July 23, 2019

Geography is no longer relevant in the context of a homogenising world Essay

Geography is no longer relevant in the context of a homogenising world Discuss - Essay Example However, Dicken (2011: p41) notes that nation-state borders continue to dominate global relations, with nations continuing to enforce state boundaries, sometimes using violence to do so. Moreover, challenges in overcoming economic and technological barriers continue to shape how different populations separated by geographical location access healthcare and education, for example. Therefore, although the relevance of geography seems to have been greatly diminished as a result of a homogenising world, this paper will argue that how people live is still significantly influenced by geographical factors. Aiello and Pauwels (2014: p280) support the concept of an increasingly homogenised world, noting that global flows and exchanges of capital, services and goods, transfers of technology and human movements have resulted in a more uniform and standardized world culture, as acculturation leads to a universal culture. In this case, increased interconnectivity between cultures and countries contributes to the formation of a more homogenous culture, with the adoption of a more Euro-American lifestyle and social organization model. Modern communications have played a fundamental role in homogenisation, as the internet enables people to read information about foreign nations as they would about their own locality. People all over the world are now exposed to the same news every day, leading to a homogenisation of ideas and perspectives. Increased international travel has greatly influenced homogenisation as well, with people from South East Asia, for example, travelling to Europe and North America to find jobs. Moreover, increased tourist flows, specifically from developed countries, have encouraged hospitality industries across the world to provide typical Euro-American services, contributing to a more homogenous global community (Aiello & Pauwels, 2014: p281). Popular culture has also

Monday, July 22, 2019

Approaches to the Analysis of Survey Data Essay Example for Free

Approaches to the Analysis of Survey Data Essay

1. Preparing for the Analysis

1.1 Introduction

This guide is concerned with some fundamental ideas of analysis of data from surveys. The discussion is at a statistically simple level; other more sophisticated statistical approaches are outlined in our guide Modern Methods of Analysis. Our aim here is to clarify the ideas that successful data analysts usually need to consider to complete a survey analysis task purposefully. An ill-thought-out analysis process can produce incompatible outputs and many results that never get discussed or used. It can overlook key findings and fail to pull out the subsets of the sample where clear findings are evident. Our brief discussion is intended to assist the research team in working systematically; it is no substitute for clear-sighted and thorough work by researchers. We do not aim to show a totally naïve analyst exactly how to tackle a particular set of survey data. However, we believe that where readers can undertake basic survey analysis, our recommendations will help and encourage them to do so better.

Chapter 1 outlines a series of themes, after an introductory example. Different data types are distinguished in section 1.2. Section 1.3 looks at data structures; simple if there is one type of sampling unit involved, and hierarchical with e.g. communities, households and individuals. In section 1.4 we separate out three stages of survey data handling – exploration, analysis and archiving – which help to define expectations and procedures for different parts of the overall process. We contrast the research objectives of description or estimation (section 1.5) and of comparison (section 1.6), and what these imply for analysis. Section 1.7 considers when results should be weighted to represent the population, depending on the extent to which a numerical value is or is not central to the interpretation of survey results.
In section 1.8 we outline the coding of non-numerical responses. The use of ranked data is discussed briefly in section 1.9.

In Chapter 2 we look at the ways in which researchers usually analyse survey data. We focus primarily on tabular methods, for reasons explained in section 2.1. Simple one-way tables are often useful, as explained in section 2.2. Cross-tabulations (section 2.3) can take many forms and we need to think which are appropriate. Section 2.4 discusses issues about 'accuracy' in relation to two- and multi-way tables. In section 2.5 we briefly discuss what to do when several responses can be selected in response to one question. Cross-tabulations can look at many respondents, but only at a small number of questions, and we discuss profiling in section 2.6, cluster analysis in section 2.7, and indicators in sections 2.8 and 2.9.

© SSC 2001 – Approaches to the Analysis of Survey Data

1.2 Data Types

Introductory example: on a nominal scale the categories recorded, usually counted, are described verbally. The 'scale' has no numerical characteristics. If a single one-way table resulting from simple summarisation of nominal (also called categorical) scale data contains frequencies:

Christian 29, Hindu 243, Muslim 117, Sikh 86, Other 25

there is little that can be done to present exactly the same information in other forms. We could report the highest frequency first as opposed to alphabetic order, or reduce the information in some way, e.g. if one distinction is of key importance compared to the others:

Hindu 243, Non-Hindu 257

On the other hand, where there are ordered categories, the sequence makes sense only in one, or in exactly the opposite, order:

Excellent 29, Good 243, Moderate 117, Poor 86, Very Bad 25

We could reduce the information by combining categories as above, but we can also summarise, somewhat numerically, in various ways.
For example, accepting a degree of arbitrariness, we might give scores to the categories:

Excellent 5, Good 4, Moderate 3, Poor 2, Very Bad 1

and then produce an 'average score' – a numerical indicator – for the sample of:

(29 × 5 + 243 × 4 + 117 × 3 + 86 × 2 + 25 × 1) / (29 + 243 + 117 + 86 + 25) = 3.33

This is an analogue of the arithmetical calculation we would do if the categories really were numbers, e.g. family sizes.

The same average score of 3.33 could arise from differently patterned data, e.g. from rather more extreme results:

Excellent 79, Good 193, Moderate 117, Poor 36, Very Bad 75

Hence, as with any other indicator, this 'average' only represents one feature of the data and several summaries will sometimes be needed.

A major distinction in statistical methods is between quantitative data and the other categories exemplified above. With quantitative data, the difference between the values from two respondents has a clearly defined and incontrovertible meaning, e.g. "It is 5°C hotter now than it was at dawn" or "You have two more children than your sister". Commonplace statistical methods provide many well-known approaches to such data, and are taught in most courses, so we give them only passing attention here. In this guide we focus primarily on the other types of data, coded in number form but with less clear-cut numerical meaning, as follows. Binary data – e.g. yes/no – can be coded in 1/0 form, while purely categorical or nominal data – e.g. caste or ethnicity – may be coded 1, 2, 3, ... using numbers that are just arbitrary labels and cannot be added or subtracted. It is also common to have ordered categorical data, where items may be rated Excellent, Good, Poor, Useless, or responses to attitude statements may be Strongly agree, Agree, Neither agree nor disagree, Disagree, Strongly disagree.
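The 'average score' calculation above can be sketched in a few lines of Python, using the figures from the worked example (the function name is ours, not part of the guide):

```python
# Average score for ordered categorical data: categories are scored
# 5 (Excellent) down to 1 (Very Bad) and the indicator is the
# frequency-weighted mean of those scores.

def average_score(frequencies, scores):
    """Frequency-weighted mean score for ordered categories."""
    total = sum(frequencies)
    return sum(f * s for f, s in zip(frequencies, scores)) / total

scores = [5, 4, 3, 2, 1]           # Excellent .. Very Bad
sample_a = [29, 243, 117, 86, 25]
sample_b = [79, 193, 117, 36, 75]  # more extreme pattern, same mean

print(round(average_score(sample_a, scores), 2))  # 3.33
print(round(average_score(sample_b, scores), 2))  # 3.33
```

The identical 3.33 from two quite different frequency patterns illustrates the guide's caution that one indicator captures only one feature of the data.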
With ordered categorical data the number labels should form a rational sequence, because they have some numerical meaning, e.g. scores of 4, 3, 2, 1 for Excellent through to Useless. Such data supports limited quantitative analysis, and is often referred to by statisticians as 'qualitative' – this usage does not imply that the elicitation procedure must satisfy a purist's restrictive perception of what constitutes qualitative research methodology.

1.3 Data Structure

SIMPLE SURVEY DATA STRUCTURE: the data from a single-round survey, analysed with limited reference to other information, can often be thought of as a 'flat' rectangular file of numbers, whether the numbers are counts/measurements, or codes, or a mixture. In a structured survey with numbered questions, the flat file has a column for each question and a row for each respondent, a convention common to almost all standard statistical packages. If the data form a perfect rectangular grid with a number in every cell, analysis is made relatively easy, but there are many reasons why this will not always be the case and flat-file data will be incomplete or irregular. Most importantly:

• Surveys often involve 'skip' questions where sections are missed out if irrelevant, e.g. details of spouse's employment do not exist for the unmarried. These arise legitimately, but imply that different subsets of people respond to different questions. 'Contingent questions', where not everyone 'qualifies' to answer, often lead to inconsistent-seeming results for this reason. If the overall sample size is just adequate, the subset who 'qualify' for a particular set of contingent questions may be too small to analyse in the detail required.

• If some respondents fail to respond to some questions (item non-response) there will be holes in the rectangle. Non-informative non-response occurs if the data is missing for a reason unrelated to the true answers, e.g.
the interviewer turned over two pages instead of one! Informative non-response means that the absence of an answer itself tells you something, e.g. you are almost sure that the missing income value will be one of the highest in the community. A little potentially informative non-response may be ignorable if there is plenty of data. If data are sparse or if informative non-response is frequent, the analysis should take account of what can be inferred from knowing that there are informative missing values.

HIERARCHICAL DATA STRUCTURE: another complexity of survey data structure arises if the data are hierarchical. A common type of hierarchy is where a series of questions is repeated, say, for each child in the household, and combined with a household questionnaire, and maybe data collected at community level. For analysis, we can create a rectangular flat file at the 'child level' by repeating relevant household information in separate rows for each child. Similarly, we can summarise information for the children in a household to create a 'household level' analysis file. The number of children in the household is usually a desirable part of the summary; this "post-stratification" variable can be used to produce sub-group analyses at household level, separating out households with different numbers of child members. The way the sampling was done can have an effect on the interpretation or analysis of a hierarchical study. For example, if children were chosen at random, households with more children would have a greater chance of inclusion and a simple average of the household sizes would be biased upwards: it should be corrected for selection probabilities. Hierarchical structure becomes important, and harder to handle, if there are many levels where data are collected, e.g.
government guidance and allocations of resources, District Development Committee interpretations of the guidance, Village Task Force selections of safety-net beneficiaries, then households and individuals whose vulnerabilities and opportunities are affected by targeting decisions taken at higher levels in the hierarchy. In such cases, a relational database reflecting the hierarchical structure is a much more desirable way than a spreadsheet to define and retain the inter-relationships between levels, and to create many analysis files at different levels. Such issues are described in the guide The Role of a Database Package for Research Projects. Any one of the analysis files may be used as we discuss below, but any such study will be looking at one facet of the structure, and several analyses will have to be brought together for an overall interpretation. A more sophisticated approach using multi-level modelling, described in our guide on Modern Methods of Analysis, provides a way to look at several levels together.

1.4 Stages of Analysis

It is often worth distinguishing the three stages of exploratory analysis, deriving the main findings, and archiving.

EXPLORATORY DATA ANALYSIS (EDA) means looking at the data files, maybe even before all the data has been collected and entered, to get an idea of what is there. It can lead to additional data collection if this is seen to be needed, or to savings by stopping data collection when a conclusion is already clear or existing results prove worthless. It is not assumed that results from EDA are ready for release as study findings.

• EDA usually overlaps with data cleaning; it is the stage where anomalies become evident, e.g. individually plausible values may lead to a way-out point when combined with other variables on a scatterplot.
In an ideal situation, EDA would end with confidence that one has a clean dataset, so that a single version of the main data files can be finalised and 'locked', and all published analyses derived from a single consistent form of 'the data'. In practice, later stages of analysis often produce additional queries about data values.

• Such exploratory analysis will also show up limitations in contingent questions, e.g. we might find we don't have enough currently married women to analyse their income sources separately by district. EDA should include the final reconciliation of analysis ambitions with data limitations.

• This phase can allow the form of analysis to be tried out and agreed, developing analysis plans and program code in parallel with the final data collection, data entry and checking. Purposeful EDA allows the subsequent stage of deriving the main findings to be relatively quick, uncontroversial, and well organised.

DERIVING THE MAIN FINDINGS: the second stage will ideally begin with a clear-cut clean version of the data, so that analysis files are consistent with one another, and any inconsistencies, e.g. in numbers included, can be clearly explained. This is the stage we amplify upon later in this guide. It should generate the summary findings, relationships, models, interpretations and narratives, and recommendations that research users will need to begin utilising the results. Of course one needs to allow time for 'extra' but usually inevitable tasks such as:

• follow-up work to produce further, more detailed findings, e.g. elucidating unexpected results from the pre-planned work;

• a change made to the data each time a previously unsuspected recording or data entry error comes to light. Then it is important to correct the database and all analysis files already created that involve the value to be corrected.
This will mean repeating analyses that have already been done using, but not revealing, the erroneous value. If that analysis was done "by mouse clicking" and with no record of the steps, this can be very tedious. This stage of work is best undertaken using software that can keep a log: it records the analyses in the form of program instructions that can readily and accurately be re-run.

ARCHIVING means that data collectors keep, perhaps on CD, all the non-ephemeral material relating to their efforts to acquire information. Obvious components of such a record include: (i) data collection instruments, (ii) raw data, (iii) metadata recording the what, where, when, and other identifiers of all variables, (iv) variable names and their interpretations, and labels corresponding to values of categorical variables, (v) query programs used to extract analysis files from the database, (vi) log files defining the analyses, and (vii) reports. Often georeferencing information, digital photographs of sites and scans of documentary material are also useful. Participatory village maps, for example, can be kept for reference as digital photographs.

Surveys are often complicated endeavours where analysis covers only a fraction of what could be done. Reasons for developing a good management system, of which the archive is part, include:

• keeping the research process organised as it progresses;

• satisfying the sponsor's (e.g. DFID's) contractual requirement that data should be available if required by the funder or by legitimate successor researchers;

• permitting a detailed re-analysis to authenticate the findings if they are questioned;

• allowing a different breakdown of results, e.g. when administrative boundaries are redefined;

• linking several studies together, for instance in longer-term analyses carrying baseline data through to impact assessment.
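Before moving on, the data-cleaning side of EDA described above can be sketched very simply: counting item non-response per question in a flat file. The rows, question names and None-for-missing convention below are invented for illustration:

```python
# Minimal EDA sketch: item non-response counts per question in a
# flat rectangular file. Rows are respondents; None marks a hole
# in the rectangle (a missing answer).

rows = [
    {"q1": "yes", "q2": 34,   "q3": "married"},
    {"q1": "no",  "q2": None, "q3": "single"},
    {"q1": "yes", "q2": 29,   "q3": None},
    {"q1": None,  "q2": 41,   "q3": "married"},
]

questions = ["q1", "q2", "q3"]
missing = {q: sum(1 for r in rows if r[q] is None) for q in questions}
print(missing)  # {'q1': 1, 'q2': 1, 'q3': 1}
```

A real exploratory pass would go further (range checks, scatterplots for way-out points), but even a table like this flags where contingent questions or non-response will limit the analysis.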
1.5 Population Description as the Major Objective

In the next section we look at the objective of comparing results from sub-groups, but a more basic aim is to estimate a characteristic like the absolute number in a category of proposed beneficiaries, or a relative number such as the prevalence of HIV seropositives. The estimate may be needed to describe a whole population or sections of it. In the basic analyses discussed below, we need to bear in mind both the planned and the achieved sampling structure.

Example: suppose 'before' and 'after' surveys were each planned to have a 50:50 split of urban and rural respondents. Even if we achieved 50:50 splits, these would need some manipulation if we wanted to generalise the results to represent an actual population split of 70:30 urban:rural. Say we wanted to assess the change from 'before' to 'after' and the achieved samples were in fact split 55:45 and 45:55. We would have to correct the results carefully to get a meaningful estimate of change.

Samples are often stratified, i.e. structured to capture and represent particular segments of the target population. This may be much more sophisticated than the urban/rural split in the previous paragraph. Within-stratum summaries serve to describe and characterise each of these parts individually. If required by the objectives, overall summaries, which put together the strata, need to describe and characterise the whole population. It may be fine to treat the sample as a whole and produce simple, unweighted summaries if (i) we have set out to sample the strata proportionately, (ii) we have achieved this, and (iii) there are no problems due to hierarchical structure. Non-proportionality arises from various quite distinct sources, in particular:

• Case A: often sampling is disproportionate across strata by design, e.g.
the urban situation is more novel, complex, interesting or accessible, and gets greater coverage than the fraction of the population classed as rural.

• Case B: sometimes particular strata are bedevilled with high levels of non-response, so that the data are not proportionate to stratum sizes, even when the original plan was that they should be.

If we ignore non-proportionality, a simple-minded summary over all cases is not a proper representation of the population in these instances. The 'mechanistic' response to 'correct' both the above cases is (1) to produce within-stratum results (tables or whatever), (2) to scale the numbers in them to represent the true population fraction that each stratum comprises, and then (3) to combine the results.

There is often a problem with doing this in Case B, where non-response is an important part of the disproportionality: the reasons why data are missing from particular strata often correspond to real differences in the behaviour of respondents, especially those omitted or under-sampled, e.g. "We had very good response rates everywhere except in the north. There a high proportion of the population are nomadic, and we largely failed to find them." Just scaling up data from settled northerners does not take account of the different lifestyle and livelihood of the missing nomads. If you have largely missed a complete category, it is honest to report partial results, making it clear which categories are not covered and why.

One common 'sampling' problem arises when a substantial part of the target population is unwilling or unable to cooperate, so that the results in effect only represent a limited subset: those who volunteer or agree to take part. Of course the results are biased towards, e.g., those who command sufficient resources to afford the time, or those who habitually take it upon themselves to represent others.
We would be suspicious of any study which appeared to have relied on volunteers but did not look carefully at the limits this imposed on the generalisability of the conclusions. If you have a low response rate from one stratum but are still prepared to argue that the data are somewhat representative, the situation is at the very least uncomfortable. Where you have disproportionately few responses, the multipliers used in scaling up to 'represent' the stratum will be very high, so your limited data will be heavily weighted in the final overall summary. If there is any possible argument that these results are untypical, it is worthwhile to think carefully before giving them extra prominence in this way.

1.6 Comparison as the Major Objective

One sound reason for disproportionate sampling is that the main objective is a comparison of subgroups in the population. Even if one of two groups to be compared is very small, say 10% of the total number in the population, we now want roughly equally many observations from each subgroup, to describe both groups roughly equally accurately. There is no point in comparing a very accurate set of results from one group with a very vague, ill-defined description of the other; the comparison is at least as vague as the worse description. The same broad principle applies whether the comparison is a wholly quantitative one looking at the difference in means of a numerical measure between groups, or a much looser verbal comparison, e.g. an assessment of differences in pattern across a range of cross-tabulations.

If, for a subsidiary objective, we produce an overall summary giving 'the general picture' of which both groups are part, 50:50 sampling may need to be re-weighted 90:10 to produce a quantitative overall picture of the sampled population.
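The 'scale and combine' correction of section 1.5, and the kind of re-weighting just mentioned, can be sketched with invented figures. The 70:30 urban:rural split comes from the earlier example; the stratum means and the 50:50 achieved sample are our own illustration:

```python
# Sketch of re-weighting within-stratum results to represent the
# population: (1) within-stratum summaries, (2) scale by true
# population fractions, (3) combine.

# Step 1: within-stratum summaries (here, means of some indicator).
stratum_means = {"urban": 12.0, "rural": 8.0}

# Unweighted pooling of a 50:50 sample implicitly weights each
# stratum 0.5, which misrepresents a 70:30 population:
unweighted = 0.5 * stratum_means["urban"] + 0.5 * stratum_means["rural"]

# Steps 2 and 3: scale by population fractions, then combine.
population_fraction = {"urban": 0.70, "rural": 0.30}
weighted = sum(stratum_means[s] * population_fraction[s]
               for s in stratum_means)

print(unweighted)          # 10.0
print(round(weighted, 2))  # 10.8
```

The gap between 10.0 and 10.8 is exactly the distortion the guide warns about when disproportionate samples are summarised as if they were proportionate.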
The great difference between true experimental approaches and surveys is that experiments usually involve a relatively specific comparison as the major objective, while surveys much more often do not. Many surveys have multiple objectives, frequently ill-defined, often contradictory, and usually not formally prioritised. Along with the likelihood of some non-response, this tends to mean there is no sampling scheme which is best for all parts of the analysis, so various different weighting schemes may be needed in the analysis of a single survey.

1.7 When Weighting Matters

Several times in the above we have discussed how survey results may need to be scaled or weighted to allow for, or 'correct for', inequalities in how the sample represents the population. Sometimes this is of great importance, sometimes not. A fair evaluation of survey work ought to consider whether an appropriate trade-off has been achieved between the need for accuracy and the benefits of simplicity.

If the objective is formal estimation, e.g. of total population size from a census of a sample of communities, we are concerned to produce a strictly numerical answer, which we would like to be as accurate as circumstances allow. We should then correct as best we can for a distorted representation of the population in the sample. If groups being formally compared run across several population strata, we should try to ensure the comparison is fair by similar corrections, so that the groups are compared on the basis of consistent samples. In these cases we have to face up to problems such as unusually large weights attached to poorly-responding strata, and we may need to investigate the extent to which the final answer is dubious because of sensitivity to results from such subsamples.

Survey findings are often used in 'less numerical' ways, where it may not be so important to achieve accurate weighting, e.g.
"Whatever varieties they grow for sale, a large majority of farm households in Sri Lanka prefer traditional red rice varieties for home consumption because they prefer their flavour." If this is a clear-cut finding which accords with other information, if it is to be used for a simple decision process, or if it is an interim finding which will prompt further investigation, there is a lot to be said for keeping the analysis simple. Of course it saves time and money. It makes the process of interpreting the findings more accessible to those not very involved in the study. Also, weighting schemes depend on good information to create the weighting factors, and this may be hard to pin down.

Where we have worryingly large weights attaching to small amounts of doubtful information, it is natural to want to put limits on, or 'cap', the high weights, even at the expense of introducing some bias, i.e. to prevent any part of the data having too much impact on the result. The ultimate form of capping is to express doubts about all the data and to give equal weight to every observation. The rationale, not usually clearly stated even if analysts are aware they have done this, is to minimise the maximum weight given to any data item. This lends some support to the common practice of analysing survey data as if they were a simple random sample from an unstructured population. For 'less numerical' usages, this may not be particularly problematic as far as simple description is concerned. Of course it is wrong – and may be very misleading – to follow this up by calculating standard deviations and making claims of accuracy about the results which their derivation will not sustain!
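One way to implement the capping idea is to truncate large weights and renormalise. This is a sketch only: the cap of 3.0 and the raw weights are invented, and in practice the cap would be chosen after inspecting how sensitive the final answer is to it:

```python
# Sketch: capping stratum weights so a poorly-responding stratum
# with a huge scaling-up multiplier cannot dominate the estimate.

def cap_weights(weights, cap):
    """Truncate weights at `cap`, then renormalise to sum to 1."""
    capped = [min(w, cap) for w in weights]
    total = sum(capped)
    return [w / total for w in capped]

# Raw weights: the third stratum had few responses, so it gets a
# very large multiplier when scaled up to 'represent' its share.
raw = [1.0, 1.2, 9.0]
print([round(w, 3) for w in cap_weights(raw, cap=3.0)])
# [0.192, 0.231, 0.577]
```

As the text notes, giving every observation equal weight is the limiting case of this: it minimises the maximum weight, at the cost of ignoring the sampling structure.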
1.8 Coding

We recognise that purely qualitative researchers may prefer to use qualitative analysis methods and software, but where open-form and other verbal responses occur alongside numerical data it is often sensible to use a quantitative tool. From the statistical viewpoint, basic coding implies that we have material which can be put into nominal-level categories. Usually this is recorded in verbal or pictorial form, maybe on audio- or videotape, or written down by interviewers, or self-reported. We would advocate computerising the raw data, so it is archived.

The following refers to extracting codes, usually describing the routine comments rather than unique individual ones, which can be used for subsequent qualitative analysis. By scanning the set of responses, themes are developed which reflect the items noted in the material. These should reflect the objectives of the activity. It is not necessary to code rare, irrelevant or uninteresting material. In the code development phase, a large enough range of the responses is scanned to be reasonably sure that commonly occurring themes have been noted. If previous literature or theory suggests other themes, these are noted too. Ideally, each theme is broken down into unambiguous, mutually exclusive and exhaustive categories, so that any response segment can be assigned to just one, and given the corresponding code value. A 'codebook' is then prepared in which the categories are listed and codes assigned to them. Codes do not have to be consecutive numbers. It is common to think of codes as presence/absence markers, but there is no intrinsic reason why they should not be graded as ordered categorical variables if appropriate, e.g. on a scale such as fervent, positive, uninterested/no opinion, negative.

The entire body of material is then reviewed and codes are recorded. This may be in relevant places on questionnaires or transcripts.
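Once codes are recorded, their frequencies can be tabulated like any other nominal data. A minimal sketch, with a codebook and assigned codes invented for illustration:

```python
# Sketch: tabulating codes extracted from open-form responses.
# The codebook maps arbitrary numeric codes to theme categories.

from collections import Counter

codebook = {1: "price complaint", 2: "transport problem", 3: "praise"}

# Codes assigned to response segments during the review pass:
assigned = [1, 3, 1, 2, 1, 3, 3, 2, 1]

freq = Counter(assigned)
for code, count in sorted(freq.items()):
    print(f"{code} {codebook[code]}: {count}")
# 1 price complaint: 4
# 2 transport problem: 2
# 3 praise: 3
```

Because the code values are arbitrary labels, nothing numeric is done with them beyond counting and cross-tabulating; they serve mainly to sort and bundle response segments.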
Especially when looking at 'new' material not used in code development, extra items may arise and need to be added to the codebook. This may mean another pass through material already reviewed to add new codes, e.g. because a particular response is turning up more often than expected. From the point of view of analysis, no particular significance attaches to the particular numbers used as codes, but it is worth bearing in mind that statistical packages are usually excellent at sorting, selecting or flagging, for example, 'numbers between 10 and 19' and other arithmetically defined sets. If these all referred to a theme such as 'forest exploitation activities of male farmers' they could easily be bundled together. It is of course impossible to separate out items given the same code, so deciding the right level of coding detail is essential at an early stage in the process. When codes are analysed, they can be treated like other nominal or ordered categorical data. The frequencies of different types of response can be counted or cross-tabulated. Since they often derive from text passages and the like, they are particularly well adapted for use in sorting listings of verbal comments into relevant bundles for detailed non-quantitative analysis.

1.9 Ranking and Scoring

A common means of eliciting data is to ask individuals or groups to rank a set of options. The researchers' decision to use ranks in the first place means that results are less informative than scoring, especially if respondents are forced to choose between some nearly equal alternatives and some very different ones. A British 8-year-old offered baked beans on toast, or fish and chips, or chicken burger, or sushi with hot radish might rank these 1, 2, 3, 4 but score them 9, 8.5, 8, and 0.5 on a zero-to-ten scale! Ranking is an easy task where the set of ranks is not required to contain more than about four or five choices.
It is common to ask respondents to rank, say, their best four from a list of ten, with 1 = best, etc. Accepting a degree of arbitrariness, we would usually replace ranks 1, 2, 3, 4, and a string of blanks by pseudo-scores 4, 3, 2, 1, and a string of zeros, which gives a complete array of numbers we can summarise, rather than a sparse array where we don't know how to handle the blanks. A project output paper available on the SSC website explores this in more detail.†

† Converting Ranks to Scores for an ad hoc Assessment of Methods of Communication Available to Farmers, by Savitri Abeyasekera, Julie Lawson-Macdowell and Ian Wilson. This is an output from DFID-funded work under the Farming Systems Integrated Pest Management Project, Malawi, and DFID NRSP project R7033, Methodological Framework for Combining Qualitative and Quantitative Survey Methods.

Where the instructions were to rank as many as you wish from a fixed, long list, we would tend to replace the variable-length lists of ranks with scores. One might develop these as if respondents each had a fixed amount, e.g. 100 beans, to allocate as they saw fit. If four items were chosen these might be scored 40, 30, 20, 10, or with five chosen, 30, 25, 20, 15, 10, with zeros again for unranked items. These scores are arbitrary: 40, 30, 20, 10 could instead be replaced by any number of other choices, e.g. 34, 28, 22, 16 or 40, 25, 20, 15. This reflects the rather uninformative nature of rankings, and the difficulty of post hoc construction of information that was not elicited effectively in the first place.

Having reflected, and having replaced ranks by scores, we would usually treat these like any other numerical data, with one change of emphasis. Where results might be sensitive to the actual values attributed to the ranks, we would stress sensitivity analysis more than with other types of numerical data, e.g.
re-running analyses with the (4, 3, 2, 1, 0, 0, ...) pseudo-scores replaced by (6, 4, 2, 1, 0, 0, ...). If the interpretations of the results are insensitive to such changes, the choice of scores is not critical.

2. Doing the Analysis

2.1 Approaches

Data listings are readily produced by database and many statistical packages. They are generally on a case-by-case basis, so are particularly suitable in EDA as a means of tracking down odd values, or patterns, to be explored. For example, if material is in verbal form, such a listing can give exactly what every respondent was recorded as saying. Sorting these records, according to who collected them, say, may show up great differences in field workers' aptitude, awareness or approach. Data listings can be an adjunct to tabulation: in Excel, for example, the Drill Down feature allows one to look at the data from individuals who appear together in a single cell.

There is a place for the use of graphical methods, especially for presentational purposes, where simple messages need to be given in an easily understood, attention-grabbing form. Packages offer many ways of making results bright and colourful, without necessarily conveying more information or a more accurate understanding. A few basic points are covered in the guide Informative Presentation of Tables, Graphs and Statistics.

Where the data are at all voluminous, it is a good idea selectively to tabulate most 'qualitative' but numerically coded data, i.e. the binary, nominal or ordered categorical types mentioned above. Tables can be very effective in presentations if stripped down to focus on key findings, crisply presented.
In longer reports, a carefully crafted, well-documented set of cross-tabulations is usually an essential component of summary and comparative analysis, because of the limitations of approaches which avoid tabulation:

• Large numbers of charts and pictures can become expensive, but also repetitive, confusing and difficult to use as a source of detailed information.

• With substantial data, a purely narrative full description will be so long-winded and repetitive that readers will have great difficulty getting a clear picture of what the results have to say. With a briefer verbal description, it is difficult not to be overly selective. Then the reader has to question why a great deal of effort went into collecting data that merits so little description, and should question the impartiality of the reporting.

• At the other extreme, some analysts will skip or skimp the tabulation stage and move rapidly to complex statistical modelling. Their findings are just as much to be distrusted! The models may be based on preconceptions rather than evidence, may fit badly, and may conceal important variations in the underlying patterns.

• In terms of producing final outputs, data listings seldom get more than a place in an appendix. They are usually too extensive to be assimilated by the busy reader, and are unsuitable for presentation purposes.

2.2 One-Way Tables

The most straightforward form of analysis, and one that often supplies much of the basic information needed, is to tabulate results, question by question, as 'one-way tables'. Sometimes this can be done using an original questionnaire, writing on it the frequency or number of people who 'ticked each box'. Of course this does not identify which respondents produced particular combinations of responses, but it is often a first step where a quick and/or simple summary is required.
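A one-way table is simply a frequency count per response category, optionally expressed as percentages. A sketch in Python, with made-up answer categories:

```python
from collections import Counter

# Illustrative single-question responses, one per respondent.
answers = ["yes", "no", "yes", "don't know", "yes", "no"]

counts = Counter(answers)                       # the one-way table
n = len(answers)
percents = {cat: 100 * c / n for cat, c in counts.items()}

for cat in counts:
    print(f"{cat:12s} {counts[cat]:3d}  {percents[cat]:5.1f}%")
```

Choosing what the 100% base should be (all respondents, or only those answering) is the kind of decision the Summary section returns to.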
2.3 Cross-Tabulation: Two-Way and Higher-Way Tables

At the most basic level, cross-tabulations break the sample down into two-way tables, showing the response categories of one question as row headings and those of another question as column headings. If, for example, each question has five possible answers, the table breaks the total sample down into 25 subgroups. If the answers are further subdivided, e.g. by sex of respondent, there will be one three-way table, 5×5×2, probably shown on the page as separate two-way tables for males and for females. The total sample size is now split over 50 categories, and the degree to which the data can sensibly be disaggregated will be constrained by the total number of respondents represented.

There are usually many possible two-way tables, and even more three-way tables. The main analysis needs to involve careful thought as to which ones are necessary, and how much detail is needed. Even after deciding that we want some cross-tabulation with categories of 'question J' as rows and 'question K' as columns, there are several other decisions to be made:

• The number in the cells of the table may be just the frequency, i.e. the number of respondents who gave that combination of answers. This may be rephrased as a proportion or a percentage of the total. Alternatively, percentages can be scaled so they total 100% across each row or down each column, so as to make particular comparisons clearer.

• The contents of a cell can equally well be a statistic derived from one or more other questions, e.g. the proportion of the respondents falling in that cell who were economically-active women. Often such a table has an associated frequency table to show how many responses went into each cell. If the cell frequencies represent small subsamples, the results can vary wildly just by chance, and should not be over-interpreted.
• Where interest focuses mainly on one 'area' of a two-way table, it may be possible to combine rows and columns that we don't need to separate out, e.g. ruling-party supporters vs. supporters of all other parties. This simplifies interpretation and presentation, as well as reducing the impact of chance variations where there are very small cell counts.

• Frequently we don't just want the cross-tabulation for 'all respondents'. We may want the same table separately for each region of the country (described as segmentation), or for a particular group on whom we wish to focus, such as 'AIDS orphans' (described as selection).

• Because of varying levels of success in covering a population, the response set may end up being very uneven in its coverage of the target population. Then simply combining over the respondents can misrepresent the intended population. It may be necessary to show the patterns in tables, sub-group by sub-group, to convey the whole picture. An alternative, discussed in Part 1, is to weight up the results from the sub-groups to give a fair representation of the whole.

2.4 Tabulation and the Assessment of Accuracy

Tabulation is usually purely descriptive, with limited effort made to assess the 'accuracy' of the numbers tabulated. We caution that confidence intervals are sometimes very wide when survey samples have been disaggregated into various subgroups: if crucial decisions hang on a few numbers, it may well be worth putting extra effort into assessing, and discussing, how reliable these are. If the uses intended for various tables are not very numerical or not very crucial, attempting to put formal measures of precision on the results is likely to cause unjustifiable delay and frustration.
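The cross-tabulation choices described above (raw frequencies versus row percentages) can be sketched in a few lines of Python, using invented adoption-by-gender records in the spirit of the adoption study discussed below:

```python
from collections import Counter

# Illustrative respondent records: (gender of household head, adopter?).
records = [("male", "yes"), ("male", "no"), ("female", "yes"),
           ("male", "yes"), ("female", "no"), ("female", "no"),
           ("male", "yes"), ("female", "yes")]

# Two-way frequency table keyed by (row category, column category).
table = Counter(records)

# Row percentages: each row rescaled to total 100%.
cols = ["yes", "no"]
row_pct = {}
for r in ("male", "female"):
    total = sum(table[(r, c)] for c in cols)
    row_pct[r] = {c: 100 * table[(r, c)] / total for c in cols}
```

Row percentages make the male/female comparison direct even though the two groups differ in size, which is exactly why one scales across rows.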
Usually, the most important considerations in assessing the 'quality', 'value' or 'accuracy' of results are not those relating to statistical sampling variation, but those which appraise the following factors and their effects:

• evenness of coverage of the target (intended) population
• suitability of the sampling scheme, reviewed in the light of field experience and findings
• sophistication and uniformity of response elicitation, and accuracy of field recording
• efficacy of measures to prevent, compensate for, and understand non-response
• quality of data entry, cleaning and metadata recording
• selection of appropriate subgroups in analysis

If any of the above factors raises important concerns, it is necessary to think hard about the interpretation of 'statistical' measures of precision such as standard errors. A factor that has uneven effects will introduce biases, whose size and detectability ought to be dispassionately appraised and reported with the conclusions.

Where a survey is not badly affected by any of the above, inferential statistical procedures can be used to guide generalisations from the sample to the population. Inference addresses issues such as whether apparent patterns in the results have come about by chance or can reasonably be taken to reflect real features of the population. Basic ideas are reviewed in Understanding Significance: the Basic Ideas of Inferential Statistics; more advanced approaches are described in Modern Methods of Analysis.

Inference is particularly valuable, for instance, in determining the appropriate form of presentation of survey results. Consider an adoption study which examined socio-economic factors affecting adoption of a new technology. Households are classified as male- or female-headed, and the level of education and access to credit of the head is recorded.
At its most complicated, the total number of households in the sample would be classified by adoption, gender of household head, level of education and access to credit, resulting in a four-way table. Now suppose that chi-square tests give no evidence of any relationship between adoption and education or access to credit. In this case the simple two-way table of adoption by gender of household head would probably be appropriate. If, on the other hand, access to credit were the main criterion affecting the chance of adoption, and if this association varied according to the gender of the household head, the simple two-way table of adoption by gender would no longer be appropriate and a three-way table would be necessary. Inferential procedures thus help in deciding whether presentation of results should be in terms of one-way, two-way or higher-dimensional tables.

Chi-square tests are limited to examining association in two-way tables, so they have to be used in a piecemeal fashion for more complicated situations like that above. A more general way to examine tabulated data is to use log-linear models, described in Modern Methods of Analysis.

2.5 Multiple Response Data

Surveys often contain questions where respondents can choose a number of relevant responses, e.g.

If you are not using an improved fallow on any of your land, please tick from the list below any reasons that apply to you:
(i) Don't have any land of my own
(ii) Do not have any suitable crop for an improved fallow
(iii) Cannot afford to buy the seed or plants
(iv) Do not have the time/labour

There are three ways of computerising these data. The simplest is to provide as many columns as there are alternatives. This is called 'multiple dichotomy' data, because there is a yes/no (or 1/0) response in each case, indicating that the respondent ticked or did not tick each item in the list.
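The multiple-dichotomy layout can be sketched as follows (Python, with invented tick patterns for the improved-fallow question above); analysis then builds up counts of mentions of each reason:

```python
# One row per respondent; one 1/0 column per reason (i)-(iv).
columns = ["no_land", "no_suitable_crop", "cannot_afford", "no_labour"]
data = [
    [0, 1, 1, 0],   # this respondent ticked reasons (ii) and (iii)
    [0, 0, 1, 1],
    [1, 0, 1, 0],
]

# Mentions per reason: sum each 1/0 column over all respondents.
mentions = {col: sum(row[i] for row in data)
            for i, col in enumerate(columns)}
```

Note the column totals count mentions, not respondents, so they can exceed the sample size when respondents tick several reasons.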
The second way is to find the maximum number of ticks given by anyone, and to have that number of columns, entering the codes for ticked responses one per column. This is known as 'multiple response' data. It is a useful method if the question asks respondents to put the alternatives in order of importance, because the first column can give the most important reason, and so on.

A third method is to have a separate table for the data, with just two columns. The first identifies the person and the second gives one of their responses; there are as many rows of data as there are reasons given, and there is no entry for a person who gives no reasons. In this third method the length of the columns thus equals the number of responses rather than the number of respondents. If there are follow-up questions about each reason, this third method is the obvious way to organise the data, and readers may identify the general concept as being that of data at another level, i.e. the reason level. More information on organising this type of data is provided in the guide The Role of a Database Package for Research Projects.

Essentially such data are analysed by building up counts of the numbers of mentions of each response. Apart from SPSS, few standard statistics packages have any special facilities for processing multiple response and multiple dichotomy data. Almost any package can be used with a little ingenuity, but working from first principles is a time-consuming business. On our web site we describe how Excel may be used.

2.6 Profiles

Usually the questions as put to respondents in a survey need to represent 'atomic' facets of an issue, expressed in concrete terms and simplified as much as possible, so that there is no ambiguity and they will be consistently interpreted by respondents. Basic cross-tabulations report responses to such individual questions and are therefore narrowly issue-specific.
A rather different approach is needed if the researchers' ambitions include taking an overall view of individuals', or small groups', responses as to their livelihood, say. Cross-tabulations of individual questions are not a sensible approach to a 'people-centred' or 'holistic' summary of results. Usually, even when tackling issues a great deal less complicated than livelihoods, the more important research outputs are 'complex molecules' which bring together responses from numerous questions to produce higher-level conclusions described in more abstract terms. For example, several questions may each enquire whether the respondent follows a particular recommendation, whereas the output may be concerned with overall 'compliance', the abstract concept behind the questioning.

A profile is a description synthesising responses to a range of questions, perhaps in terms of a set of abstract nouns like compliance. It may describe an individual, a cluster of respondents, or an entire population.

One approach to discussing a larger concept is to produce numerous cross-tabulations reflecting actual questions and to synthesise their information content verbally. This tends to lose sight of the 'profiling' element: if particular groups of respondents tend to reply to a range of questions in a similar way, this overall grouping will often come out only weakly. If you try to follow the group of individuals who appear together in one corner cell of the first cross-tabulation, you cannot easily track whether they stay together in a cross-tabulation of other variables. Another type of approach may be more constructive: to derive synthetic variables, or indicators, which bring together inputs from a range of questions, say into a measure of 'compliance', and to analyse those, by cross-tabulation or other methods. See section 2.8 below.
If we have an analysis dataset with a row for each respondent and a column for each question, the derivation of a synthetic variable just corresponds to adding an extra column to the dataset. This is then used in analysis just like any other column. A profile for an individual will often comprise a set of values of a suite of indicators.

2.7 Looking for Respondent Groups

Profiling is often concerned with acknowledging that respondents are not just a homogeneous mass, and with distinguishing between different groups of respondents. Cluster analysis is a data-driven statistical technique that can draw out, and thence characterise, groups of respondents whose response profiles are similar to one another. The response profiles may serve to differentiate one group from another if they are somewhat distinct. This might be needed if the aim were, say, to define target groups for distinct safety-net interventions. The analysis could help clarify the distinguishing features of the groups, their sizes, their distinctness or otherwise, and so on.

Unfortunately there is no guarantee that groupings derived from data alone will make good sense in terms of profiling respondents. Cluster analysis does not characterise the groupings; you have to study each cluster to see what its members have in common. Nor does it prove that they constitute suitable target groups for meaningful development interventions. Cluster analysis is thus an exploratory technique, which may help to screen a large mass of data and prompt more thoughtful analysis by raising questions such as:

• Is there any sign that the respondents do fall into clear-cut sub-groups?
• How many groups do there seem to be, and how important are their separations?
• If there are distinct groups, what sorts of responses do 'typical' group members give?

2.8 Indicators

Indicators are summary measures. Magazines provide many examples, e.g.
an assessment of personal computers may give a score in numerical form, like 7 out of 10, or as a pictorial quality rating, e.g. on a five-point scale from 'Very Good' down to 'Very Poor'. Such a review may give scores (indicators) for each of several characteristics, where the maximum score for each characteristic reflects its importance, e.g. for one model: build quality (7/10), screen quality (8/20), processor speed (18/30), hard disk capacity (17/20) and software provided (10/20). The maximum total score over all characteristics in the summary indicator is in this case (10 + 20 + 30 + 20 + 20) = 100, so the total score for each computer is a percentage, e.g. above, (7 + 8 + 18 + 17 + 10) = 60%.

The popularity of such summaries demonstrates that readers find them accessible, convenient and to a degree useful, either because there is little time to absorb detailed information, or because the indicators provide a baseline from which to weigh up the finer points. Many disciplines, of course, are awash with suggested indicators, from simple averages to housing quality measures, social capital assessment tools, or quality-adjusted years of life. New indicators should be developed only if others do not exist or are unsatisfactory: well-understood, well-validated indicators relevant to the situation in hand are quicker and more cost-effective to use. Defining an economical set of meaningful indicators before data collection ought ideally to imply that at analysis their calculation follows a pre-defined path, and the values are readily interpreted and used.

Is it legitimate to create new indicators after data collection and during analysis? This is to be expected in genuine 'research', where fieldwork approaches allow new ideas to come forward, e.g. if new lines of questioning have been used, or if survey findings take the researchers into areas not well covered by existing indicators.
A study relatively early on in a research cycle, e.g. a baseline survey, can fall into this category. Usually this means the available time and data are not quite what one would desire in order to ensure well-understood, well-validated indicators emerge in final form from the analysis. Since the problem does arise, how does the analyst best face up to it?

It is important not to create unnecessary confusion. An indicator should synthesise information and serve to represent a reasonable measure of some issue or concept. The concept should have an agreed name, so that users can discuss it meaningfully, e.g. 'compliance' or 'vulnerability to flooding'. A specific meaning is attached to the name, so it is important to realise that the jargon thus created needs careful explanation to 'outsiders'. Consultation or brainstorming leading to a consensus is often desirable when new indicators are created. Indicators created 'on the fly' by analysts as the work is rushed to a conclusion are prone to suffer from their hasty introduction, and then to lead to misinterpretation, often over-interpretation, by enthusiastic would-be users. It is all too easy for a little information about a small part of the issue to be taken as 'the' answer to 'the problem'!

As far as possible, creating indicators during analysis should follow the same lines as when the process is done a priori, i.e. (i) deciding on the facets which need to be included to give a good feel for the concept; (ii) tying these to the questions or observations needed to measure these facets; (iii) ensuring balanced coverage, so that the right input comes from each facet; and (iv) working out how to combine the information gathered into a synthesis which everyone agrees is sensible. These are all parts of ensuring face (or content) validity, as in the next section. Usually this should be done in a simple enough way that the user community are all comfortable with the definitions of what is measured.
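Step (iv), combining facet measurements into one synthetic value, is just the magazine-review arithmetic from section 2.8: a weighted total expressed as a percentage of its maximum. A Python sketch using the illustrative characteristic names and weights from the computer-review example:

```python
# Maximum score per characteristic reflects its importance (totals 100).
max_scores = {"build": 10, "screen": 20, "processor": 30,
              "disk": 20, "software": 20}

# Scores for one model, as in the text's example.
model = {"build": 7, "screen": 8, "processor": 18,
         "disk": 17, "software": 10}

def overall(scores, maxima):
    """Synthetic indicator: total score as a percentage of the maximum."""
    return 100 * sum(scores.values()) / sum(maxima.values())

print(overall(model, max_scores))  # 60.0
```

In an analysis dataset this function would be applied row by row, adding the indicator as an extra column as described in section 2.6.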
There is some advantage in creating indicators when datasets are already available: you can look at how well the indicators serve to describe the relevant issues and groups, and select the most effective ones. Some analysts rely too much on data-reduction techniques, such as factor analysis or cluster analysis, as a substitute for thinking hard about the issues. We argue that an intellectual process of indicator development should build on, or dispense with, more data-driven approaches. Principal component analysis is data-driven, but readily provides weighted averages; these should be seen as no more than a foundation for useful forms of indicator.

2.9 Validity

The basic question behind the concept of validity is whether an indicator measures what we say or believe it does. This may be quite a basic question if the subject matter of the indicator is visible and readily understood, but the practicalities can be more complex in mundane but sensitive areas such as the measurement of household income. Where we consider issues such as the value attached to indigenous knowledge, the question can become very complex. Numerous variations on the validity theme are discussed extensively in the social science research methodology literature.

Validity takes us into issues of what different people understand words to mean, during the development of the indicator and its use. It is good practice to try a variety of approaches with a wide range of relevant people, and carefully compare the interpretations, behaviours and attitudes revealed, to make sure there are no major discrepancies of understanding. The processes of comparison and reflection, then the redevelopment of definitions, approaches and research instruments, may all be encompassed in what is sometimes called triangulation: using the results of different approaches to synthesise robust, clear and easily interpreted results.
Survey instrument or indicator validity is a discussion topic, not a statistical measure, but two themes with which statistical survey analysts regularly need to engage are the following.

Content (or face) validity looks at the extent to which the questions in a survey, and the weights the results are given in a set of indicators, serve to cover in a balanced way the important facets of the notion the indicator is supposed to represent.

Criterion validity looks at how the observed values of the indicator tie up with something readily measurable that they should relate to. Its aim is to validate a new indicator by reference to something better established, e.g. to validate a prediction retrospectively against the actual outcome. If we measure an indicator of 'intention to participate' or 'likelihood of participating' beforehand, then for the same individuals later ascertain whether they did participate, we can check the accuracy of the stated intentions, and hence the degree of reliance that can in future be placed on the indicator. As a statistical exercise, criterion validation has to be done through sensible analyses of good-quality data. If the reason for developing the indicator is that there is no satisfactory way of establishing a criterion measure, criterion validity is not a sensible approach.

2.10 Summary

In this guide we have outlined general features of survey analysis that have wide application to data collected from many sources and with a range of different objectives. Many readers of this guide should be able to use its suggestions unaided. We have pointed out ideas and methods which do not in any way depend on the analyst knowing modern or complicated statistical methods, or having access to specialised or expensive computing resources. The emphasis has been on the importance of preparing appropriate tables to summarise the information.
This is not to belittle the importance of graphical display, but that comes at the presentation stage, and the tables provide the information for the graphs. Often key tables will be in the text, with larger, less important tables in appendices. Often a pilot study will have indicated the most important tables to be produced initially. What then takes time is deciding on exactly the right tables. There are three main issues. The first is to decide what is to be tabulated; we have considered tables involving either individual questions or indicators. The second is the complexity of table that is required: one-way, two-way or higher. The final issue is the numbers that will be presented. Often they will be percentages, but deciding on the most informative base, i.e. what constitutes 100%, is also important.

2.11 Next Steps

We have mentioned the role of more sophisticated methods. Cluster analysis may be useful to indicate groups of respondents, and principal components to identify data-driven indicators. Examples of both methods are in our Modern Methods of Analysis guide, where we emphasise, as here, that their role is usually exploratory. When used, they should normally come at the start of the analysis, and are primarily to assist the researcher rather than to serve as presentations for the reader.

Inferential methods are also described in the Modern Methods guide. For surveys, they cannot be as simple as in most courses on statistics, because the data are usually at multiple levels and with unequal numbers at each subdivision of the data. The most important methods are log-linear and logistic models and the newer multilevel modelling. These methods can support the analysts' decisions on the complexity of tables to produce. Both the more complex methods and those in this guide are equally applicable to cross-sectional surveys, such as baseline studies, and to longitudinal surveys; the latter are often needed for impact assessment.
Details of the design and analysis of baseline surveys, and of those specifically for impact assessment, must await another guide!

The Statistical Services Centre is attached to the Department of Applied Statistics at The University of Reading, UK, and undertakes training and consultancy work on a non-profit-making basis for clients outside the University. These statistical guides were originally written as part of a contract with DFID to give guidance to research and support staff working on DFID Natural Resources projects. The available titles are listed below.

• Statistical Guidelines for Natural Resources Projects
• On-Farm Trials – Some Biometric Guidelines
• Data Management Guidelines for Experimental Projects
• Guidelines for Planning Effective Surveys
• Project Data Archiving – Lessons from a Case Study
• Informative Presentation of Tables, Graphs and Statistics
• Concepts Underlying the Design of Experiments
• One Animal per Farm?
• Disciplined Use of Spreadsheets for Data Entry
• The Role of a Database Package for Research Projects
• Excel for Statistics: Tips and Warnings
• The Statistical Background to ANOVA
• Moving on from MSTAT (to Genstat)
• Some Basic Ideas of Sampling
• Modern Methods of Analysis
• Confidence and Significance: Key Concepts of Inferential Statistics
• Modern Approaches to the Analysis of Experimental Data
• Approaches to the Analysis of Survey Data
• Mixed Models and Multilevel Data Structures in Agriculture

The guides are available in both printed and computer-readable form. For copies or for further information about the SSC, please use the contact details given below.

Statistical Services Centre, The University of Reading
P.O. Box 240, Reading, RG6 6FN, United Kingdom
tel: SSC Administration +44 118 931 8025
fax: +44 118 975 3169
e-mail: [emailprotected]
web: http://www.reading.ac.uk/ssc/

Sunday, July 21, 2019

Review of How We Do Harm by Otis Brawley and Paul Goldberg

In America, there is an underlying assumption that medical professionals place their patients' care above all else. We believe that our physicians follow important concepts such as the principle of beneficence and the principle of nonmaleficence. Yet in the book How We Do Harm, Brawley introduces his readers to the back rooms and the unheard conversations of those who are part of the medical profession. This insider's perspective is Brawley's (and his co-author's) real genius: it is the way he makes what could otherwise be an esoteric and dense topic into an enjoyable book.

Brawley jokes about how important money is to the American medical system, calling it "a wallet biopsy", a joke that matters because if you can afford the best care, you get the best care; if you cannot, you get a bare minimum of care. Yet Brawley does not make this a purely socialist critique. In many ways, he acknowledges that wealth can cause its own problems in America. Patients with sufficient wealth often demand treatment that borders on the irrational, and for those with means, finding a doctor willing to satisfy their concerns is not hard. The reason for this treatment-seeking behavior is that the American healthcare system is not designed to prevent elective treatments, which are often not only unnecessary but also quite expensive. The doctors are not dupes, which is why there is always a doctor willing to give in to the demands of patients, provided they can pay the costs. At the same time, Brawley is not seeking to place physicians on a pedestal as people who can do no harm.

Brawley introduces us to two different but equally heartbreaking cases, Helen and Lilla. Helen's story is used as an exemplar of the "more is better" philosophy Brawley sees as endemic in the medical community. After her mastectomy, she was "offered" post-surgical chemotherapy by her oncologist.
Her oncologist explained that a stronger dose is better than a weaker dose. "More is better," notes Brawley, had been the fallback strategy for oncology since the 1950s, the general opinion being that more chemotherapy translated to greater effectiveness. Yet for Brawley, the real tragedy of Helen's story is that she was recommended an autologous bone marrow transplantation because her insurance company would pay for more of the costs of the transplant and chemotherapy (page 32). As a result of this treatment, Helen experienced far more severe complications than expected; these complications kept her in the hospital for five months, after which she was transferred to a rehabilitation hospital. Taken altogether, this "recommended" procedure cost her a year of her life. Then, three years later, Helen discovered that this painful experience had no demonstrable effect in improving survival. When Helen asked her oncologist why, the oncologist responded, "this was what everybody was doing at the time." Brawley points out that Helen is not alone: between 1989 and 2001, at least 23,000 women may have undergone the same procedure. In the second case, Lilla Romeo was first diagnosed with breast cancer (Stage 1) in 1995. She had surgery followed by radiation. Five years after the initial diagnosis, a routine scan (how many scans did she have in those five years?) showed the disease had returned. The doctors told her that "the prognosis turned grim… the cancer was incurable, and the goal of treatment was to delay the inevitable." So Lilla was persuaded, and started non-stop chemotherapy (page 71).
In 2003, Lilla remembered, an oncology nurse at the New York University Medical Center asked if she was feeling tired, and with a hemoglobin reading just under ten, she was "suggested and offered" cancer-fatigue drugs (at that time, the popular one was Procrit, by Johnson & Johnson). In 2004, she was told that the hospital had switched from Procrit to another drug, Aranesp (manufactured by Amgen), which caused a burning sensation under her skin at the injection site (page 79). In 2010, when she requested copies of her medical records from the doctors who had treated her, Lilla learned that she had received a lot more Procrit and Aranesp than she knew. Her first dose was administered on 1/11/2001 and then almost weekly thereafter. Altogether, she was given 221 1/2 doses. When Lilla was started on the hemoglobin-building drugs (also known as ESAs), little did she know that the drug companies had manufactured a medical condition: cancer fatigue. She also had no idea that "her infusion was the front-row seat for observing a spectacular, indeed, cataclysmic, failure in medicine." Dr. Brawley strongly believed that these drugs shortened Lilla's life. She died on June 9, 2010, at the age of 63. (Just before her death, Lilla was offered and given Avastin!) Finally, there is the case of Ralph De Angelo, who was prescribed aggressive prostate cancer treatment after a positive prostate-specific antigen (PSA) screening. PSA screening has led to financial gain for many medical businesses, but Mr. De Angelo ended up incontinent, sexually impotent, and with a rectal fistula into the bladder (Brawley & Goldberg, 2011, pp. 215-230). It was eye-opening to find out that physicians may sometimes prescribe experimental drugs to patients with little or no detailed information or informed consent about the potential side effects, and despite the lack of thorough trial studies on a given drug.
It was also sad to be reminded that the American healthcare system makes poor use of its resources. This is particularly true for the "working poor," who lack proper access to healthcare. By the time they qualify for Medicaid, they are often so sick that a negative outcome is likely. Finally, it was disconcerting to find out that physicians may prescribe expensive and potentially harmful screenings to patients solely for financial gain.

Part III: Corroboration. Studies have shown that despite the advances made in the war against cancer, there are many disparities in the delivery of care, based on factors such as race, income, and geographic area. Many patients report problems such as lack of insurance, high co-payments for prescription drugs, and transportation issues. In addition, African-Americans are more likely than Caucasians to be diagnosed with advanced-stage cancer (Schwaderer & Itano, 2007). Furthermore, despite their widespread use, some screening tests such as the PSA have shown limitations. About 75 percent of positive PSA tests are false positives, which may be associated with psychological harm more than a year after the test. Diagnostic testing and aggressive treatment of a non-life-threatening prostate cancer may result in adverse consequences such as erectile dysfunction, incontinence, and even the patient's death (Slatkoff, Gamboa, Zolotor, Mounsey & Jones, 2011). Shortly after he turned 70, Mr. Ralph De Angelo, a retired department-store manager in the heart of black America, saw a newspaper advertisement claiming that prostate cancer screening saves lives. The advertisement also mentioned that 95% of men diagnosed with localized disease are cured. The following is the tragic story of Mr. De Angelo after his prostate screening, and of how unnecessary harm can be done to those who go for screening of the prostate, breast, etc.
This is a classic example of the collateral damage (due to overtreatment) described in How We Do Harm by Dr. Otis Webb Brawley, MD, a medical oncologist and Executive Vice President of the American Cancer Society. In 2005, Mr. De Angelo, after his prostate screening, was diagnosed with prostate cancer, with a PSA reading of 4.3 ng/ml (just 0.3 above what is considered normal). He was urged to have a biopsy. Two of the 12 biopsy cores showed cancer. The Gleason score was 3 plus 3, which is associated with the most commonly diagnosed and most commonly treated form of prostate cancer. There is no way to know whether a patient with this diagnosis will develop metastatic disease or live a normal life unaffected by the disease. Despite this uncertainty, Mr. De Angelo was persuaded by his urologist to undergo a radical robotic prostatectomy, which the urologist thought was the gold standard of care. After the operation, he was told he had a small tumor, 5 mm x 5 mm x 6 mm, in a moderate-size (50 cc) prostate. The tumor was entirely in the right side of the prostate, and it did not appear highly aggressive under the microscope. Good news? Unfortunately, Ralph then realized that he was incontinent. Three months later, the incontinence was still there, and he had to wear adult diapers continuously. Besides being incontinent, Ralph was also impotent and was given Viagra. With a lingering PSA of 0.95 ng/ml (even though his prostate had been removed), a radiation oncologist suggested "salvage radiation therapy" to the pelvis. Four weeks into the radiation, Ralph saw blood in his stool. This was due to radiation proctitis, i.e., radiation damage to the rectum. He continued having incontinence, but also developed a burning sensation upon urination. Mr. De Angelo stopped his radiation with one more week to go. For the radiation proctitis, he went to a gastroenterologist, who prescribed steroids in a rectal foam that he had to apply four times a day.
About three weeks after stopping the radiation, Mr. De Angelo realized that whenever he passed gas, some of it came out of his urethra. He also sensed liquid from his rectum soiling his diapers. He was confirmed to have a rectal fistula into the bladder… there was a hole between Ralph's rectum and his bladder. After several urinary infections, and when the fistula didn't seem to be healing, he had to see a GI surgeon. The surgeon performed a colostomy to keep stool off the inflamed rectum and the hole into the bladder. The next step was a ureterostomy, a surgery that brings urine to the abdominal wall and collects it in a bag, just like his bowel movements. In December 2009, Mr. De Angelo's daughter called Dr. Brawley to inform him that her father had a urinary tract infection, which later progressed to sepsis, a widespread bacterial infection of the blood. On the fifth day of hospitalization, Ralph passed away (only four years after diagnosis). Interestingly, "the death certificate reads that death was caused by a urinary tract infection. It doesn't mention that the urinary tract infection was due to his prostate-cancer treatment and a radiation-induced fistula…. Mr. De Angelo's death will not be considered a death due to prostate cancer," even though his death was caused by the cure. In conclusion, Dr. Brawley strongly believed that "the majority of these men, who are treated with radiation or hormones or both, got no benefit from treatment. They get only the side effects," including those that Mr. De Angelo had: proctitis, i.e., inflammation and bleeding from the rectum; cystitis, a burning sensation on urination and a feeling of urgency; and a rectal fistula, in which bowel and bladder are connected. The side effects of hormones can include diabetes, cardiac disease, osteoporosis, and muscle loss. In the case of Mr. Ralph De Angelo, both the surgeon and the radiation oncologist got paid handsomely. They both likely thought they were doing the right thing.
However, Ralph got the side effects, and his quality of life was destroyed (too much collateral damage?). One parting remark by Dr. Otis Webb Brawley is very relevant here: "Prostate-cancer screening and aggressive treatment may save lives, but it definitely sells adult diapers."

Part IV: Practice Application. After reading How We Do Harm, I think that it will affect my nursing practice in many ways. As a healthcare professional, this book reminds me of the importance of staying current with my nursing knowledge through continuing education and through reading resources from organizations such as the Mayo Clinic, the American Nurses Association, and the Centers for Disease Control and Prevention. In addition, as a patient advocate, this book reinforces my desire to empower all patients under my care, so that they may be active partners in their healthcare. More than ever, I will encourage my patients to educate themselves on their diseases and to learn about all their treatment options, so that they may make the best choices for their healthcare.

How American Medicine Does Harm to Patients: With powerful incentives set in motion, many hospitals and oncology practices in the US instructed nurses to ask leading questions about "fatigue," with the intent of expanding sales to a growing number of patients and upping the dosage for each patient. This is referred to as "an ESA treatment opportunity" (ESAs are erythropoiesis-stimulating agents, drugs used to overcome fatigue and low blood counts) (page 85). To increase their earnings, drug companies and doctors set out on a search for treatment opportunities, often forgetting about the sacred trust between doctors and patients (page 85). The exact magnitude of the harm is harder to gauge… most of the money was spent on drugs (e.g., ESAs) that were prescribed for the wrong reasons and under false, manufactured pretenses.
These drugs were not used to cure disease or make patients feel better. They were used to make money for doctors and pharmaceutical companies at the expense of patients and insurance companies… the technical term for this is overtreatment, and overtreatment equals harm (page 97). Doctors do some horrible, irrational things under the guise of seeking to benefit patients… offering a bone marrow transplant to a breast cancer patient and giving prophylactic doses of ESA drugs are only a few examples. The system rewards us for selling our goods and services, and we play the game (page 122). You don't deviate from the science. You don't make it up as you go along. You have to have a reason to give the drugs you are giving. You have to tell the patients the truth (page 145). Commenting further on ESA drugs: some doctors didn't bother to check what the patient's hemoglobin was and erred on the side of giving the ESA every time they gave chemotherapy. Doctors routinely prescribed the drugs for uses in which they had not been studied, such as anemia caused by the cancer itself, as opposed to anemia caused by chemotherapy (page 78). Doctors try out things just to see whether they will work (page 160). Earlier in the book, Dr. Brawley mentions that "a hospital was the place where they withheld treatment or where they tried things on you without telling you what they were doing and why" (pages 29-30). When a drug succeeds in controlling cancer, we learn about it at conferences and in scientific journals. Stories of our fiascos, though no less instructive, are almost invisible, especially if they are cautionary tales that lay bare the fundamental flaws in the system (page 157). Cancer is hard to understand, and yet doctors rush patients (page 182). Survival measures the time that elapses after diagnosis. Diagnosing a cancer earlier therefore increases measured survival: the more you diagnose, the more you push up survival (page 193).
Somewhere along the way, we have been conditioned to believe that a new treatment is always better (page 197). A new drug must be better than the old. A new medical device must also be better (page 202). Inappropriate use of certain drugs can be attributed to the profit motive. A recent study of prescribing patterns demonstrated that as soon as the profit motive weakened, inappropriate prescribing of these drugs dropped (page 197). The overuse of radiologic imaging is a major problem: "up to one-third of radiologic imaging tests are unnecessary. This is a serious problem, not just because these tests are expensive, but because they expose the patient to radiation that can cause cancer. Some have estimated that 1% of cancers in the United States are caused by radiation from medical imaging" (page 202). Even when administered properly, cancer drugs can bring the patient to the brink of death. An overdose can easily push him off the cliff (page 279). Much of the money currently spent on healthcare in the US is wasted on unnecessary and harmful sick care. Even for the sick, a lot of necessary care is not given at the appropriate time. The result is more expensive care given later (page 281). The medical profession frequently allows bad doctors to continue to practice. The profession doesn't police itself. Chalk it all up to apathy. Or ignorance (page 282). Many physicians are ignorant of some aspects of the field of medicine in which they practice. They tend to think the newer pill or newer treatment must be better because it is new. Ignorance is a failure to think deeply. It is a failure to be inquisitive. It is a failure to keep an open mind (page 282). Dr. Brawley's most direct critique of our healthcare system: "America does not have a health-care system. We have a sick-care system."