Saturday, August 31, 2019

Epiphany

Who I Am

As a child I grew up telling myself and everyone else that I never wanted to get married and have children. I watched my mother get married and divorced twice and saw what kind of pain that inflicted on her and us kids. I thought that I would be a better person if I stayed single and didn't have any kids to worry about. Of course, I fell in love early in adulthood and decided to have children. A few years later my fear of becoming like my mother with respect to marriage, divorce, and having kids came alive. I felt like such a fool for allowing that to happen to me.

Usually by the time I get home from work and picking up the kids, it is late and I do not feel like taking the time to actually cook a meal. One evening we got home earlier than we usually do, so I decided to fix dinner, sit down, and actually eat as a family. I can remember standing in front of the stove thinking of the frustrating, long day of work I had, getting aggravated because the kids were running around the house. The kids were playing and being loud, which is what a 4-year-old boy and a 4-year-old girl would do. Then it suddenly became quiet, and Patrick came to me and said, "Mommy, you know what?" I said, with an annoyed tone of voice, "What, Patrick?" He said, "You're Superman." I picked him up and gave him a big hug. In that moment I realized that out of all that has happened to me in my life, I am truly grateful that I have my children and that I am actually a better person because God brought them into my life.

Human DNA and Sexual Differentiation Essay

With respect to understanding human evolution, there has thus far hardly been a greater academic marriage than that between physical anthropology and genetics. For anthropologists the union has been particularly beneficial as DNA has been incorporated into the quest to understand human evolution. Some scholars have referred to this as the culmination of the evolution of the once distinct fields represented symbolically by Darwin's theories on evolution and Mendel's speculation regarding genes; one scholar has opined that Darwin and Mendel are the core, the essentials of understanding: "These basics work together. The gene pool — the hereditary property of a population of animals — maintains the variation of the population or species, and mutation tends to increase that variation. Darwin's selection cuts back the less favorable variation, in that way sculpting the inheritance of the species" (Howells 8). Fossils and genes, taken together, illuminate in ways that one without the other simply cannot. This refers to the discovery of positive knowledge as well as the discovery of long-established fallacies in the field of physical anthropology (Marks 131). This essay will focus on a few types of positive knowledge regarding the evolution of human DNA. More specifically, this essay will discuss how DNA variation can be used to explain some of the evolutionary physical features for sexual differences in humans as they pertain to language, sexuality, and visual-spatial skills. As a preliminary matter, it is important to acknowledge that human sex differences were not always as pronounced as they are today. There were genetic variations that occurred over a long period of time, and these genetic differences are evident in the fossils used by physical anthropologists to piece together how and why DNA has evolved as it has over the course of time. Scholars seem to agree that the evolution of human DNA is unique in certain respects; for purposes of this essay, it is significant to note that, regarding sexual differences in species, "It is apparent that these same cross-species sex differences have become more pronounced in humans" (Joseph 35). The evolution of human DNA with respect to sexual differences is greater than has been found in studies of other species. It has been demonstrated that DNA evolution led to Homo erectus females experiencing a vaginal reorientation at the same time that males experienced a change in pelvic structure (Joseph 35). The consequences were tremendous, as this likely resulted in the development of long-term relationships between males and females; this is because, rather than being dependent on estrus in order to get pregnant, females were now physically and genetically configured to be sexually receptive continuously rather than sporadically. These long-term relationships also seem to have coincided with males and females establishing more permanent or semi-permanent homes. It can be argued, to some degree at least, that this genetic variation led to an embryonic notion of marriage and home. These human sex differences were further accelerated with the genetic evolution of the brain; indeed, as the brain became larger, "this required a larger birth canal and an increase in the sexual physical differentiation in the size and width of the H. erectus" (Joseph 35).
DNA varied to accommodate these changes, and they are manifest even today in the way that women walk as well as in the more fragile nature of their pelvic bones when compared to their male counterparts. As the female was evolving there were practical consequences; for instance, "The transformation of the human female hips and pelvis, however, also limited her ability to run and maneuver in space, at least, compared to most males" (Joseph 35). These DNA variations thus functioned to separate males and females and to lay the physical groundwork for other changes. This evolution in human DNA, in turn, led to a division of labor predicated on these newly exaggerated differences between the sexes. Generally speaking, women became gatherers and men became hunters. Each of these roles demanded different types of skills, and the human animal adapted through the mechanism of its DNA. The female role demanded careful language skills rather than violence, whereas the male role demanded aggression and physical strength. In explaining how the male DNA evolved to adapt to the male's developing function, one scholar has noted that "successful hunting requires prolonged silence, excellent visual-spatial and gross motor skills, and the capacity to endure long treks in the pursuit of prey. These are abilities at which males excel, including modern H. sapiens" (Joseph 35). In short, many of the human sexual differences noted today can be traced to the ways in which human DNA has evolved over time in order to adapt to changed environments and to changed sex roles. In the final analysis, even a cursory examination of the history of the evolution of human DNA suggests rather persuasively that there are watershed events which can aid in understanding the uniqueness of sexual differentiation in humans and how sex roles evolved in response to that sexual differentiation.

Friday, August 30, 2019

Role of Nature in the Evolution of the Modern Cities

3.0 LITERATURE REVIEW My thesis aims to research the importance of nature to an urbanite's life: the fast-paced yet numbingly routine life in this concrete jungle. There is no one definition of the relationship of man and nature in the urban context of a city, and it requires a multi-fold exploration to arrive at any conclusion. My exploration begins with a study of the history and development of urban landscape vs. natural landscape in cities. This is followed by research on the effectiveness of existing arrangements of the green relief pockets found in the city and their relationship with urbanism in the city. This forms the basis of research for future propositions made by critics and professionals, leading to any comments that can be made on the relevance of improvement and alteration of the urban morphology. Through this layered research, I aim to better understand the urban morphology in light of the integration of natural relief spaces into the urban landscape and its impact on urbanites and their social behaviours. 3.1 Role of Nature in the Development of the Modern Cities In the modern era of development (19th to 20th century), the growth of urbanisation [1] and the modern cities has been a very rapid process. Contrary to the past, where human settlements peacefully coexisted with nature [2] (refer to Figure 1), recently there has been a change of pattern. The new architectural layout of the human settlements is a web of cold concrete jungles with little concern for the role of nature in the urban landscape. Modern cities came as an answer to the population growth after the industrial revolution. [3] Cities grew larger, became the backbone of the economy, and, following the movement of modernism, [4] came the alterations in the lifestyle of urban dwellers. Exponential growth in the construction of high-rise buildings, modern homes etc. replaced and destroyed the natural landscape, paving the way for more steel and concrete establishments. This was the age of 'man over nature', [5] where urban planners [6] followed the philosophy of generic forms, with no attention to localized environments and natural landscapes. Nature was a malleable entity, carved, flattened, relocated and artificially recreated to fit the demands of the built environment created by man. [7] Therefore, the concept of green relief spaces and the importance of natural landscape is either simply not considered, or an afterthought, treated as sheer ornamentation to the buildings. This leaves the cities, which house the larger share of the population, [8] with nothing more than hints of green spaces, causing man to lose all connections to his origins, i.e. nature: '[…] there were few who believed in the importance of nature in a man's world, few who would design with nature.' [9] Karachi fared not very differently from this general description of modern cities. Furthermore, being the largest revenue producer and biggest of the few metropolitan cities of Pakistan, it entertains a high inflow of rural-urban migration. [10] In order to accommodate the rampant expansion in numbers, the city is growing beyond bounds (Figure 2) and destroying the surrounding natural landscape in the process. [11] These studies of the context of natural landscape within the urban landscape take me to research on how this current composition of the urban landscape impacts its user.
2.2 Urbanism: Between the Urbanite and the Urban Landscape The first text under discussion is 'A Game on the Urban Experience and Limits of Perception', [12] a paper that uses the word play to '[…] translate the idea of sociability and sensibility', [13] and to highlight the ability of architecture to limit human perceptual [14] interaction. It touches upon various subjects under the category of urban spaces of cities, their architecture and their influence on people. The feature corresponding to my particular field of study is the attempt to understand how the architectural composition impacts the everyday life of the urban dweller. The research proposes the use of new mapping techniques of psychogeography [15] in the squatter settlement of Istanbul (Pinar Mahalle), as they reflect the '[…] personal routes, discoveries, psychological distances, and expressions […]' [16] of the participant under observation. This brought forth two main areas of focus: the everyday rhythm of daily life experiences and the limited 'multi-sensory perception in urban experience'. [17] Psychogeography, the collision of psychology and geography, [18] is used as the method of reviving the urban experience of everyday life, in a manner that arouses a sense of playfulness and awareness within the participants, i.e. the users of the space. This playful enthusiasm gives way to the 'Theory of Drive', [19] which tests the geographical limits restricting perception. [20] The dimensions of the boundaries of '[…] social attractions and emotional zones of the urban geography' [21] need to be recognized so they may be extended to accommodate the participants. One dominant theme that stands out in the paper is the need for intervention in, or adaptation of, existing urban spaces to create more than merely a visual experience: 'Instead of mere vision, or the five classical senses, architecture involves several realms of sensory experience which interact and fuse into each other.' [22] This ability of architecture needs to be explored and integrated in design at the urban level so that within these crowded cities some degree of interaction and intimacy may be developed. However, if these measures are not taken, people will remain stuck in a rut, detached from one another, missing out on mutual benefits and compromising on a complete multi-sensory experience of spaces. The second shortcoming of the urban landscape highlighted by this paper is the cold, dead composition of the environment. The design format and layout is routine and monotonous and lacks any form of relief space, visual or physical. Therefore, the dire need for change in the existing format of these cities is made apparent. The findings of this paper are limited in terms of contextual relevance; however, a couple of the arguments discussed above are not far from the truth of Karachi's cityscape. Furthermore, the methods employed for research can be carried forward as part of primary research techniques. [23] The paper also highlights the role of the architectural design and layout of the city as a core player in the game, defining the lifestyle of the players.
Baig [24] supports this argument by stating: 'It is not people alone who generate the city's ethos; rather the inanimate objects, such as the urban landscape, also contribute towards forming the urban spirit.' [25] The 'urban mizaaj' (i.e. urban temperament) is dependent on the opportunities of lifestyle presented to the people by the 'inanimate objects' [26] around them. The largest percentage of inanimate objects of any city is buildings and their connections, i.e. architecture; thereby, under the theory of Architectural Determinism, [27] the built environment becomes the chief dictator of social behaviour and interactions. [28] After understanding the impact of the urban landscape on human lifestyle, the next category attempts to explore the relationship of the urbanite and the natural landscape, in order to establish whether some of the gaps in the above discussed relationship can be filled through the addition of natural landscape. 2.3 Relationship of the Urbanite and Nature As the modern cities continue to progress towards a tech-savvy [29] future, the modern man's isolation from nature continues. Our technophilia [30] and technophobia [31], i.e. the love and fear of technology, drive us to want such a strong command over technology that it becomes our slave. However, our increasing dependence on technological advancements has reversed the roles, and man has become a slave to technology. Robert Thayer [32] states that our love for technology can be demonstrated by the 'current residential landscape, dominated by house, driveway and garage' [33] along the wide roads built to encourage the use and ease of automobiles. We then hide behind a green facade and continue to live through this heavy technological support system. [34] The result of this isolation is the emergence of the term 'solastalgia': the pain experienced when we withdraw from a natural place we love and cherish. [35] Louv, in his books, further argues the need for interaction between man and natural landscape and the effects of a lack of this interaction. In his first book, 'Last Child in the Woods', [36] he put forward the disadvantages to the development of children due to lack of exposure to 'Vitamin N' (N for Nature), [37] causing a syndrome of 'Nature Deficit Disorder'. [38] This is not a medical diagnosis, but it is used to create awareness of the damaging effects of this divide. These theories spawned many outdoor classroom programs, and the incorporation of interaction with nature for children has now become a more popular idea. [39] However, the book had a farther-reaching impact than merely the restructuring or new experimental techniques of education; it also stimulated the nostalgia of many adults. Adults either reminisced about the memories of a childhood different from that of their children, or related to the symptoms of the alienation from nature. He further supports his argument with simple illustrations such as: "Depressed people who were prescribed daily outdoor walks improved their moods compared to patients walking in a mall. Alzheimer's patients exposed to natural light fluctuations experienced less agitation and wandering." [40] The deficiency that Louv discusses in his works highlights the importance of 'Vitamin N' to enhance our physical and mental health.
This concept can now be tied back to the discussion in the previous section of the relationship between urbanites and the urban landscape. The flaws in the urban landscape are having a damaging effect on the city dwellers and can be countered with the integration of the natural landscape into the cityscape. Testing this argument further, the next section entails a study of the connections lost between man, nature and cities; whether there is a need to reconnect, and how these connections may be made. 2.4 Man and Nature within the Urban Landscape My next text, 'Design with Nature', [41] begins with a comparison of the city and the countryside and the stark differences between the two. When exhausted with the overpowering city, one retreats to the soothing countryside. However, as much as urbanites crave the relief found in the countryside, they need the city, whether for the compulsion of work or to fulfil the need to be part of the fast-paced life; therefore, they are drawn back to it. This reflects the divide in the feelings of man, torn between the roads leading to city and countryside, coining the inquiry of the author of this book: 'It is my investigation into a design with nature: the place of nature in a man's world […]' [42] The author writes from the personal experience of having grown up in the industrial years of Glasgow and highlights the pros and cons of the city vs. the countryside. From the beginning, the book distinguishes the two poles, nature vs. built, with man caught in the middle. This brings forward a very important line of thought: "[…] if we can create the humane city, rather than the city of bondage to toil, the choice of city or countryside will be between two excellences, each indispensable, each different, both complementary, both life-enhancing, man in nature." [43] This excerpt highlights the machinelike, cold character of a city discussed in the first part of this research, and how an escape to the countryside is merely a patch solution. Therefore, it proves the need for integration of landscape within the urban context of the city. Ian L. McHarg [44] categorizes the city and landscape architecture into multiple chapters, giving a detailed design methodology for integrating nature in urban planning, its application and its need for implementation, by exposing the connections man finds within nature. Within these, the more prominent section is 'The City: Process and Form', [45] where the author explores the relationship of the built environment with nature and how, when the two are paired together, they do not compromise their potential but rather enhance it. He speaks about how the morphology of human settlements should be moulded along the natural morphology. For example, just as guidelines for yards can be defined, there should be rules against building on flood plains. [46] 'We are becoming a land of great cities.
Villages are stationary or receding; cities are enormously increasing […]' [47] Similar to McHarg's thoughts on the 'city of bondage to toil, the choice of city or countryside', [48] Ebenezer Howard [49] at the beginning of his book, Garden Cities of To-morrow, [50] talks about two magnets, the town and the country, but in his analysis he proposed a simple cure: 'Human society and the beauty of nature are meant to be enjoyed together; the two magnets must be made one', [51] therefore resulting in the third magnet, the 'Town–Country'. [52] Garden Cities of To-morrow goes on to give model plans (Figure 4) and details for a feasible system of town-country that developed with a central park at its heart. These ideas and proposals were put forth with the intention of uniting the best of both worlds, bridging the gap between the rural and the industrial city. [53] Critics consider Howard's proposed system a rather utopian solution to urban problems; nevertheless, while the plans proposed may not be ideal, the ideas can still be translated into new derivations. Bringing the research closer to home, to the city of Karachi, research work concerning open green spaces, neighbourhood parks, nature belts etc. is being done. 'Urban open green spaces are an important agent contributing not only to the sustainable development of cities but are considered as one of the most critical components in maintaining and enhancing the quality of life especially of urban communities.' [54] Muhammad Mashahid Anwar, in his paper 'Recreational Opportunities and Services from Ecosystem Services Generated by Public Parks in Megacity Karachi-Pakistan', [55] sheds an interesting light on people's perceptions and views on the various public green spaces of Karachi. Anwar carried out a survey, with consultations of two varying income groups and neighbourhoods, Defence Housing Authority and Gulberg Housing Town. Results showed people's intention to use green public spaces, their willingness to pay if it ensures a clean, well-maintained environment, and the most popular usages of these public parks to be nature appreciation, light exercise such as walking, and relaxation. The overall survey proves people's knowledge about the subject and their concern for it, as the majority recognized its advantages of lower air temperatures, countering air pollution, aesthetic enhancement, recreational output etc. [56] The above texts study the urban settings of cities and the role of nature, or the lack of nature, in these cities. Psychogeography helps determine boundaries of sociability of spaces and multi-sensory experience, while 'Design with Nature' [57] and 'Garden Cities of To-morrow' [58] highlight the need for the multi-sensory experience to feed off nature. Therefore, an overlap of these multiple layers can put forth a picture of how Karachi's urban form can incorporate 'nature' interventions, by redefining the urban landscape composition.

Thursday, August 29, 2019

Ethics Critique

According to psychological research, moral judgments are shaped by the human mind and behavior (Ross, et al. 345). On the same note, moral judgments are influenced by what a person perceives to be right or wrong. In this respect, the issues of morals, norms, and ethics emerge. These three issues vary from one person to another for differentiated reasons, among them individual growth and development, cultural effects, and the impact of the society on an individual. Therefore, based on the work of the mind and the underlying human behavior, a person can make moral judgments that do not necessarily match those that might be made by another person. Psychological research essentially explains how human beings make moral judgments, based on the human mind and behavior (Ross, et al. 358). The right or wrong factor at an individual level is accounted for, alongside virtues and ethics that are also based on the human mind and behavior. The link between all the aforementioned variables can help in explaining the thoughts, judgments, or actions that an individual, or society for that matter, undertakes regarding any given situation or condition. In this respect, judgments or actions by human beings can be justified through psychological research. In understanding how human beings act, feel, and think prior to making moral judgments, psychological research factors in a number of variables that influence the whole process. To start with, human beings must be aware of some given form of morals in order to enable them to make moral judgments. In other words, they must be in a good position to distinguish between right and wrong. This aspect is shaped by the environment, behavior, culture, and society among other variables. Once the human being is potentially in a position to differentiate right and wrong, the issue of moral

Wednesday, August 28, 2019

Types of Accounting Systems

"Under cash based accounting, revenue is recorded when cash is received, and expense is recorded when cash is paid" (Weygant, et. al. 2002, pg. 89). The use of cash based accounting is suitable for small businesses that deal primarily in cash, such as a hot dog vendor or a pizza cart. The use of cash based accounting is not in compliance with the generally accepted accounting principles, thus public companies cannot utilize this method of accounting because it would violate GAAP and SEC mandates. It is easier to implement cash based accounting when the firm does not have accounts receivable or accounts payable. It is possible for accountants to convert a system from cash basis accounting to accrual basis accounting. The process is time consuming due to the fact that the accountant must use a lot of adjusting entries. The users of financial statements, or stakeholders, require precise and accurate financial statements that are free of fraud and material errors. The major stakeholder groups that often use the financial information of companies to make decisions include the employees, lenders, shareholders, board of directors, suppliers, managerial staff, governmental institutions, and the community. The employees need information regarding the financial activity of the company they work for to provide them with security that the company is aligned with the going concern principle. The lenders evaluate the financial statements of companies to determine whether to lend them money or not. Banks and others rely on the accuracy of the financial statements to make decisions worth thousands or millions of dollars. Suppliers often extend credit lines to corporate customers based on their evaluation of the financial performance of an enterprise. The general public expects corporations to act in a socially responsible manner at all times. The shareholders make buy and sell decisions based on the results of the financial statements. Wall Street would collapse if investors stopped believing in the accuracy of financial statements. Back at the turn of the century a series of financial scandals caused investors in the US to lose confidence in the accuracy of financial statements released by public companies. The US Congress reacted by passing the Sarbanes-Oxley Act of 2002. The Sarbanes-Oxley Act raised consumer confidence, overall accountability, and accuracy, and it imposed severe penalties for white collar crimes. Executive managers such as CEOs found guilty of fraudulent financial activity can receive penalties of up to 20 years in prison. The CEO now has to sign the financial statements prior to their release to certify that they are free of fraud and material error. Accountants utilize a concept known as depreciation to reflect the loss in value of equipment or machinery as time passes. The most common depreciation method used by accountants in the United States is straight line depreciation. Straight line depreciation is calculated by dividing price minus salvage value by lifetime in years: (price – salvage value) / (years). Depreciation helps adjust the value of an asset. Companies that depreciate their assets receive a tax benefit because depreciation is categorized as an expense that lowers the net earnings of the company. Three additional cost flow methods used by accountants are LIFO, FIFO, and weighted average. The MACRS depreciation method is one of the best methods to reduce taxes in the short
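As a rough illustration of the straight-line formula quoted above, the following sketch computes the annual expense in Python. The asset's cost, salvage value, and useful life are hypothetical figures chosen for illustration, not numbers taken from the essay.

```python
# Minimal sketch of straight-line depreciation as described above:
# annual expense = (cost - salvage value) / useful life in years.
# All asset figures below are hypothetical.

def straight_line_depreciation(cost, salvage_value, useful_life_years):
    """Return the annual straight-line depreciation expense."""
    return (cost - salvage_value) / useful_life_years

# A machine bought for $50,000 with a $5,000 salvage value and a 10-year life
annual_expense = straight_line_depreciation(50_000, 5_000, 10)
print(f"Annual depreciation expense: ${annual_expense:,.2f}")  # $4,500.00
```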

Tuesday, August 27, 2019

Service Industry in Context - T-Mobile

As of recent times, the company has generated revenue of around 14.8 billion through operations handled by an employee strength of over 34,500 employees (Deutsche Telecom, 2012, p. 18). The company's planning and strategy based operations are initiated from its US headquarters, located in the Bellevue area in Washington. In terms of customer statistics, the company offers its product and service offerings to around 33.3 million customers in the US, using technology platforms like GSM, UMTS etc. (t-mobile.com – b, 2012). The well known and highly reputed global telecom company has a wide array of products and services that appeal to customers around the world. With regard to its product portfolio, the company manufactures and markets telecommunication devices of the latest technology, like smartphones and Windows phones, as well as the smartphones of various well known global mobile companies. The company's product offerings also comprise various other technological devices like tablets, headsets, mobile chargers etc. (t-mobile.com – c, 2012). In the US, the company is a national level service provider whose service offerings for the US market comprise various essential and useful telecommunication services like voice, wireless messaging, and high speed data service (t-mobile.com – d, 2012). The company's service portfolio comprises data communication plans for mobiles and computers, which are highly segmented to suit the individual needs of the customers on the basis of their consumption usage. The company also provides high speed data connectivity services like broadband as well as 4G services to customers located in the United States. In an attempt to provide significant value to the customers, the company has also focused on eliminating the charges for mobile devices with regard to the data plans, thereby providing and promoting more transparency in the pricing plans of its services to the customers. In terms of the company's recent financial performance, it generated revenue of around $4.9 billion from the sales of telecommunication equipment in the third quarter of this year, an increase of around 6.4% when calculated on a year on year basis. The company also recorded revenue of around $4.3 billion in total service revenue for the third quarter of this year. In terms of average revenue per user (ARPU), the company generated $27.35, an increase of over 12% on a year on year basis. The average revenue per user recorded an increase because of the significant rise in monthly 4G subscriptions by the customers. The company, apart from providing a stellar performance in terms of revenue generation in the third quarter of this year, has focused on developing its customer base in a move to achieve significant operational efficiency. As a result, the company has recorded
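The ARPU figure cited above is, at bottom, service revenue divided by the subscriber base and the number of months. A minimal sketch of that arithmetic follows; the subscriber count is a back-solved placeholder chosen to reproduce the quoted figure, not a number reported in the passage.

```python
# Hedged sketch: monthly ARPU (average revenue per user) for one quarter.
# The subscriber count below is an illustrative placeholder, not an
# officially reported T-Mobile figure.

def quarterly_arpu(quarterly_service_revenue, subscribers):
    """Monthly ARPU = quarterly service revenue / subscribers / 3 months."""
    return quarterly_service_revenue / subscribers / 3

arpu = quarterly_arpu(quarterly_service_revenue=4.3e9, subscribers=52.4e6)
print(f"Monthly ARPU: ${arpu:.2f}")  # roughly $27.35 with these inputs
```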

Monday, August 26, 2019

Exam Paper

The semi-strong form of efficiency is a class of the EMH (efficient market hypothesis) which claims that all public information is calculated into a stock's current price, hence there is no way either fundamental or technical analysis can be applied to achieve superior gains. It claims only non-publicly available information can be used by investors to earn abnormal returns on their investments, as all the other remaining information is accounted for in the prices of the stocks, and no fundamental or technical analysis will result in above normal returns. Strong form efficiency is the strongest, as the name suggests, as it states that all the information in a stock market, irrespective of whether public or private, is accounted for in the stock price; not even insider information could give an investor an advantage, hence profits exceeding normal returns cannot be made regardless of the amount of research or information available to the investors. If a company trades its shares in a stock market with a semi-strong efficient market, the investors are likely to use the privately available information to make abnormal returns on their investment. Answer 2 The increase of the interest rates by the central bank results in loans from commercial banks becoming expensive, as they also raise their interest rates to cover for the rise by the central bank. ... Therefore, one thing that has to happen is that the sales by Tintin will have to drastically reduce by a wide margin, as the purchases of such goods by consumers will go down due to their escalated prices and a lack of necessity. In addition, in the long run the bank is likely to adjust its interest rates to accommodate the changes, which results in an increase in the operation costs of Tintin due to increased rates of interest. The increased costs have an extended impact which translates into reduced earnings of the firm. The economic situation at such times is volatile, and the economic component which experiences this is the business people, since they find it quite hard to balance between demand and supply. Answer 3 3. (a) There are a number of incomes which are not taxed or are not subjected to income tax. They include: incomes realized by taxpayers to the extent of debts forgiven, payments from state sickness or disability funds, compensation received under the workers compensation act, interest earned from tax exempt municipal bonds, and income from the sale of one's primary residence, whether it is sold at a profit or at a loss. Others include: incomes in the form of life insurance money and non-taxable gifts (as a gift is exactly what it sounds like), fringe benefits from employers, and child support funds as well as foster care payments. All these are not subject to income tax according to the law. 3. (b) Higher rate tax payers are subjected to a different tax rate bracket as compared to the lower rate tax payers. Therefore, as the lower rate tax payers will be paying tax at 20%, the higher rate tax payers will pay the taxes at 40%. Therefore an investor who received a 90
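To make the 20%/40% two-band structure concrete, here is a small sketch of the arithmetic. The higher-rate threshold of 35,000 is a hypothetical value chosen for illustration; the passage does not state one.

```python
# Hedged sketch of a two-band income tax as described above: income up to
# a threshold is taxed at the lower rate (20%), income above it at the
# higher rate (40%). The threshold value is a hypothetical illustration.

def income_tax(taxable_income, threshold=35_000, lower=0.20, higher=0.40):
    """Return tax due under a simple two-band schedule."""
    if taxable_income <= threshold:
        return taxable_income * lower
    return threshold * lower + (taxable_income - threshold) * higher

print(income_tax(30_000))  # lower-rate payer: 6000.0
print(income_tax(60_000))  # higher-rate payer: 7000 + 10000 = 17000.0
```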

Sunday, August 25, 2019

Monetary Policy of the Bank of England

To measure inflation, the consumer price index is used, which measures the changes in the prices of a fixed basket of goods and services and compares the new prices with the prices set in the base year. The change therefore outlines how much inflation has emerged in the economy over the period of time. There are different price indices which can be used to measure inflation; however, the consumer price index or CPI is widely used as a measure of inflation in the economy. Other indices include the producer price index, commodity price index etc., and these indices measure different aspects of price change over the given period of time in any economy. Inflation generally can be of two types, i.e. cost push and demand pull inflation. Cost push inflation occurs when there is a decrease in the aggregate supply due to the increase in wage rates as well as increases in the prices of raw materials. These economic variables can therefore cause the aggregate supply to decrease, thus pushing the prices of goods and services up and thereby increasing inflation within the economy. Demand pull inflation can occur due to an increase in aggregate demand and can therefore cause the price level to rise. This mostly occurs due to an increase in the aggregate money supply or the expansionary fiscal policies adopted by the government. Why Inflation Arises? Inflation also tends to occur when the overall aggregate demand for goods and services increases more rapidly than the increase in the aggregate supply of goods and services. There can be different factors which can actually cause this imbalance between aggregate supply and demand in the economy. The key reasons as to why this imbalance may occur can be due to the increase in the consumption level, an increase in the investment... This essay outlines the detrimental effects of high inflation for the growth of the UK economy, and aims to determine optimal monetary policies for the Bank of England. Inflation is considered a rise in the general price level in an economy over a given period of time. It therefore measures the rate of change of prices over a given period of time and indicates a percentage rate by which prices of goods and services have generally increased during the given period of time. UK's inflation rate has recently been soaring at a high rate and there is a strong probability that it can increase further in the future. At this time, when the economy is at a very fragile point, such a high level of inflation can actually discourage consumers from spending and thus put further pressure on the economy due to lack of demand. Over the period of time, the Bank of England has taken measures to keep interest rates at really low levels in order to ensure that easy credit is available to consumers at relatively low rates. The idea was also to induce consumption in order to regenerate demand and increase economic activity. However, the continuation of this policy seems to have backfired because of the rapid increase in inflation in the economy. The increase in the inflation rate has been mostly attributed to the expansionary monetary policy adopted by the Bank through quantitative easing as well as the reduction in interest rates. The BoE must develop the reputation and credibility for its steps to reduce inflation.
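Since the passage defines the CPI as the cost of a fixed basket of goods compared against its base-year cost, the calculation can be sketched as follows; the basket contents and prices are invented purely for illustration.

```python
# Hedged sketch: inflation measured via a fixed-basket consumer price
# index, as described above. Basket quantities and prices are invented.

def basket_cost(prices, quantities):
    """Cost of the fixed basket at the given unit prices."""
    return sum(p * q for p, q in zip(prices, quantities))

quantities = [10, 4, 2]                  # fixed basket of three goods
base_prices = [1.00, 2.50, 5.00]         # base-year unit prices
current_prices = [1.10, 2.70, 5.60]      # current unit prices

base = basket_cost(base_prices, quantities)        # 30.00
current = basket_cost(current_prices, quantities)  # 33.00
inflation_rate = (current - base) / base * 100
print(f"Inflation since the base year: {inflation_rate:.1f}%")  # 10.0%
```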

Saturday, August 24, 2019

Case Study

This desk is responsible for giving out news reports, press releases, handling media, approvals for advertisements, etc. The one-window operation for interacting with media is what has been identified as a successful strategy by various marketing gurus. Consider the example of Barclays; the bank has a single media management window policy, whereby the department is responsible for handling media related issues from press releases to press conferences. From giving an employment ad in the newspaper to a product ad, everything from any department has to come to this media desk, and from there, it gets dispersed to the media. This not only ensures consistency of media management practices but also ensures that there is no misquotation of any management word in the media. Since everything channels through this department, statements are modified prior to appearing in the media to ensure that they comply with the given set of rules of the media desk. Another classical example is that of FMCG firms like Unilever and P&G; if observed closely, it can be seen that the vacancy ads of these firms are highly standardized no matter which job they are for. Additionally, the product advertisements are also very standardized in the points that they should cover, the disclaimers, etc. This clearly indicates that each firm has a specialized advertising desk that is responsible for ensuring that certain particular ingredients are present in all ads that are given out by the respective firms. For any newer firm entering a particular business, it should be known that media is a tremendous resource if utilized appropriately. Its utilization truly depends on how it is tackled by the firm. A business should establish a media desk whereby it is responsible for tackling all media affairs. As mentioned in the example of Barclays, a specialized media desk is effective for businesses because they create a relationship with media activities and their constant

Friday, August 23, 2019

Technology as a strategic factor which helps in the development or dismissal of subsequent firms

The concept of disruptive innovation, as rendered by Clayton M. Christensen, is found to deal mainly with two aspects. He observed the emergence of disruptive technologies mainly along two ends: disruptions based along the low end of the market, and those emerging from the development of new markets. The first set of disruptive technologies is found to produce products which are much cheaper than those produced with traditional technologies. In addition to the cheapness of the product, its usage also strikes a simpler note, being a little less complex than products produced with existing technologies. Hence products produced with such disruptive technologies are generally found to gain a market in lower-end economic areas. The second set of disruptive innovations is noted as that which focuses on the creation of new markets for products which fail to be consumed by the existing market. Further, such disruptive innovation helps to create a market among people who fail to get used to the usage patterns of existing products. Thus this type of innovation helps to create a niche market for products which were previously regarded as inconsumable. The reason why disruptive innovation practices producing products at the lower ends of the market bring a catastrophe for the incumbent manufacturing firm can be analysed as follows. It is found, firstly, that firms tend to invest more in products which are produced through the means of efficient technology and thus are expected to fetch higher returns for the company when sold in the market at large scale, owing to their increasing demand. However, it must be considered that the pace of emergence of new technology is much faster than the growth of market demand for the products. Thus, secondly, when the products produced through disruptive innovation practices are rendered in the market in a spontaneous fashion, the demand for them also starts rising. To this end it is found that the concern is not in a position to make

Jazz is a Unique Style

The crowd was generally dressed casually and had a wide variety of listeners, ranging from young people to much older ones. What I quickly noticed was that the environment was not as quiet as I had expected it would be. I had done a bit of research into Jazz music prior to the concert and I expected the audience to be dead quiet and listening attentively to the music. This was not the case as part of the crowd was unbearably noisy as compared to other concerts I have been to. This select part of the crowd did not seem to appreciate the truly beautiful music that was being performed. In past concerts I have been in, loud conversations and disruptive noises were not allowed as this was seen to be distracting the performers as well as the audience who were listening attentively. The only time there was noise was when the performance was over and everyone applauded. I was amazed though at how well the musicians performed despite this. It was almost as if they had anticipated it. I believe this goes to prove that these musicians can perform anywhere and still do a stellar job. The whole time I was there, I listened closely to the music, watched the performer go through the motions and I felt like my mind and emotions wandered in between the melodies. I must admit that I enjoy the guitar most since my childhood, so watching and listening to John Pisano playing it was very rewarding. I got wrapped up in his playing and got a thrill watching him strum the guitar. I was hooked from the very first song they played. This song had a constant tempo; it was slow, funky and had an earthy sound to it which made it sound more like the blues. John Pisano seemed to be improvising during his solo on the guitar. The bassist was also magnificent on this particular night and his performance was both strong and melodic. This was evident on the second song, in which he actually picked up his bow and employed the arco technique.

Thursday, August 22, 2019

The Old Man and The Sea

Ernest Hemingway was born on July 21, 1899 in Oak Park, Illinois. Throughout his high school career he excelled in sports and English class. For fun Hemingway enjoyed the outdoors, which got him into fishing and camping. When he graduated he started to work for The Kansas City Star as a junior reporter. Hemingway got his style of writing from the Kansas City Star's style guide for writing: "use short sentences, use short first paragraphs, use vigorous language, and be positive, not negative." He wrote many books, one of them being The Old Man and the Sea, which was also made into a movie. In both the book and the movie, the message being conveyed was to "never give up." They say, "Life is a journey; it's not where you end up but how you got there" (www.motivationalwellbeing.com). Both the book and the movie have similarities and differences. The book was very descriptive, such that you were able to imagine and picture in your head what was actually going on. In the beginning of the book, when the Old Man went out to sea again, he saw two porpoises, which he considered to be his friends out across the lonely sea. He said it was as if they were "playing and making jokes and love with each other. They are our brothers like the flying fish" (Hemingway 44). Also, when the bird landed on his skiff, he told the bird that it needed to be brave and go before the hawks came. On the other hand, the movie followed the book very closely, almost word for word. The Old Man looked like Hemingway. Logos was shown in the movie by visually getting to see each step that the Old Man took on the boat. The music also helped you predict when something good or bad was going to happen. You were also able to see the boy cry, which is pathos. The messages that Hemingway was trying to convey were perseverance and to never give up. Hemingway has a unique way of writing. His style included short, declarative sentences with the omission of colons, semi-colons, exclamation points, dashes or parentheses. Hemingway wanted his short sentences to build on each other until they reached a whole storyline. He also used movie-style techniques such as cutting quickly from one scene to the next. His style of writing was called the "Iceberg Theory," because his facts floated above water, while the supportive details or structures holding up the facts were out of sight. The Old Man has wrinkly skin, young eyes the color of the sea, cuts, and scars on his hands. This helps show us how much he has been through. The scars on his hands represent that he has faced hardships, but he has always gotten through them. The new cuts on his hands show that he has not given up and he is still trying. No matter what, you will always fall down, but you're the only thing stopping yourself from getting right back up and moving forward. Throughout his life, he has been presented with challenges to test his strength and endurance. The marlin with which he struggles for three days represents his greatest challenge. He relentlessly fights off the sharks over and over again, keeping as much of the marlin as he can salvage until he gets back to land, not letting any outside forces put him down. The Old Man dreaming about Africa and lions represents him reminiscing on his youth and purity, though he is now an elderly man, getting weaker by the day.
The boy, who had first gone on the Old Man's boat when he was five, has been a friend to the Old Man ever since. The boy would always go fishing with the Old Man, but his parents told their son he was no longer allowed on the boat because the Old Man had the worst bad luck and had not caught a fish in over 80 days. It is as if the boy and the Old Man have switched places, with the boy being the caretaker for the Old Man, "the father figure," and the Old Man being the one who is cared for. Joe DiMaggio also played a role in the storyline even though you never saw him; he was the Old Man's hero. The Old Man worships him as a model of strength and commitment, and his thoughts turn toward DiMaggio whenever he needs to reassure himself of his own strength. Hemingway's unique style of writing allows readers to easily visualize the plot, as if it were a movie. This is done through short sentences that build on one another. This also allows his books, such as The Old Man and the Sea, to be created into movies that are easily comparable. His use of metaphors and descriptive writing makes this possible. Through the use of logos and pathos, Hemingway successfully conveyed the message of perseverance. Over and over again the Old Man was tried, but he never gave up.

Wednesday, August 21, 2019

Gas Chromatography Mass Spectrometry Environmental Sciences Essay

Gas chromatography-mass spectrometry (GC-MS) is a method that combines the features of gas-liquid chromatography and mass spectrometry to identify different substances within a test sample. [6] Gas chromatography (GC) and mass spectrometry (MS) make an effective combination for chemical analysis. [5, 10] The use of a mass spectrometer as the detector in gas chromatography was developed during the 1950s by Roland Gohlke and Fred McLafferty. These sensitive devices were bulky, fragile, and originally limited to laboratory settings. The development of affordable and miniaturized computers has helped in the simplification of the use of this instrument, as well as allowed great improvements in the amount of time it takes to analyze a sample. In 1996 the top-of-the-line high-speed GC-MS units completed analysis of fire accelerants in less than 90 seconds, whereas first-generation GC/MS would have required at least 16 minutes. This has led to their widespread adoption in a number of fields. [6] GC-MS theory and principle The Gas Chromatography/Mass Spectrometry (GC/MS) instrument separates chemical mixtures (the GC component) and identifies the components at a molecular level (the MS component). It is one of the most accurate tools for analyzing environmental samples. The GC works on the principle that a mixture will separate into individual substances when heated. The heated gases are carried through a column with an inert gas (such as helium). As the separated substances emerge from the column opening, they flow into the MS. [3] The GC separates the constituents of a sample as previously described, but as the gaseous sample exits the column and enters the mass spectrometer, it is bombarded with electrons that cause the molecules to become unstable and break down into charged fragments. The positive ions are collected and separated on the basis of their mass/charge ratio. Various analyser types are available depending on what is being studied. We have both a quadrupole type MS and an ion trap type MS available. The resulting mass spectra permit the identification of the analytes. A typical detection limit would be 10 picograms, which makes it much more sensitive than the flame ionising detector on a GC. [2] To effectively use GC/MS evidence one must understand the process. First, the GC process will be considered, and then the MS instrument will be presented. [5] Gas chromatography In general, chromatography is used to separate mixtures of chemicals into individual components. Once isolated, the components can be evaluated individually. In gas chromatography (GC), the mobile phase is an inert gas such as helium. The mobile phase carries the sample mixture through what is referred to as a stationary phase. The stationary phase is usually a chemical that can selectively attract components in a sample mixture. The stationary phase is usually contained in a tube of some sort. This tube is referred to as a column. Columns can be glass or stainless steel of various dimensions. The mixture of compounds in the mobile phase interacts with the stationary phase. Each compound in the mixture interacts with the stationary phase to a different degree. Those that interact least strongly will exit (elute from) the column first. Those that interact most strongly will exit the column last. By changing characteristics of the mobile phase and the stationary phase, different mixtures of chemicals can be separated.
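A toy model of the separation principle just described can make the idea concrete: compounds that interact least strongly with the stationary phase elute first. The compound names and interaction strengths below are invented for illustration only.

```python
# Hedged sketch: elution order in a GC column, as described above.
# Compounds with weaker stationary-phase interactions elute earlier.
# Names and interaction strengths are invented for illustration.

mixture = {
    "compound A": 0.3,   # weak interaction with the stationary phase
    "compound B": 0.9,   # strong interaction
    "compound C": 0.6,   # intermediate interaction
}

# Sort by interaction strength: weakest interaction elutes first.
elution_order = sorted(mixture, key=mixture.get)
for rank, compound in enumerate(elution_order, start=1):
    print(f"{rank}. {compound} (interaction strength {mixture[compound]})")
```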
Further refinements to this separation process can be made by changing the temperature of the stationary phase or the pressure of the mobile phase. The capillary column is held in an oven that can be programmed to increase the temperature gradually (or in GC terms, ramped). This helps our separation. As the compounds are separated, they elute from the column and enter a detector. The detector is capable of creating an electronic signal whenever the presence of a compound is detected. The greater the concentration in the sample, the bigger the signal. The signal is then processed by a computer. The time from when the injection is made (time zero) to when elution occurs is referred to as the retention time (RT). While the instrument runs, the computer generates a graph from the signal. This graph is called a chromatogram. Each of the peaks in the chromatogram represents the signal created when a compound elutes from the GC column into the detector. The x-axis shows the RT, and the y-axis shows the intensity (abundance) of the signal. [1] Figure: schematic diagram of gas chromatography 3.1 Mass spectrometry Mass spectrometry (MS) is a technique used for characterizing molecules according to the manner in which they fragment when bombarded with high-energy electrons, and for elemental analysis at trace levels. Therefore, it is used as a means of structural identification and analysis. Its widest application, by far, is for the structural elucidation of organic compounds. MS involves the ionization (conversion of molecules into positively charged ions) and fragmentation of molecules. Various methods are available to effect such a process, e.g. (i) electron impact ionization, by far the most common mode used, (ii) chemical ionization, (iii) field ionization or (iv) fast atom bombardment. In the more commonly used electron impact (EI) mode, the sample molecules are bombarded in the vapour phase with a high-energy electron beam in the instrument known as a mass spectrometer. This process generates a series of positive ions having both mass and charge, which are subsequently separated by deflection in a variable magnetic field according to their mass to charge (m/z) ratio. This results in the generation of a current (ion current) at the detector in proportion to their relative abundance. The resulting mass spectrum is recorded as a series of lines or peaks of relative abundance (vertical peak intensity) versus m/z ratio. The sample is introduced into the inlet system, where it is heated and vaporized under vacuum, and then bled into the ionization chamber (ion source) through a small orifice. Sample sizes for liquids and solids range from milligrams to less than a nanogram, depending on the detection limits of the instrument. Once the gas stream from the inlet system enters the ionization chamber, it is bombarded at right angles by an electron beam (70 eV) emitted from a hot filament. Only ~20 eV is needed to remove one electron from the molecule to create M+; the remainder is used to fragment the molecular ion into a mixture of radical cations, cations and free radicals. The positively charged ion fragments are then forced through a series of negatively charged accelerating slits towards the mass analyser, where separation of these ion fragments takes place. This analyser tube is an evacuated curved metal tube through which the ion beam passes from the ion source to the ion collector. In early instruments, the fragment ions were deflected in a curved path by a magnetic field only.
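The magnetic deflection just described, which the next paragraph elaborates in terms of field strength, radius of curvature, and accelerating voltage, follows the standard sector relation m/z = eB²r²/2V for a singly charged ion. The sketch below evaluates it; the instrument settings are illustrative assumptions, not values from the text.

```python
# Hedged sketch of the magnetic-sector relation: an ion accelerated
# through voltage V and bent to radius r in field B satisfies
# m/z = e * B^2 * r^2 / (2 * V) in kg per elementary charge, converted
# here to daltons. B, r, and V below are illustrative settings only.

E_CHARGE = 1.602176634e-19   # elementary charge, C
DALTON = 1.66053906660e-27   # unified atomic mass unit, kg

def mass_to_charge(b_tesla, radius_m, accel_voltage):
    """Return m/z in daltons for a singly charged ion."""
    mz_kg = E_CHARGE * b_tesla**2 * radius_m**2 / (2 * accel_voltage)
    return mz_kg / DALTON

# e.g. B = 0.5 T, r = 0.25 m, V = 3000 V  ->  about 251 Da
print(f"m/z = {mass_to_charge(0.5, 0.25, 3000):.1f} Da")
```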
Mass separation depended on the magnetic field strength, the radius of curvature of the magnetic field and the magnitude of the acceleration voltage. The introduction of an electrostatic field after the magnetic field in later instruments permitted higher resolution, so that mass readings could be obtained to four decimal places. In present-day instruments this double-focusing system has been further modified to optimize resolution, and most instruments now use a quadrupole mass analyser to effect separation of the ion fragments. The ions are collected one set at a time, with the aid of collimating slits, in the ion collector, where they are also detected and amplified by an electron multiplier. Mass spectral data are recorded on a computer. Most mass spectrometers are computer controlled nowadays, and scans over mass ranges from 12 to >700 amu can be performed in seconds. [4]

Figure: schematic diagram of mass spectrometry

4. Instrumentation of GC-MS

Figure: the insides of the GC-MS, with the column of the gas chromatograph in the oven on the right

The GC-MS is composed of two major building blocks: the gas chromatograph and the mass spectrometer. The gas chromatograph utilizes a capillary column whose performance depends on the column's dimensions (length, diameter, film thickness) as well as the phase properties (e.g. 5% phenyl polysiloxane). The difference in chemical properties between different molecules in a mixture will separate the molecules as the sample travels the length of the column. The molecules take different amounts of time (called the retention time) to come out of (elute from) the gas chromatograph, and this allows the mass spectrometer downstream to capture, ionize, accelerate, deflect, and detect the ionized molecules separately. The mass spectrometer does this by breaking each molecule into ionized fragments and detecting these fragments using their mass-to-charge ratio.

Figure: GC-MS schematic

These two components, used together, allow a much finer degree of substance identification than either unit used separately. It is not possible to make an accurate identification of a particular molecule by gas chromatography or mass spectrometry alone. The mass spectrometry process normally requires a very pure sample, while gas chromatography using a traditional detector (e.g. a flame ionization detector) cannot distinguish multiple molecules that happen to take the same amount of time to travel through the column (i.e. have the same retention time), which results in two or more molecules co-eluting. Sometimes two different molecules can also have a similar pattern of ionized fragments in a mass spectrometer (mass spectrum). Combining the two processes makes it extremely unlikely that two different molecules will behave in the same way in both a gas chromatograph and a mass spectrometer. Therefore, when an identifying mass spectrum appears at a characteristic retention time in a GC-MS analysis, it typically lends increased certainty that the analyte of interest is in the sample. [6]

Figure: schematic of GC/MS

4.1 Inlet system
Samples are introduced to the column via an inlet, typically by injection through a septum. Once in the inlet, the heated chamber acts to volatilize the sample. [6]

4.1.1 GC-MS interface
In the GC-MS system, the link between the two instruments is called an interface; it may take the form of a jet separator, whose purpose is to (1) enrich the sample and (2) adjust the vacuum to the high-vacuum conditions needed for MS analysis of the column eluent.
[11, 9] After separation of the components by the GC, we need a way to introduce the sample into the MS; this is the role of the interface. An ideal interface should:
- quantitatively transfer all of the analyte;
- reduce the pressure and flow from the chromatograph to a level the MS can handle;
- not cost an arm (or a leg).
No interface meets all of these requirements. The major goal of the interface is to remove the majority of the carrier gas from the column effluent. The main interface designs are the molecular separator, the permeation separator, the open split and the capillary direct interface. [7]

4.1.2 Molecular separator
This is the most popular approach when packed columns are used, and it is based on relative rates of diffusion: the smaller carrier-gas molecules diffuse more rapidly and most will miss the MS entry jet, while the larger analyte molecules diffuse more slowly and tend to enter the MS entry jet. [7, 11]

Figure: molecular separator

Advantages: it is a relatively simple and inexpensive approach.
Disadvantages: the rate of diffusion is molecular-weight dependent, and if the jet becomes partially plugged you can end up with an excellent carrier gas detector. [7]

4.1.3 Permeation interface
A semi-permeable membrane is placed between the GC effluent and the MS. The major problems with this approach are that the membrane is selective on the basis of polarity and molecular weight and is slow to respond, and that only a small fraction of the analyte actually permeates through the membrane. [7]

Figure: permeation membrane

4.1.4 Open or split interface
In a split system, a constant flow of carrier gas moves through the inlet, and a portion of that flow transports the sample into the column. [6] The chromatographic column leads to a T-shaped piece that contains a smaller-diameter tube. A platinum or deactivated fused-silica capillary also leads to this tube and goes into the mass spectrometer source. The capillary is kept in a vacuum-sealed device and is heated to avoid condensation. The T-shaped tube is closed at both ends but is not sealed, so that the pressure remains equal to atmospheric pressure. Helium is passed continuously to avoid any reaction of the gas. [9] The MS pulls in analyte at about 1 mL/min through a flow restrictor; if the column flow is above that, the excess is vented, and if it is below, helium from the external source is pulled in. It is best suited to columns with flows close to 1 mL/min, such as capillary columns. [7]

Figure: open or split interface

Advantages: any gas-producing source can be used; it is relatively low cost and easy to use.
Disadvantages: part of the sample leaves through the split, and the split changes as the flow changes. A split system is preferred when the detector is sensitive to trace amounts of analyte and there is concern about overloading the column. [7]

4.1.5 Capillary direct interface
This coupling consists of having the capillary column directly entering the spectrometer source through a set of vacuum-sealed joints. Here pumping is not a problem because the capillary is very long; a length of at least 1.5 m is necessary for a column with an inside diameter of 0.25 mm. [9] If the GC is limited to capillary columns only, the MS can actually use all of the column effluent. [7] In the split inlet described above, part of the carrier gas flow is directed to purge the inlet of any sample following injection (the septum purge), and yet another portion of the flow is directed through the split vent in a set ratio known as the split ratio. [6]
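Since the split ratio fixes how much of an injection actually reaches the column, the arithmetic is worth making explicit. A minimal sketch; the 100 ng injection and the 50:1 ratio are illustrative values, not figures from the source.

    def on_column_fraction(split_ratio):
        """In split injection the carrier flow divides between split vent and
        column in the ratio split_ratio:1, so the column receives
        1 / (split_ratio + 1) of the injected sample."""
        return 1.0 / (split_ratio + 1.0)

    injected_ng = 100.0   # hypothetical injected amount
    ratio = 50.0          # a common 50:1 split setting
    print(f"on-column amount: {injected_ng * on_column_fraction(ratio):.1f} ng")  # ~2.0 ng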
Figure: capillary direct interface

Advantages: a low-cost, simple device, with no dead volume and no selectivity.
Disadvantages: it limits the flow range at which the column can be used, limits the column ID that can be used, and part of the column is lost because it serves as a flow restrictor. [7]

4.2. Vacuum system
For the MS process to work, it must be conducted under vacuum. The major reason for this is to increase the mean free path: the average distance that an ion or molecule travels before colliding with another ion or molecule. A long mean free path ensures predictable, reproducible, highly sensitive and reliable mass analysis (a numerical sketch follows at the end of this section). [7] A vacuum is also required for the detector to work: detectors are designed to use the vacuum as an insulator, since large voltages are used in the MS. Operating the detector in the absence of the vacuum can cause severe damage, and most instruments prevent operation if the vacuum is not high enough. The vacuum is produced by a combination of two pumps in a two-stage arrangement: rotary pumps produce a rough vacuum of roughly 10^-2 to 10^-4 torr, and turbomolecular or diffusion pumps then work in the range of 10^-5 torr. In effect, these pumps work like compressors run in reverse.

4.2.1 Turbomolecular pump
It relies on a series of blades, or air foils, that deflect the gas. It is able to produce a clean vacuum in a few hours and is reliable. Disadvantages: it is expensive, has a short lifetime and can become noisy.

Figure: turbomolecular pump

4.2.2 Oil diffusion pumps
This is another important type of pump that produces a high vacuum. These are reliable, maintenance-free and quiet, but they take a long time to pump down, and with a poor design oil can enter the vacuum system. [7]

Figure: oil diffusion pumps

4.3. Ionization
A number of ionization techniques are available.

Figure: types of ionization

4.3.1 Types of ionization
After the molecules travel the length of the column, pass through the transfer line and enter the mass spectrometer, they are ionized by one of various methods, with typically only one method being used at any given time. Once the sample is fragmented it will then be detected, usually by an electron multiplier, which essentially turns the ionized mass fragment into an electrical signal. The ionization technique chosen is independent of using full scan or SIM. [6]

4.3.1.1 Electron Ionization
By far the most common, and perhaps standard, form of ionization is electron ionization (EI). The molecules enter the MS (the source is a quadrupole, or the ion trap itself in an ion trap MS), where they are bombarded with free electrons emitted from a filament, not unlike the filament one would find in a standard light bulb. The electrons bombard the molecules, causing them to fragment in a characteristic and reproducible way. This hard ionization technique results in the creation of more fragments of low mass-to-charge ratio (m/z) and few, if any, ions approaching the molecular mass. Hard ionization is considered by mass spectroscopists to be ionization by electron bombardment of the molecule, whereas soft ionization is charging by molecular collision with an introduced gas. The molecular fragmentation pattern is dependent upon the electron energy applied to the system, typically 70 eV (electron volts). The use of 70 eV facilitates comparison of generated spectra with the National Institute of Standards and Technology (NIST, USA) library of spectra using algorithmic matching programs, and supports the use of standardized methods of analysis. [6, 10]
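The mean-free-path argument from the vacuum-system section above can be made quantitative. A minimal sketch assuming the standard hard-sphere kinetic-theory formula, lambda = kT / (sqrt(2) * pi * d^2 * p); the 3.7 angstrom collision diameter and 200 °C source temperature are assumed values, not figures from the source.

    import math

    K_B = 1.380649e-23      # Boltzmann constant, J/K
    TORR_TO_PA = 133.322    # pascals per torr

    def mean_free_path(p_torr, temp_k=473.0, d=3.7e-10):
        """Hard-sphere mean free path in metres:
        lambda = k*T / (sqrt(2) * pi * d**2 * p)."""
        p_pa = p_torr * TORR_TO_PA
        return K_B * temp_k / (math.sqrt(2.0) * math.pi * d**2 * p_pa)

    for p in (760.0, 1e-2, 1e-5):   # atmosphere, rough vacuum, analyser vacuum
        print(f"{p:10.0e} torr -> mean free path {mean_free_path(p):10.3e} m")

At atmospheric pressure the mean free path under these assumptions is about a tenth of a micrometre, while at 10^-5 torr it reaches metres, longer than the analyser tube, which is why ions can travel from source to collector without collisions.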
Figure: EI graph
Figure: EI source

4.3.1.2 Chemical Ionization
In chemical ionization (CI) a reagent gas, typically methane or ammonia, is introduced into the mass spectrometer. Depending on the technique chosen (positive CI or negative CI), this reagent gas will interact with the electrons and the analyte and cause a soft ionization of the molecule of interest. A softer ionization fragments the molecule to a lower degree than the hard ionization of EI. One of the main benefits of using chemical ionization is that a mass fragment closely corresponding to the molecular weight of the analyte of interest is produced.

Figure: CI source

Positive Chemical Ionization
In Positive Chemical Ionization (PCI) the reagent gas interacts with the target molecule, most often by proton exchange. This produces the protonated molecular species in relatively high amounts.

Negative Chemical Ionization
In Negative Chemical Ionization (NCI) the reagent gas decreases the impact of the free electrons on the target analyte. This decreased energy typically leaves the fragment in great supply. [6, 7]

Figure: comparison of spectra obtained from EI and CI

4.4 Mass analyzer
A mass analyzer, or filter, is the portion of the mass spectrometer responsible for resolving different mass fragments. Typically all ions move with the same kinetic energy (1/2 mv^2), and some aspect of these accelerated ions is exploited as the basis for resolving them.

4.4.1 Types of mass analyzers
The main types of mass analyzer are magnetic, electrostatic, time-of-flight, the quadrupole mass filter and quadrupole ion storage (the ion trap). The last two types are the most commonly used in GC/MS systems, although time-of-flight is making a comeback. [7, 6, 10]

4.4.1.1 Quadrupole mass filter
It consists of four rods.

Figure: rods of quadrupole

The rods operate in pairs (x or y) and each pair carries a voltage. Only ions of the proper m/z value can successfully traverse the entire filter (along the z axis): the high-pass pair filters out ions with too low an m/z, and the low-pass pair filters out ions with too high an m/z value. [7]

Figure: schematic of quadrupole

4.5 Detector
Ion detection: once the ions are separated, we need a way to convert them to a usable response. An electron multiplier is the most common type of detector used. It is a continuous-dynode type of detector whose inner surface is coated with an electron-emissive material. When struck by an ion, electrons are ejected; because of the increasing potential, the electrons are accelerated, and when they strike another surface even more electrons are ejected. This amplifies the signal significantly. [7]

Figure: detector

4.6 Data system
The data system is the heart of the GC/MS system. Without it there would be no way to deal with the vast amount of information that even a single GC/MS analysis produces. Inexpensive, fast desktop computers are the single most important advance in GC/MS. [7]

Figure: data system

4.7 Method of analysis
The primary goal of instrumental analysis is to quantify an amount of substance. This is done by comparing the relative concentrations among the atomic masses in the generated spectrum. Two kinds of analysis are possible: comparative and original. Comparative analysis essentially compares the given spectrum to a spectrum library to see if its characteristics are present for some sample in the library. This is best performed by a computer, because there are a myriad of visual distortions that can take place due to variations in scale.
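The computer-based comparative matching just described can be sketched in a few lines. The spectra here are normalized to their base peak (the 100% convention described in the next paragraph) and compared with a cosine similarity score; the m/z values and intensities are invented for illustration, and cosine scoring is one common choice, not necessarily the algorithm any particular vendor uses.

    import math

    def normalize(spectrum):
        """Scale intensities so the tallest (base) peak is 100%."""
        base = max(spectrum.values())
        return {mz: 100.0 * i / base for mz, i in spectrum.items()}

    def cosine_similarity(a, b):
        """Dot-product match score between two peak dictionaries (0 to 1)."""
        mzs = set(a) | set(b)
        dot = sum(a.get(mz, 0.0) * b.get(mz, 0.0) for mz in mzs)
        norm = (math.sqrt(sum(v * v for v in a.values()))
                * math.sqrt(sum(v * v for v in b.values())))
        return dot / norm if norm else 0.0

    unknown = normalize({43: 999, 58: 230, 86: 120})       # hypothetical spectrum
    library = {"entry A": normalize({43: 100, 58: 27}),    # hypothetical entries
               "entry B": normalize({57: 100, 71: 45})}
    best = max(library, key=lambda name: cosine_similarity(unknown, library[name]))
    print(best, round(cosine_similarity(unknown, library[best]), 3))

Correlating the match score with the retention time, as the text notes, is what lets the computer relate the two kinds of evidence at once.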
Computers can also simultaneously correlate more data (such as the retention times identified by GC) to relate certain data more accurately. Another method of analysis measures the peaks in relation to one another: the tallest peak is assigned 100% of the value, and the other peaks are assigned proportionate values; all peaks above 3% are reported. The total mass of the unknown compound is normally indicated by the parent peak. The value of this parent peak can be used to fit a chemical formula containing the various elements believed to be in the compound. The isotope pattern in the spectrum, which is unique for elements that have many isotopes, can also be used to identify the various elements present. Once a chemical formula has been matched to the spectrum, the molecular structure and bonding can be identified, and must be consistent with the characteristics recorded by GC/MS. Typically, this identification is done automatically by programs that come with the instrument, given a list of the elements that could be present in the sample. A full-spectrum analysis considers all the peaks within a spectrum. Conversely, selective ion monitoring (SIM) only monitors selected peaks associated with a specific substance, on the assumption that at a given retention time a set of ions is characteristic of a certain compound. This is a fast and efficient analysis, especially if the analyst has previous information about a sample or is only looking for a few specific substances. As the amount of information collected about the ions in a given gas chromatographic peak decreases, the sensitivity of the analysis increases. So SIM analysis allows a smaller quantity of a compound to be detected and measured, but the degree of certainty about the identity of that compound is reduced. [6]

5. Applications

5.1. Environmental Monitoring and Cleanup
GC-MS is becoming the tool of choice for tracking organic pollutants in the environment. The cost of GC-MS equipment has decreased significantly, and its reliability has increased at the same time, which has contributed to its increased adoption in environmental studies. There are some compounds for which GC-MS is not sufficiently sensitive, including certain pesticides and herbicides, but for most organic analysis of environmental samples, including many major classes of pesticides, it is very sensitive and effective.

5.2. Criminal Forensics
GC-MS can analyze the particles from a human body in order to help link a criminal to a crime. The analysis of fire debris using GC-MS is well established, and there is even an established American Society for Testing and Materials (ASTM) standard for fire debris analysis. GC-MS/MS is especially useful here, as samples often contain very complex matrices and results used in court need to be highly accurate.

5.3. Law Enforcement
GC-MS is increasingly used for detection of illegal narcotics, and may eventually supplant drug-sniffing dogs. It is also commonly used in forensic toxicology to find drugs and/or poisons in biological specimens of suspects, victims, or the deceased.

5.4. Security
A post-September 11 development, explosive detection systems have become a part of all US airports. These systems run on a host of technologies, many of them based on GC-MS. There are only three manufacturers certified by the FAA to provide these systems, one of which is Thermo Detection (formerly Thermedics), which produces the EGIS, a GC-MS-based line of explosives detectors.
The other two manufacturers are Barringer Technologies, now owned by Smiths Detection Systems, and Ion Track Instruments, part of General Electric Infrastructure Security Systems.

5.5. Food, Beverage and Perfume Analysis
Foods and beverages contain numerous aromatic compounds, some naturally present in the raw materials and some formed during processing. GC-MS is extensively used for the analysis of these compounds, which include esters, fatty acids, alcohols, aldehydes, terpenes etc. It is also used to detect and measure contaminants from spoilage or adulteration which may be harmful, and which are often controlled by governmental agencies, for example pesticides.

5.6. Astrochemistry
Several GC-MS instruments have left Earth. Two were brought to Mars by the Viking program. Venera 11 and 12 and Pioneer Venus analysed the atmosphere of Venus with GC-MS. The Huygens probe of the Cassini-Huygens mission landed a GC-MS on Saturn's largest moon, Titan. The material in the comet 67P/Churyumov-Gerasimenko will be analysed by the Rosetta mission with a chiral GC-MS in 2014.

5.7. Medicine
In combination with isotopic labeling of metabolic compounds, GC-MS is used for determining metabolic activity. Most applications are based on the use of 13C as the label and the measurement of 13C/12C ratios with an isotope ratio mass spectrometer (IRMS), an MS with a detector designed to measure a few select ions and return values as ratios. [6]
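The 13C/12C ratio measurements mentioned above are conventionally reported in delta notation. A minimal sketch; the VPDB reference ratio below is the commonly quoted value (an assumption here, not a figure from the source), and the sample ratio is invented.

    VPDB_R = 0.0111802   # assumed 13C/12C ratio of the VPDB reference standard

    def delta_13c(r_sample, r_reference=VPDB_R):
        """Delta notation in per mil: (R_sample / R_reference - 1) * 1000."""
        return (r_sample / r_reference - 1.0) * 1000.0

    print(f"delta13C = {delta_13c(0.0112):+.1f} per mil")   # a slightly 13C-enriched sample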

Tuesday, August 20, 2019

The multiple emulsions

Introduction:
Seifriz began his pioneering work on multiple emulsions in 1925, and that work is regarded as the foundation of later research. Multiple emulsions are complicated systems, often described as "emulsions of emulsions" (Garti, 1996). In the outer continuous phase, the droplets of the dispersed phase, called globules, contain even smaller dispersed droplets; the globules are separated from each other in the external continuous phase by a film of the oil phase, and in the inner phase the droplets are separated from each other by the oil phase (Benichou et al., 2006). It is widely believed that there are two primary types of multiple emulsions: water-in-oil-in-water (W1/O/W2) emulsions, in which a W/O emulsion is dispersed in another aqueous phase (W2), and oil-in-water-in-oil (O1/W/O2) emulsions, in which an O/W emulsion is dispersed in another oil phase (O2). In previous studies, water-in-oil-in-water (W1/O/W2) multiple emulsions have played a vital role in multiple emulsion research, because the applications of W1/O/W2 multiple emulsions are important in the food industry, and because it is easier to select hydrophilic emulsifiers that are safe for health as stabilizers in the preparation of such emulsions (Pays et al., 2002). As shown in Fig. 1, taking water-in-oil-in-water (W1/O/W2) double emulsions as an example, these are composed of three distinct phases: an internal aqueous phase (W1), containing many water-soluble ingredients; an oil phase (O), in which the various internal aqueous droplets are encapsulated; and an external aqueous phase (W2), in which the oil phase is included (Garti, 1996).

Applications of multiple emulsions
The potential applications are so numerous that research in this promising area can bring beneficial effects, especially in product areas such as drug-delivery systems, cosmetics, and foods. Water-in-oil-in-water (W1/O/W2) emulsions allow the encapsulation of active ingredients that are soluble in the internal aqueous phase; it is thus possible to mask the smell of some substances, remove toxic substances, or select appropriate conditions to realize controlled release of the active ingredients through a suitable emulsification process (Kanouni et al., 2002). On the basis of slow and sustained release of active ingredients from an internal reservoir into the external aqueous phase, the main function of double emulsions is to act as an internal reservoir that entraps chosen ingredients in the inner confined space, protecting them against oxidation, light and enzymatic degradation. As a result, sensitive and active molecules can be protected from the external phase by this internal reservoir. In addition, because the release of water or ingredients can be observed in experiments, the active ingredients will exist partly in the internal aqueous phase, partly in the oil phase and occasionally in the external phase (Benichou et al., 2004). In the food industry, double emulsions provide some advantages because of their capability to encapsulate water-soluble substances, such as flavours or active ingredients, which are then slowly released from the internal compartments. Additionally, food-grade additives that are soluble in the internal aqueous phase should be selected, because these consumer products will be used in our daily lives.
Furthermore, as demands on food quality have developed, low-calorie and reduced-fat products have come onto the food market (Muschiolik, 2007; Van der Graaf et al., 2005). In the agrochemical industry, it has become increasingly difficult for scientists to produce products, such as pesticides, that are effective and at the same time friendly to the environment. According to ElShafei et al. (2009), the idea of multiple emulsions has been successfully applied to agricultural products, and the resulting multiple emulsions are relatively stable even on storage at room temperature and 4 °C for 30 days. As governments pay increasing attention to safe and environmentally friendly products, research in this direction has drawn public attention. Until now, no pharmaceutical multiple emulsions have been brought to market, because the potential emulsifiers used in multiple emulsions are only available in cosmetic grade and cannot yet be applied at pharmaceutical grade (Schmidts et al., 2009). In cosmetics, multiple emulsions offer the possibility of combining incompatible substances in one product in order to provide more favourable functions (Vasiljevic et al., 2005). Multiple emulsions also have the potential to change the commonly oily feel of hand cream to an aqueous texture. The advance of cosmetic products has opened up more space for development and greater profits (Kanouni et al., 2002).

Methods of preparation:
Scientists have done considerable research on multiple emulsions, as their applications provide more convenience and bring better consumer products in many areas. Because double emulsions have a more complex structure and are even more thermodynamically unstable than single emulsions, they tend to be difficult to prepare, especially on an industrial scale. The difficulties of preparing multiple emulsions have drawn scientists' attention, so much research has been poured into this area. In general, there are single-step and two-step emulsification methods for preparing multiple emulsions (Allouche et al., 2003). Because a multiple emulsion can be considered a mesophase between O/W and W/O emulsions, the one-step method of preparation amounts to a combination of the two different types of emulsion and a surfactant phase, which is very difficult to control, so this method is not chosen in preparation (Matsumoto, 1987; Mulley and Marland, 1980). On the basis of previous studies, the two-step emulsification process is considered the most common and better-controlled method. First of all, W1/O emulsions are much easier to prepare, and it is also easy to control their various characteristics, as the parameters involved are relatively limited. Secondly, in the second step, the complex structure and the many variable quantities make the process relatively difficult to control or regulate. Many methods have been used to improve the preparation of multiple emulsions, and adding suitable emulsifiers is regarded as one of the most significant. In general, two kinds of emulsifiers are introduced in the preparation of multiple emulsions, according to their different functions and affinities: a hydrophobic emulsifier (Emulsifier I), which is used in the oil phase, and a hydrophilic emulsifier (Emulsifier II), which is used in the external aqueous phase (Garti, 1996). The hydrophobic emulsifier is designed to stabilize the interface of the W1/O internal emulsion, and the hydrophilic emulsifier acts as a stabilizer at the external interface of the W1/O/W2 emulsion.
The main function of the emulsifiers is to enhance the stability of multiple emulsions during preparation and even during long-term storage. The two-step preparation process is shown in Fig. 2. In the first step, the primary W/O emulsion is prepared under high-shear conditions (homogenization) to obtain small droplets, whereas the second step is carried out with less shear in order to avoid rupturing the internal droplets, because the second step is much more difficult to control than the first (van der Graaf et al., 2004). In Kanouni et al.'s (2002) earlier work, the first step usually used an Ultra-Turrax mixer at relatively high speed to prepare a W1/O emulsion from the internal aqueous phase and an appropriate oil phase with a suitable low-HLB emulsifier; in the second step, the W1/O/W2 multiple emulsion was produced by adding proper high-HLB emulsifiers using an Ultra-Turrax mixer or a mechanical agitator at a relatively lower rotation speed. In previous studies, stirring apparatus, rotor-stator systems and high-pressure homogenizers were considered the most common and conventional emulsification devices (Schubert and Armbruster, 1992); their functions and disadvantages are tabulated in Table 1. There are several drawbacks to these existing methods of production (Williams et al., 1998). First of all, it is not easy to control the droplet size and droplet size distribution of the final multiple emulsion products. Secondly, it is difficult to scale up, because different classes of product are generated from batch to batch under the same manufacturing conditions, which is one of the main reasons why such products cannot be applied in industry. Moreover, van der Graaf et al. (2005) showed that conventional methods are not feasible for the preparation of double emulsions, because high shear stresses can rupture the internal emulsions, which should be avoided in the secondary emulsification (van der Graaf et al., 2005). Different kinds of emulsification devices can generate multiple emulsions with different characteristics, such as droplet size, encapsulation efficiency, release rate, and so on. What has interested scientists most recently is the search for novel approaches to improve emulsification equipment in order to generate more stable and more nearly ideal multiple emulsions, and much attention has been paid to improving the second step by using various pieces of equipment and novel methods. Nakashima et al. (1991) point out that membrane emulsification is widely accepted as one of the newer methods for the production of emulsions. This technique is increasingly attractive because of its low energy consumption, its better control of droplet size and droplet size distribution and, especially, the mildness of the process, which makes it particularly suitable for the second step to prevent rupture of the double emulsion droplets (van der Graaf et al., 2005). Joscelyne and Tragardh (1998) demonstrate that small droplets are favoured by higher concentrations of emulsifiers and by high wall shear stress through a membrane with small pore size. As shown in Fig. 3, because of the mild conditions in the membrane emulsification process, it is easier to produce small droplets and to protect the multiple emulsions from rupture at the membrane, which is especially useful in the second step of emulsification.
The system used ceramic membranes of different average pore sizes to prepare relatively small droplets in multiple emulsions, because such emulsions are more stable. Membrane technology can be applied to many products, such as oil-in-water (O/W) emulsions, UHT products and so on (Joscelyne and Tragardh, 1998). However, low flux of the dispersed phase is the main and visible drawback of membrane emulsification (Charcosset et al., 2004), caused by the high hydraulic resistance of conventional membranes. In general, two methods are commonly used in membrane emulsification: cross-flow membrane emulsification and pre-mix membrane emulsification (Suzuki et al., 1998). Taking pre-mix membrane emulsification as an example, as shown in Fig. 4, the most significant advantage of this method is that it can provide high flux, which improves the membrane emulsification process. Various novel methods have been reported to address the disadvantages of membrane emulsification (Gijsbertsen-Abrahamse et al., 2004). For example, with advances in nano- and microengineering, it is possible to produce membranes with a low hydraulic resistance, called microsieves (Van Rijn et al., 2005). Microsieves are inorganic membranes that offer a very thin selective layer, highly controlled pore size and shape, and smooth surfaces. Fig. 5 shows SEM images of the pore morphology of a silicon nitride microsieve surface. Microsieve membranes have been studied in the crossflow filtration of bovine serum albumin (BSA) solutions, where flux decline occurs (Gironès et al., 2006). According to Sugiura et al. (2003), monodisperse multiple emulsions, which provide relatively stable conditions, are regularly applied in industry and in basic studies; because they are easier to observe, monodisperse emulsions are regarded as an effective tool for determining the resistance of an emulsion to coalescence, and for observing how active matter passes through the oil film by diffusion (Sugiura et al., 2003). Furthermore, a microfabricated channel array has been pointed out as a promising method for preparing monodisperse emulsion droplets (Kawakatsu et al., 1997). This type of emulsification technique is called microchannel (MC) emulsification, and it is regarded as a novel method for preparing monodisperse emulsions. Owing to its advantages, it is a promising technique for improving the stability of multiple emulsions (Kawakatsu et al., 2001; Sugiura et al., 2001). Nakagawa et al. (2004) suggest that monodisperse surfactant-free microcapsules can be produced by MC emulsification using gelatin. Of course, this technique needs further study to improve its low production rate.

Improvements in stability of multiple emulsions
In practice, significant problems may arise: not only the thermodynamic instability of emulsions, but also many destabilization phenomena, such as flocculation, coalescence and creaming, contribute to unstable emulsions (Vasiljevic et al., 2005). In order to protect the emulsions from flocculation or coalescence, two methods have been introduced to keep the droplets apart: one is increasing the viscosity of the external phase, the other is creating an energy barrier. The DLVO theory is commonly applied to explain colloidal stability: as the distance between two colloidal particles increases, the resulting total potential ranges from negative to positive because of the coexistence of an attraction potential and a repulsion potential (Friberg, 1997).
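The DLVO picture invoked here can be written down explicitly. As a sketch using the standard equal-sphere, low-potential approximations (these closed forms are textbook DLVO results, not formulas given in the source), the total interaction potential at surface separation h is

    \[
    V_T(h) = V_A(h) + V_R(h), \qquad
    V_A(h) \approx -\frac{A\,a}{12\,h}, \qquad
    V_R(h) \approx 2\pi \varepsilon a \psi_0^{2}\, e^{-\kappa h}
    \]

where A is the Hamaker constant, a the droplet radius, psi_0 the surface potential and 1/kappa the Debye screening length. The van der Waals term V_A supplies the attraction, and the electrical double-layer term V_R supplies the repulsive energy barrier mentioned above.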
Various factors may affect the stability of multiple emulsions, including the method of preparation, the oil type, and the type and concentration of the emulsifier (Vasiljevic et al., 2005). On the basis of fundamental experimental data, relatively suitable and effective conditions are chosen for preparing multiple emulsions. Much research has gone into improving the stability of multiple emulsions, because thermodynamic instability arises not only during preparation but also during storage or on exposure to environmental stresses such as mechanical forces, thermal processing, freezing or dehydration. With the techniques now developed, we can observe or measure the leakage of the inner aqueous phase (W1) into the outer phase and the destabilization properties of the emulsions. Four mechanisms have been proposed to explain the instability of W1/O/W2 multiple emulsions: (1) coalescence of the inner aqueous droplets; (2) coalescence of the oil droplets; (3) rupture of the oil film; and (4) transport of water and ingredients through the oil layer (Appelqvist et al., 2007; Florence and Whitehill, 1981; van der Graaf et al., 2005). Under real conditions more than one mechanism may operate in a given multiple emulsion, with different results in different situations; determining the primary mechanism in a particular system should depend on experimental data and convincing analysis. What remains is to find more reasonable methods to solve the problem of thermodynamic instability in multiple emulsions. Three kinds of approach aimed at improving stabilization and slowing solute release have been listed as follows (Davis et al., 1985): (1) stabilization of the inner W1/O emulsion, for example by the addition of various emulsifier combinations (Apenten and Zhu, 1996; Shima et al., 2004; Su et al., 2006); (2) stabilization of the oil phase by choosing a suitable oil type and adding proper carriers, complexants and viscosity builders, for instance by solidifying the oil phase or modifying its solubility and polarity to make it less water soluble (Tedajo et al., 2001); and (3) stabilization of the external aqueous phase, for example by increasing the viscosity of the outer aqueous phase (Özer et al., 2000). Although many strategies have been categorized above, a majority of them are not suitable for the food industry, because they are not easily scaled up or because they involve non-food-grade ingredients entrapped in the multiple emulsions, which may have a harmful influence on human health. So there remains considerable room for research into methods of improving the stability of multiple emulsions (O'Regan and Mulvihill, 2009). In general, many factors contribute to the improvement of multiple emulsion stability, as research has determined the main causes of thermal instability and of the flocculation, coalescence and creaming phenomena. The nature and internal properties of the surfactants or emulsifiers play a vital role in solving the problem: the stability of multiple emulsions has been shown to depend on emulsifier interfacial film strength, ionic strength, various additives, and concentration (Opawale et al., 1998). According to Vasiljevic et al. (2005), when the concentration of emulsifier in the oil phase is higher, the multiple emulsions have smaller droplet size, higher viscosity and more elastic characteristics.
Moreover, changing the concentration of surfactants changes the amount of retinol released from silica particles. In addition, when different polymers were added to the aqueous phase, the encapsulation efficiency of retinol also changed (Hwang et al., 2005). The process of multiple emulsion formation and the various destabilization processes can be followed by video microscopy (Ficheux et al., 1998). A characteristic dimpled structure signals the deformation of the multiple droplets and coalescence of the internal dispersed phase under coverslip pressure. If the multiple emulsions possess relatively high stability, such a structure persists under long observation in the presence of adequate concentrations of surfactants and additives. So formation of the dimpled structure is linked with interfacial film strength and long-term multiple emulsion stability (Jiao et al., 2002). The long-term stability of a double emulsion requires a balance between the Laplace and osmotic pressures among the droplets in W1, because a stable W1/O emulsion is a fundamental and significant step towards preparing a stable W1/O/W2 double emulsion. Garti (1996) illustrates that the concept of a weighted hydrophile-lipophile balance (HLB) is important because its value is linked with the droplet size, the number of W1 droplets dispersed in the inner phase and the stability of the W/O/W multiple emulsion. These properties are so significant in preparing relatively stable multiple emulsions that the weighted HLB value is considered a potential reference for selecting the optimal type of emulsifiers in forming multiple emulsions. For the first step of preparation, HLB(I) stands for the HLB value of the hydrophobic emulsifier and CI means the weight percentage of the hydrophobic emulsifier in the fundamental W1/O emulsion; for the second step, HLB(II) stands for the HLB value of the hydrophilic emulsifier and CII means the weight percentage of the hydrophilic emulsifier in the W1/O/W2 multiple emulsion (the weighted average these definitions imply is sketched at the end of this section). It was observed that using a combination of an amphoteric high-HLB surfactant and an anionic surfactant can produce a stable system (Kanouni et al., 2002). The inner phase is demonstrably better stabilized by minimizing the size of the droplets and forming microemulsion droplets or microsphere particles, or by applying more potent surfactants in order to seal the active ingredients at the interface (O'Regan and Mulvihill, 2009). Choosing optimal surfactants has a positive effect on controlling particle size in multiple emulsions. Khoee and Yaghoobian (2008) propose that the mean diameters of nanocapsules containing penicillin-G are linked with the properties of the surfactants; that is to say, the different types or contents of surfactant used in the formation of multiple emulsions can result in different droplet sizes. Heldt et al. (2000) point out that changing the lecithin/SXS ratio affects the average size of the corresponding vesicles in the oil-water emulsion; here egg lecithin is considered the hydrophobic substance, while sodium xylenesulfonate (SXS) acts as the hydrophilic one. As the ratio goes up, the average vesicle size increases correspondingly. Stability can be improved by providing a suitable stabilizer, because the surfactants act as film formers and as a barrier to release at the internal interface (Khoee and Yaghoobian, 2008).
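The weighted HLB defined by the quantities above can be written out explicitly. The source describes the terms without printing the formula itself, so the following weighted-average form is a sketch of Garti's (1996) concept rather than a quotation of it:

    \[
    \overline{\mathrm{HLB}} =
    \frac{\mathrm{HLB(I)}\cdot C_{I} + \mathrm{HLB(II)}\cdot C_{II}}{C_{I} + C_{II}}
    \]

Read this way, a low-HLB inner emulsifier at a high weight fraction pulls the weighted value down towards W/O-favouring values, while the high-HLB outer emulsifier pulls it back up, which is consistent with the weighted value tracking droplet size and overall W1/O/W2 stability as described above.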
Two charged biopolymers, whey protein isolate (WPI) and enzymatically modified pectins, interact in aqueous solution to form a charge-charge complex that acts as a hydrophilic polymeric steric stabilizer, improving multiple emulsion stability; the conditions can be regulated to obtain the most stable result. For example, since pH can determine the size of the complex, at pH 6 the most stable double emulsions are obtained, with the smallest droplet size, the lowest creaming, the highest yield, and minimized water transport (Lutz et al., 2009). Henry et al. (2009) studied six emulsifiers in their experiments; their results show that as the amount of emulsifier increases, the occurrence of coalescence goes down. Furthermore, droplet size depends on both break-up and re-coalescence events in emulsification; for example, when the surfactant concentration is lower, the droplet size tends to be the result of multiple break-up events. The experimental results show that the frequency of droplet coalescence is decreased to a minimum when preparation is carried out at an optimal surfactant concentration, which balances the formation of the smallest possible droplets with relative stability during preparation and long-term storage. On the basis of experimental results analyzed by equilibrium phase diagrams as well as observed through polarization microscopy, Liu and Friberg (2009) conclude that multiple emulsions in which the surfactant can form a liquid crystal with water are more stable than counterparts prepared under the same conditions but in which no liquid crystal forms with the surfactant (Liu and Friberg, 2009). Garti and Aserin (1996) propose that macromolecules together with monomeric surfactants can serve as steric stabilizers to improve the stability of multiple emulsions. Synthetic polymeric surfactants form an ideal interfacial barrier that improves thermodynamic stability and entrapment, which is very helpful in reducing the release rate of entrapped additives and in preparing smaller double emulsions with long-term stability. Taking WPI-polysaccharide conjugates as an example, compared with monomeric surfactants used alone, the application of polymeric emulsifiers results in better encapsulation and controlled release of additives (Benichou et al., 2006).

Transport mechanisms in multiple emulsions
Various possible mechanisms have been proposed to interpret how substances transport through the oil phase. Oil-soluble substances simply transport through the oil phase by diffusion, which serves as the controlling mechanism. Many factors contribute to the transport rate, such as the properties of the oil phase, the nature of the ingredients, and the conditions of the aqueous phase (Chang et al., 1987). In previous studies, it was found that water and water-soluble substances can easily migrate through the oil phase. Kita et al. (1977) demonstrate that two possible mechanisms can be applied to interpret the phenomenon of transport: (1) reverse micelle transport; (2) diffusion across a very thin lamella. Cheng et al. (2006) demonstrate that both Cl- and Ag+ can transport through a thick oil film, by observing and measuring the formation of AgCl precipitate in the W1/O/W2 multiple emulsion. Ions cannot transport through the oil film which is very thin (