The Journal of American Business Review, Cambridge
Vol. 3* Number 1 * December 2014
The Library of Congress, Washington, DC * ISSN 2167-0803
Most Trusted. Most Cited. Most Read.
All submissions are subject to a double blind peer review process.
The primary goal of the journal is to give academics and professionals from business-related fields around the world a single venue in which to publish their work. The Journal of American Business Review, Cambridge brings together academics and professionals from all business-related fields to interact with colleagues inside and outside their own disciplines, providing opportunities both to publish research and to read the work of others. The Journal of American Business Review, Cambridge is a refereed academic journal that publishes scientific research findings in its field under ISSN 2167-0803, issued by the Library of Congress, Washington, DC. The journal meets the quality and integrity requirements of applicable accreditation agencies (AACSB, regional) and journal evaluation organizations to ensure that our publications provide authors with venues recognized by their institutions for academic advancement and academically qualified status. No manuscript will be accepted without the required format, and all manuscripts should be professionally proofread before submission; www.editavenue.com may be used for professional proofreading and editing.
The Journal of American Business Review, Cambridge is published twice a year, in Summer and in December. E-mail: firstname.lastname@example.org; website: www.jaabc.com. Requests for subscriptions, back issues, and changes of address, as well as advertising inquiries, can be made via the e-mail address above. Manuscripts and other materials of an editorial nature should be directed to the Journal's e-mail address above. Address advertising inquiries to the Advertising Manager.
Copyright: All rights reserved. No part of the material protected by this copyright notice may be reproduced or utilized in any form or by any means, including photocopying and recording, or by any information storage and retrieval system, without the written permission of JAABC journals. You are hereby notified that any disclosure, copying, distribution, or use of any information (text, pictures, tables, etc.) from this website or any linked web pages is strictly prohibited. To request permission or purchase article(s): email@example.com
Copyright 2000-2019 All Rights Reserved
The Economic Impacts of Unmanned Aircraft Systems in California
Dr. Javad Gorjidooz, Embry-Riddle Aeronautical University, Prescott, AZ
Dr. Cindy Greenman, Embry-Riddle Aeronautical University, Prescott, AZ
Unmanned Aircraft Systems (UAS) have long been in operation, but with drastic improvements in the reliability, speed, and efficiency of information processing, UAS have become a popular tool for a wide range of applications. As a consequence, the global UAS market is anticipated to reach about $50 billion by 2018 as military, civil, and commercial applications continue to develop (1). Apart from the benefits of lower acquisition and operating costs, unmanned aircraft systems also eliminate the need for an onboard crew, making them more operationally advantageous. The unmanned aerial vehicle most widely recognized by the general public is the General Atomics Predator, first introduced in 1995 (2). Originally developed as an observation aircraft, the Predator is able to fire on enemies, guided remotely by soldiers far from any potential harm. The civilian applications of UAS can be grouped into four major categories: commercial, civil, security, and scientific. There are growing indications that different industries are looking at an ever greater variety of areas in which to deploy them. Unmanned Aircraft Systems have been adapted for civilian uses at a slower rate, but the combination of greater flexibility and lower capital and operating costs could allow UAS to be used in agriculture, oil and gas exploration, border security, disaster management, law enforcement, telecommunications, weather monitoring, aerial imaging/mapping, television news coverage, and the airing of sporting events. Currently, the main impediments to commercial and civil development of Unmanned Aircraft Systems are the lack of a regulatory apparatus and privacy issues. The expected outcome of this research is a predictive model that takes into account all available data in determining the economic impacts of UAS. This study introduces Markov Input-Output Cluster Analysis and the IMPLAN software as an integral part of the economic impact analysis (EIA). California is the number one state in the U.S. 
in terms of anticipated expenditures on UAS. The expected outcome of this research is to determine the total economic impacts (in terms of job creation and economic growth) of UAS in the State of California once Unmanned Aircraft Systems are integrated into the National Airspace System. In the event that these regulations are delayed or not enacted, this study also estimates the jobs and financial opportunity lost to the economy because of this inaction. Although this study is specific to the State of California, UAS is a global phenomenon, and researchers around the world are interested in the commercialization of UAS and its economic benefits to local communities. This research can also be expanded to determine the total economic impacts of UAS on all states in the U.S. using the IMPLAN multipliers for each state. In addition, this research can be used as a tool to determine total economic impacts in any country using multiplier adjustment factors. Unmanned Aircraft Systems (UAS) have long been in operation but reached new heights recently when the Federal Aviation Administration (FAA) was required, by a bill that passed Congress in 2012 and was signed into law by President Obama, to integrate UAS into the National Airspace System (NAS) by 2015. As a result of this bill and the drastic improvement in the reliability, speed, and efficiency of information processing, the UAS market has grown significantly. The global UAS market is anticipated to reach about $50 billion by 2018 as military, civil, and commercial applications continue to develop (3). In addition, UAS are being considered as a cost-efficient mode of transportation for both military and commercial package delivery. In fact, the first combat cargo delivery by an unmanned aircraft system took place in 2011, to a U.S. Marine station in Afghanistan. The U.S. Army is currently seeking proposals for a UAS platform to pick up wounded soldiers and evacuate them from the battlefield. 
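The IMPLAN multipliers mentioned above rest on standard input-output accounting: a direct spending shock is converted into total (direct plus indirect) output through the Leontief inverse. The sketch below is purely illustrative; the three-sector coefficient matrix and spending figures are hypothetical, not IMPLAN data.

```python
import numpy as np

# Hypothetical 3-sector technical-coefficients matrix A (illustrative only,
# not IMPLAN data): A[i, j] = dollars of sector i's output required per
# dollar of sector j's output.
A = np.array([
    [0.10, 0.05, 0.02],   # manufacturing
    [0.15, 0.10, 0.08],   # services
    [0.05, 0.12, 0.06],   # transportation
])

# Direct UAS-related spending shock by sector, in millions of dollars.
direct_spending = np.array([100.0, 40.0, 20.0])

# Leontief inverse (I - A)^-1 converts direct demand into total
# (direct + indirect) output requirements across all sectors.
leontief_inverse = np.linalg.inv(np.eye(3) - A)
total_output = leontief_inverse @ direct_spending

# The aggregate output multiplier summarizes the ripple effect.
output_multiplier = total_output.sum() / direct_spending.sum()
print(f"total output impact: ${total_output.sum():,.1f}M, "
      f"multiplier: {output_multiplier:.2f}")
```

Extending the study to other states, as the authors suggest, amounts to swapping in each state's own coefficient matrix and multipliers.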
According to Teal Group and Forecast International predictions, military applications of unmanned aerial vehicles (UAVs) will comprise about 18,000 units valued at $12.5-$13.6 billion by 2015. The unmanned aerial vehicle most widely recognized by the general public is the General Atomics Predator, first introduced in 1995 (4). Originally developed as a reconnaissance and observation aircraft, the Predator has been at the center of controversy as world and U.S. public opinion debate the role it has been adapted to: that of an offensive craft, able to fire on enemies, guided remotely by soldiers far from any potential harm. The civilian applications of UASs can be grouped into four major categories: commercial, civil, security, and scientific (5). There are growing indications that different industries are looking at an ever greater variety of areas in which to deploy them. Unmanned Aircraft Systems have been slower in adoption for civilian uses, but the combination of greater flexibility and lower capital and operating costs could allow UAS to be a disruptive technology in fields as diverse as urban infrastructure management, farming, and oil and gas exploration. Currently, the main impediments to commercial and civil development of the Unmanned Aircraft System are the lack of a regulatory structure and privacy issues. There has been tremendous growth in demand for unmanned aircraft systems (UASs), ranging from military to civilian usage. The Environmental Protection Agency has been using unmanned aircraft to monitor compliance with animal waste runoff regulations in Nebraska and Iowa. The concentrated numbers of these animals near frequently contaminated watersheds led the EPA to begin conducting flyovers to see whether waste was running off into rivers, lakes, or streams. This has led to complaints about privacy and due process, although aerial surveillance to monitor regulatory compliance has been held to pass constitutional muster before. 
The use of Unmanned Aerial Vehicles (UAVs) in border security missions around the world is also expanding rapidly (6). Nowadays, UAVs can have longer operational durations and require less maintenance. UAVs can be operated remotely, using more fuel-efficient technologies, with a minimum of human intervention and supervision. These aircraft can be deployed across a number of different terrains and may be less dependent on prepared runways. Some argue that the use of UAS in the future will be a more responsible approach to certain airspace operations from an environmental, ecological, and human risk perspective.
Bankruptcy Fraud: Has the Sarbanes-Oxley Act or the Dodd-Frank Act Reduced Its Occurrence?
Dr. Michael Ulinski, Professor, Pace University, Pleasantville, NY
Dr. Roy Girasa, Professor, Pace University, Pleasantville, NY
In this exploratory study, the researchers review the effects of the Sarbanes-Oxley and Dodd-Frank Acts on recent fraud cases involving bankruptcy. Statutory provisions governing bankruptcy fraud and examples of fraudulent schemes are described. Recent cases are examined and bankruptcy predictive models are explored. Adverse effects on stakeholders arise when the wasting away or outright theft of assets impedes the reorganization of entities trying to emerge from bankruptcy. Conclusions are reached and recommendations for further study are suggested. The Enron Corporation, an energy company headquartered in Houston, Texas, is the poster company for alleged fraudulent concealment of indebtedness and wrongful conduct by senior management, which resulted in one of the largest bankruptcy filings in U.S. history. With its complex financial statements, including the use of special purpose entities that enabled it to conceal sizable indebtedness, its demise led to convictions of its major officials and the dissolution of one of the top five global accounting firms. Its demise, and that of WorldCom and other corporate entities that engaged in fraudulent conduct, brought about the passage of the Sarbanes-Oxley Act and later the Dodd-Frank Act, which endeavored to prevent such conduct in the future. In this paper we discuss fraud, particularly in connection with bankruptcy filings and the possibility of a corporate entity's reemergence after bankruptcy. Bankruptcy fraudsters use tactics ranging from financial statement fraud to cover criminal activities, such as hiding assets in bankruptcy, to mail fraud and money laundering. Other crimes management may become involved in include tax evasion and possession of stolen property. Beyond the heinous nature of these crimes, the corporate entity will have a lesser chance of emerging from bankruptcy. 
The Sarbanes-Oxley Act (SOX), and in particular Section 802, provides guidance for internal and external auditors as well as investigators of fraud, and provides penalties for certain fraudulent activities: “Whoever knowingly alters, destroys, mutilates, conceals, covers up, falsifies, or makes a false entry in any record, document, or tangible object with the intent to impede, obstruct, or influence the investigation or proper administration of any matter within the jurisdiction of any department or agency of the United States or any case filed under title 11, or in relation to or contemplation of any such matter or case, shall be fined under this title, imprisoned not more than 20 years, or both” (Sarbanes-Oxley Act, 2002). Likewise, sponsors of the Dodd-Frank Act had similar aspirations. Bryan concludes that “managers engage in fraudulent financial reporting for two primary reasons. The first reason would be manager's personal gain, such as reporting better results before stock issuances, reporting results in line with analyst forecasts (or prior years) to avoid decline in stock prices, and manipulating results to increase managers' bonuses, stock options, etc. The second reason is to mask financial distress in order to avoid going concern modifications of audit opinions, debt covenant violations, and bankruptcy” (Bryan, Janes and Tiras, 2014). Common law fraud is the intentional or grossly negligent false representation or concealment of a material fact in order to induce another person to rely on the representation to act, or omit to act, thereby causing injury as a result. Bankruptcy fraud is defined by several provisions of federal bankruptcy law. Section 152 (2014) provides for imprisonment of up to five years and/or a fine for knowingly and fraudulently causing the concealment of assets, false oaths and claims, and bribery. 
It includes concealment from a custodian, trustee, marshal, or other officer of the court of any property belonging to the estate of a debtor; making a false oath, account, declaration, or claim against the estate; receipt of property to defeat the lawful entitlement of others; and transfer of property. Additional punitive sections of the Bankruptcy Code cover embezzlement of property belonging to the estate of the bankrupt person, unlawful receipt of property from the estate, and unlawful fee arrangements to conceal said property (Sec. 152, 2014). §157 specifically relates to “Bankruptcy Fraud.” It provides that “a person who, having devised or intending to devise a scheme or artifice to defraud and for the purpose of executing or concealing such a scheme or artifice or attempting to do so” files a petition or document, or makes a false or fraudulent representation in bankruptcy, including a fraudulent involuntary petition, becomes subject to imprisonment of up to five years and/or a fine. There are additional statutory provisions that may be used in prosecuting fraudulent bankruptcy filings. They include mail fraud, which prohibits the use of the mails to make misrepresentations to victims; wire fraud, which prohibits the use of interstate wires to make misrepresentations to victims; money laundering, which prohibits the transfer of funds to an insider's account to purchase other assets available to the insider; and tax fraud, the failure to pay taxes on concealed assets, along with related offenses (26 U.S.C. §§ 7202, 7203, 7206, and 7212). Note that third parties in addition to the debtor may be liable for bankruptcy fraud by knowingly concealing property from the trustee or other court officials, such as by transferring money to personal accounts, backdating receipts, or destroying documents related to the business's financial affairs (Swartz, 2014). Corporate fraudulent financial reporting. 
Prior to the passage of the Sarbanes-Oxley Act and the Dodd-Frank Act, companies often fraudulently filed statements with the SEC, misstating assets and revenues. Some 350 alleged accounting fraud cases were investigated by the Securities and Exchange Commission (SEC) over the period 1998-2007, uncovering financial fraud with a median fraudulent amount of $12.1 million at median companies with assets and revenues just under $100 million. Many of these companies were later compelled to resort to bankruptcy filings (Accounting Today, 2010).
Dr. Gordon Arbogast, Professor, Jacksonville University, FL
Sharon Van Den Heuvel, Jacksonville University, FL
COMPANY X is a non-profit, community-owned utility in Florida providing electric, water, and sewer services to the residents of a major Florida city and the surrounding area. The focus of this paper is on identifying factors that influence demand for water so that COMPANY X (Co X) may better manage water withdrawals from the Floridian aquifer. This paper explores the need for water conservation to help Co X meet several strategic objectives, namely (1) improved environmental stewardship, (2) minimizing the threat of regulatory violations, (3) improved Corporate Social Responsibility (CSR) activities, and (4) improved customer satisfaction. The research also identifies and determines a “best” multivariate model to help explain variation in water withdrawals. The final model explains 58.6% of the variation in the volume of water withdrawn from the aquifer. The regression equation is as follows: Prod Gallons = -7.90E08 + 29,656,333(Average Temperature) - 89,421,584(Unemployment Rate) + 15,492,636(CPI). The paper concludes with recommendations for further research into additional independent variables, as well as into differentiating between commercial and residential water consumption, to help COMPANY X focus on programs that conserve water and meet its strategic objectives. COMPANY X is a non-profit, community-owned utility in Florida providing electric, water, and sewer services to the residents of a major Florida city and surrounding areas. The utility serves an estimated 420,000 electric, 305,000 water, and 230,000 sewer customers in the Florida area it serves. The focus of this paper is on Co X's Water Division. In particular, it identifies factors that influence demand for water consumption so that Co X may manage water withdrawals from the Floridian aquifer to serve the water needs of its community. 
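For concreteness, the fitted equation reported in the abstract can be wrapped in a small prediction function. The coefficients are those stated in the paper; the sample inputs below are purely illustrative, and since the paper does not say whether the unemployment rate enters as a percentage or a fraction, a percentage is assumed here.

```python
# Coefficients are those reported in the paper's fitted equation (R^2 = 0.586);
# the input values below are hypothetical and for illustration only.
def predicted_prod_gallons(avg_temp: float, unemployment_rate: float,
                           cpi: float) -> float:
    """Predicted aquifer withdrawals (gallons) from the three-variable model."""
    return (-7.90e8
            + 29_656_333 * avg_temp
            - 89_421_584 * unemployment_rate
            + 15_492_636 * cpi)

# Example: 80°F average temperature, 6% unemployment (assumed to enter as a
# percentage), CPI of 230 -- all illustrative inputs.
print(f"{predicted_prod_gallons(80.0, 6.0, 230.0):,.0f} gallons")
```

With these illustrative inputs the model predicts roughly 4.6 billion gallons; the negative coefficient on unemployment matches the paper's finding that economic slowdowns suppress water demand.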
It should be noted that COMPANY X is a non-profit firm, not in the business of making money but rather of doing the right thing for the community (Corporate Social Responsibility, or CSR). COMPANY X's strategic focus on management of withdrawals from the Floridian Aquifer stems from several factors, starting with COMPANY X's vision of being the “Best Utility Service Provider in the Nation”. To achieve this vision, COMPANY X has established several key objectives that directly tie into the need for this research: “Leverage and preserve our expertise in asset deployment, operational excellence, environmental stewardship and financial management to lead a cultural transformation, passionately focused on winning customers' loyalty, strategically valuing all stakeholder relationships and realizing the untapped potential of all employees” (COMPANY X Board Briefing, January 15, 2013). Environmental stewardship is, not surprisingly, a strategic focus item for any company in the utility industry. The external driving force (threat) of regulation makes this a priority for all utilities. COMPANY X is no exception, as it is heavily regulated in terms of its environmental footprint. An external analysis of the utility's environment confirmed that this threat remains a high priority for the entire utility industry as well. This is reflected in the mega-trend of the Water/Electric Nexus (Black & Veatch Study, 2012). This trend shows that water scarcity is the #1 concern for electric companies. Specifically, electricity is the #1 cost of water production: it takes a great deal of energy to produce water, accounting for 85% of the total cost. Since COMPANY X is both a water and an electric company, the impact is twofold. First, water conservation is key to sustaining the electric side of the business, and second, a reduction in water volume produced (conserving water) reduces costs for the water side of the business. 
There are additional savings in that (1) less chemical treatment is required, (2) there is less wear and tear on infrastructure, and (3) a reduced energy footprint brings its own cost savings, all leading to reduced costs and reduced rates for COMPANY X consumers. Winning customer loyalty and water conservation are linked not only through reduced rates for customers but also through Corporate Social Responsibility (CSR). In a recent analysis of the JD Power Utility Customer Satisfaction survey conducted on utilities across the nation, the COMPANY X Black Belt department showed a direct correlation between customer satisfaction (loyalty) and CSR. Over the past few decades, CSR has played an increasingly important role in corporate sustainability. This is especially true for utilities, as they are by their very nature intimately tied to environmental concerns and attract the attention of city, state, and federal government agencies. The correlation showed that customers who were aware of the utility's CSR activities rated the utility higher than those who were not aware, making it increasingly important for COMPANY X to promote CSR activities such as water conservation (Johnson, 2013). This need is further magnified by the fact that COMPANY X did not fare well on the JD Power survey when compared to other utilities. This identified weakness is why customer satisfaction is the #1 strategic focus item for the COMPANY X CEO, who was appointed in 2012. Since that time he has been developing his strategic plan. To date, the review of the current situation and of both the external and internal environments has shown that water conservation plays a key role in several of the SWOT items identified, and as such it will play a key role in the strategic plan. 
Specific objectives have yet to be determined, but in order to set goals COMPANY X needs to understand the drivers impacting demand for water before setting specific objectives. The CEO's leadership is shifting COMPANY X to be more externally focused and to adopt a classic Mission Culture. This research is intended to provide insight into (1) opportunities to improve environmental stewardship and CSR activities, (2) how to minimize the threat of regulatory violations, and (3) how to improve customer satisfaction. Water conservation is very much at the center of strategic planning at COMPANY X.
Dr. Heather L. Garten, Embry-Riddle Aeronautical University Worldwide
There has been a large movement over the past ten years to describe, both qualitatively and quantitatively, the economics of the current diverse generation. This generation is often presented as the XY-Generation and is generally identified with those born since the 1960s. Research across all modalities constantly contradicts itself on the characteristics of the XY-Generation, as this group of individuals is dynamic and thus impossible to categorize. Once this fact is recognized, leaders can begin to understand economic advancements without relying on the current leading indicators, which are no longer applicable. This paper focuses on providing a better understanding of the existing economic trend of the XY-Generation through a carefully crafted analogy to the Middle Ages, an examination of case studies focusing on the current economic indicators, and suggestions for current leadership on empowering this highly innovative generation. Historians have categorized, and continue to categorize, time segments in history: from the great Roman Empire to the rise of democracy, from the Dark Ages to Romanticism, historians use a wide variety of ways to classify time periods. Most often these classifications are aligned with economic and cultural ties, as with the Dark Ages, when there was supposedly very little cultural and economic progression. Or so is often accepted as truth. The Dark Ages, or the Middle Ages, began when the Roman Empire fell to the Visigoths, a Germanic people, in 410 AD. The period lasted until the fifteenth century, when the Renaissance began to take shape across Europe. When the Roman Empire fell, the once strong unification of Europe also fell (Columbia, 2013). The Roman Empire, led by Augustus, was formed following the assassination of Julius Caesar in 44 BC. Garnering strength from its law and military power, Augustus successfully expanded the Roman Empire to Asia Minor, Syria, Egypt, and the North African coast before his death in 14 AD. 
The Roman emperor Trajan further extended the Roman Empire to northern Britain, the Black Sea, and Mesopotamia by 117 AD. Rome became a city of wealth, reflected in its grand monuments and approximately one million inhabitants. However, in the fifth century, German pressures along with those from the Persians succeeded in destroying the Roman Empire (Roman, 2008). Given the contrasting perceived cultural successes of the Roman Empire, the following time period, the Middle Ages, was given the name of the Dark Ages. During this time, much of Europe was pocketed with small, diverse political units, vastly different from the unified Roman rule. Moreover, these sectors had their own cultural identifiers, making it difficult for historians to classify the Middle Ages with a single identifier. As the Middle Ages progressed into the eighth century, Christianity dominated the region, as did feudalism, a landholding system that contained a division of classes. Eventually the Christian Church became the sole ruling institution during the latter, or High, Middle Ages. This time period is infamous for the Crusades, holy wars waged by European Christians to regain control of the Holy Land from the Muslims (Columbia, 2013). Scholars are finding that the Middle Ages were not dark or bleak, and the term ‘dark’ does not accurately describe the once overlooked and unknown achievements of this historical time period. As early as 1969, A.R. Bridbury wrote a charismatic and emotional testimony to academia's misclassification of the Middle Ages as ‘dark’: Great scholars, like other great men, cast long shadows. We pay heavily for their insights. Their thoughts set in a mould which becomes a prison for lesser men; and subsequent work done on problems transfigured by their gifts look more like exegesis than a voyaging through strange seas of thought alone. The economic history of Europe in the early Middle Ages is just such a mould…Pirenne held the world in thrall for half a century. 
He contended that Europe's essential characteristic, until the age of Islam, was its dependence on the Mediterranean (Bridbury, 1969). Bridbury's passionate stance, revolving around scholars' religious claims about the Middle Ages, accurately sets the stage for what historians are now discovering about this not-so-dark time. A recent article in Forbes describes how researchers are now discovering a wide array of advancements that took place in the Middle Ages, more specifically the early Middle Ages (Dorminey, 2014). Given the collapse of the strong ruling power of Rome, people of the early Middle Ages were faced with a great challenge, and had no outside leadership to guide them through it. Similar to Bridbury's passionate rebuttal of Pirenne, it was a scholar by the name of Petrarch who led the idea that classified the Middle Ages in Europe as the Dark Ages (Dorminey, 2014). Although past historians classified advancement by a limited definition of technology, scholars such as Benjamin Hudson at the University of Pennsylvania recognize advances in the early Middle Ages that are just as laudable as standard technological achievements. The Roman Empire depended on slave labor, and thus did not have a pressing demand to improve technology that minimized the need for human effort (or, in pure physics terms, work). Thus, when the Roman Empire fell and slave labor decreased, former Roman citizens had to seek other means to produce the goods they needed. In Scandinavia, metal refining techniques were themselves refined. In the tenth century the windmill was created as a renewable energy source in Northern Europe, as during the winter months the inhabitants' usual energy source, water, was frozen (Dorminey, 2014).
Inflation Expectations in the United States: Economic, Political, and Sentiment Factors
Dr. Doina Vlad, Seton Hill University, Greensburg, PA
This paper investigates the nature of the association between inflation expectations and selected economic, political, and sentiment factors in the United States. One result revealed that market participants hold too much of their assets in cash, savings accounts, and money market mutual funds. This finding signals the need for greater financial literacy so that investors and households understand the negative consequences of inflation acting as a hidden tax on purchasing power, and so that they take action and diversify their investments among different asset classes. A second finding pointed to the highly volatile currency environment that pushes investors away from holding the US dollar. In this context of rising currency volatility worldwide, market participants are left guessing which direction the foreign exchange market is heading and which currency or currencies will be the future leaders. Since the most recent recession started in late 2007, there have been many predictions of possible inflation, and even hyperinflation forecasts, for the United States in the near future. For a period of a few years after the Federal Reserve System (FED) first introduced its quantitative easing measures, a number of contradictory reactions were noted, since this aggressively introduced tool had not previously been tested. Articles and books poured into the market on the subject of massive future inflation in the United States, predicting the collapse of the dollar, the end of the dollar as the world reserve currency, and many similar “doom” scenarios for the US economy. 
As Turk and Rubino mentioned in their book (2013), “if it was still calculated by the pre-1980 method, the Consumer Price Index (CPI) would have risen at near-double-digit rates over the past few years rather than the 2-to-3 percent that was reported.” On inflationary expectations, they noted that people acting on their fear of higher prices in the future might actually produce an increase in prices today through their excessive demand for some goods, with the intention of stocking up before prices rise. The blame falls on the fiat status of today's currency, which is not backed by gold (or other real assets), and a decrease in people's trust in such forms of money. Market participants who realize the true nature of inflation as a hidden tax that confiscates value, or purchasing power, try to hedge against losing value by reorienting towards more tangible assets that will retain value in case of inflation. There is an extreme view that the United States is on an imminent trajectory into hyperinflation (Schiff and Faber, 2014) due to the recent massive money printing that took place and the government's huge debt level. If hyperinflation takes place, they point out, the US dollar will crash and the financial system will be destroyed. In time, after such doom predictions about inflation did not come true, the force of inflation expectations decreased for a while, then increased again during the debt-ceiling debates and after the FED announced its most recent unlimited quantitative easing phase. Making things even more complicated, a new digital currency entered the market in 2008: Bitcoin. This new virtual currency can only be created by “mining”, which involves solving very complex algorithms, thus eliminating the possibility of too much crypto-money flooding the market. The currency can be stored in virtual “wallets” in the internet-based cloud or on a computer. 
The Bitcoin supply has grown steadily since 2008, but with high volatility in the exchange rates between Bitcoin and real currencies. Although it is a risky and untested new currency, investors from different countries have entered the game of mining, storing, and using this new medium of exchange. This fact might be interpreted as a signal of losing faith in the existing fiat currencies around the world and an increased willingness to take on higher levels of risk by betting on a different currency direction in the future. On the other hand, the opposing view on inflation expectations responds that the United States should not be concerned with inflation while its economy still struggles with a relatively high level of unemployment (Fuhrer, 2011). Although the United States has a sluggish economy, it represents a better alternative than the current uncertainties of Europe and its Euro problems, or the Middle East and its major political unrest. In light of this background controversy, this paper looks into the most recent expectations on inflation in the United States in relation to economic, political, and sentiment factors. More precisely, the paper attempts to answer the following questions: 1. Is the fear of future inflation a factor that affects decisions made today in the United States? 2. If so, which factors are the most important ones in relation to inflation expectations for the future? 3. Finally, if inflation expectations are prevalent today, what are the consequences for decisions made in the economy today as an attempt to hedge against the negative effects of inflation in the future? The remainder of the paper is organized as follows: section 2 explains the theoretical framework, section 3 goes over the empirical results from the research, and section 4 points out some conclusions and recommendations. 
Like any other expectations, inflation expectations can be generated not only by rational economic, political, and social factors, but also by less than rational factors such as sentiment, fears, and individual perceptions. On the rational side, one can look back into history and find examples of inflation and hyperinflation episodes linked with massive money printing by the monetary authorities in different countries; since the FED has been pursuing such a policy through its rounds of quantitative easing for more than five years now, some investors are wary about the possibility of high inflation in the United States in the future. The less than rational factors, or the perceptions underlying inflation expectations, can be generated not only by economic factors but also by social, psychological, political, and other types of considerations, so it is a challenge to capture their effects when using standard economic models. This paper looks into two such factors and their influence on inflation expectations in the United States: political uncertainty and consumer sentiment.
Financial Reporting of Other Postemployment Benefits—Towards More Transparency
Dr. Chunyan Li, Pace University, NY
Dr. Roberta Cable, Pace University, NY
Patricia Healy, Pace University, NY
Governments have seen extensive changes in the financial reporting of Other Postemployment Benefits (OPEB). The Governmental Accounting Standards Board (GASB) recently issued two new Exposure Drafts to further improve transparency and comparability. New GASB OPEB accounting standards will be adopted after the Exposure Drafts are approved. The purpose of this paper is to shed light on past and current reporting, as well as on how significantly GASB will change the way governments report their OPEB liabilities in the future, through a comprehensive review of 12 U.S. counties’ reporting of OPEB obligations and unfunded liabilities from 2006 to 2013. Our research analyzes the stages of OPEB reporting from Pay-as-You-Go, to recognition of incremental net OPEB obligations, to requiring essentially all unfunded actuarially accrued liabilities to be recognized on governmental financial statements. We found inconsistencies in the 12 counties’ current use of discount rates and other actuarial methods, supporting GASB’s claims that the new standards will make governmental financial statements more comparable. Moreover, by examining current disclosures, we documented the likely magnitude of the increase in liabilities. Our study revealed that, after the new GASB standards are implemented, reported OPEB liabilities may increase to as much as six times the amounts currently disclosed. Although the impact of pension costs on municipal budgets and taxpayers has received a great deal of attention (Cable and Healy, 2011), the important issue of Other Postemployment Benefits (OPEB) has often been overlooked. Governmental Accounting Standards Board (GASB) Chair David Vaudt stated, “OPEB – which consists of mainly health care benefits – represents a very significant liability for many state and local governments, one that is magnified because relatively few governments have set aside any assets to pay for those benefits” (GASB, 2014, p. 1). 
Eyre (2013) observed that many of these liabilities have grown significantly over time. A Pew Charitable Trust report presented an estimate of the funding shortfall. This report found that in addition to more than $750 billion in unfunded pension obligations, states had more than $625 billion in unfunded health care obligations, which make up the majority of OPEB promises (Tysiac, 2014). Pew also studied the thirty largest municipalities and found that they had $225 billion in unfunded liabilities - $121 billion in pensions and $104 billion in retiree and other non-pension benefits (Gilroy, 2013). David Vaudt stated, “It is vital, therefore, that taxpayers, policymakers, bond analysts, and others receive more and better information about these benefits so that they can better assess the financial obligations and annual costs related to the promise to provide OPEB” (GASB, 2014, p. 1). As a result, in May 2014, GASB issued two new Exposure Drafts. These Drafts are the first step in approving new accounting standards aimed at improving the current ones relating to financial reporting of OPEB by state and local governments. The first Exposure Draft, Accounting and Financial Reporting for Postemployment Benefits Other Than Pensions (OPEB Employer Exposure Draft), proposes guidance for reporting by governments that provide OPEB to their employees and for governments that finance OPEB for employees of other governments. The second Exposure Draft, Financial Reporting for Postemployment Benefit Plans Other Than Pension Plans (OPEB Plan Exposure Draft), addresses reporting by the OPEB plans that administer those benefits. Stakeholders were asked to provide their comments in late August 2014, and GASB will host public hearings on the Exposure Drafts in September 2014. 
New OPEB accounting standards are expected to be approved and become effective for fiscal years beginning after December 15, 2015 for plan requirements and after December 15, 2016 for employer requirements. David Vaudt stated, “These proposed standards will usher in for OPEB the same fundamental improvements in accounting and financial reporting approved by the Board in 2012 for pensions” (GASB, 2014, p. 1). The purpose of this paper is to shed light on past and current reporting, as well as on how significantly governmental accounting will change the way governments report OPEB liabilities in the future, through a comprehensive review of 12 U.S. counties’ reporting of OPEB obligations and unfunded liabilities from 2006 to 2013. This paper analyzes the stages of OPEB reporting from Pay-as-You-Go, to recognition of incremental net OPEB obligations, to requiring essentially all unfunded actuarially accrued liabilities to be recognized on governmental financial statements. Two objectives of the new GASB Exposure Drafts are improving comparability and transparency. Our study found inconsistencies in the 12 U.S. counties’ current use of discount rates and other actuarial methods, supporting GASB’s claims that the new standards will make governmental financial statements more comparable. Moreover, by examining current disclosures, we documented the likely magnitude of the increase in OPEB liabilities. Prior to 2004, when GASB 43 and 45 were issued, OPEB expenses were reported on a Pay-as-You-Go basis. Accrual accounting for those expenses was not required. GASB 12 required descriptive disclosure information, including the annual expense amount. GASB 26 added additional schedules about the expenses and funding status of the plans. GASB 43 and GASB 45 were major updates in reporting requirements. 
Following the general principles of government pension reporting at that time, GASB 43 and GASB 45 set forth standards to address how governments should determine, account for, and report both the annual cost and the outstanding financial obligations relating to OPEB. GASB 43 requires state and local governments to issue a Statement of Plan Net Assets and a Statement of Changes in Plan Net Assets, along with multiyear schedules in the notes to the financial statements. The first statement provides information about the composition of the assets, liabilities, and net assets of the plan held in trust for OPEB, while the second tracks the year-to-year changes in plan net assets. The plans must report the current funded status as well as the most recent actuarial valuation date. GASB 43 requires that a defined benefit OPEB plan obtain an actuarial valuation (GASB, 2004). GASB 57 amends this requirement: it states that the valuation requirement can be satisfied for an agent multiple-employer OPEB plan by reporting an aggregation of the results of actuarial valuations of the individual-employer OPEB plans, or of measurements resulting from the use of the alternative measurement method for eligible individual-employer OPEB plans (GASB, 2009).
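The comparability problem created by inconsistent discount rates can be made concrete with a simple present-value computation. The benefit stream and rates below are purely hypothetical illustrations, not data from the 12 counties studied; the sketch only shows the direction and rough magnitude of the discount-rate effect on a reported OPEB liability.

```python
def present_value(annual_benefit: float, years: int, rate: float) -> float:
    """Present value of a level stream of annual OPEB benefit payments,
    discounted at `rate` (ordinary annuity, payments at year end)."""
    return sum(annual_benefit / (1 + rate) ** t for t in range(1, years + 1))

# Hypothetical obligation: $10 million of benefits per year for 30 years.
pv_funded   = present_value(10_000_000, 30, 0.075)  # higher, funded-plan style rate
pv_unfunded = present_value(10_000_000, 30, 0.035)  # lower, bond-like rate

# The same promised benefits produce a much larger reported liability
# under the lower rate, so two governments with identical plans can
# report very different numbers -- the inconsistency GASB's drafts target.
```

A lower discount rate always inflates the present value of the same benefit stream, which is why the choice of rate, and not just the benefits promised, drives the reported liability.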
Moments of Shared Sensemaking Within Market-Focused Strategic Planning Meetings
Dr. David Allbright, Eastern Michigan University, Ypsilanti, MI
Strategy theorists assert that long-term survival of an organization within a turbulent, ever-changing, hyper-competitive and global economy requires the firm to possess an advanced capacity for rapid and continuous learning about markets. Perhaps the ultimate source of sustainable competitive advantage is the development of a superior capacity to learn about evolving market trends and translate what is learned into strategic action more quickly, more continuously and more effectively than one's competitors (Day, 1991, 1992, 1994). This article outlines a normative process model of verbal collaboration that ideally would be pursued within strategy development meetings in which a group of organizational actors would endeavor to serve as the "collective mind" of their firm in order to "see" and interpret the meaning of selected marketplace circumstances. The process model of strategic dialogue visually deconstructs strategy meetings into six ideal "moments" of shared attention that would be pursued by the group while it vacillates between sensemaking and action planning endeavors. Strategic planning meetings provide an excellent forum and opportunity for collaborative organizational learning and transformational change (de Geus, 1988). This article spotlights a vital learning component often overlooked within organizational planning and strategy development endeavors -- collaborative sensemaking dialogues. Organizational sensemaking (Weick, 1995) involves collaborations in which a group of organizational actors attempt to interpret market intelligence, and simultaneously construct new meanings, beliefs and mental models (Senge, 1992) regarding their firm's relationship with its environment. Essentially, a group of organizational strategists may collaborate for the purpose of acting as the collective "mind" of their firm. During strategic planning meetings, discussants often try to make sense of confusing, complex and equivocal marketplace events. 
Participants can choose to openly share their respective mental models in order to reflect on what each "sees" and believes to be the firm's current market circumstances. And, discussants may share their respective preferences for their firm to pursue selected action plans toward enacting future desired marketplace change (Gioia and Chittipeddi, 1991). But, what are the actual steps required to surface and share various individual understandings of market reality and then forge them into a collective organizational mind? It would be helpful to more clearly envision the process of shared sensemaking by which a group would surface and reflect upon disparate beliefs and conflicting desires regarding a firm's strategic circumstance. How might varied beliefs and desires be reconciled into a collective viewpoint that could serve to inform and motivate strategic organizational action? We offer a normative process model of collaborative sensemaking and action planning (see Figure 1) which visually illustrates selected moments of strategic dialogues when a group of organizational actors ideally would attempt to forge a collective understanding of the marketplace reality faced by their firm. We outline important moments when dialogue participants should be encouraged to openly share, construct, critique and update their beliefs about their firm's market reality. We suggest that collaborative learning and shared sensemaking may require both rationally "cool" and emotionally "hot" discussions to facilitate "open" and candid critiques of an articulated vision of reality. Furthermore, we illustrate how strategic dialogue often involves a vacillation between two distinct yet complementary modes of dialogue: sensemaking and action planning. Toward that end, we survey selected relevant literatures that provide theoretical concepts which later are integrated into a process model of strategic dialogue. 
Organizational strategy theorists call for researchers to gain a better understanding of how to build a market-oriented learning organization (Slater and Narver, 1995) that would have an advanced capacity for acquiring and disseminating market intelligence throughout the firm in order to promote a rapid, organization-wide strategic response to that intelligence (Kohli and Jaworski, 1990). But, that same literature admits that little is known about the social cognition process of shared interpretation that a group of organizational actors would utilize to construct their collective vision of marketplace reality. In particular, an important learning element “often overlooked in many strategic planning systems is the development of a [collaborative] process for the critical assessment of key assumptions about the business and its environment” (Slater and Narver, 1995; p. 70). Similarly, Sinkula (1994) advises organizational researchers to clarify the “market information processing” and shared “sensemaking mechanisms” which groups of organizational actors use to assign meaning to equivocal and complex market events and trends. Collecting and disseminating data from the market environment is a relatively straightforward activity. The more difficult endeavor is the subsequent act of interpreting and constructing meaning from incoming marketplace intelligence. Raw data needs first to be absorbed, reflected upon and interpreted by each organizational actor. Subsequently, those same actors must translate the meaning of the intelligence into strategically actionable advisements and shared commitments to implement strategic action plans. Therein lies additional complexity... a process of social cognition requiring collaborative efforts to share and translate many individual beliefs held by various organizational actors into a collective viewpoint. 
Sinkula (1994) reminds us that market intelligence requires a process of shared interpretation in which a group of organizational actors would engage in the construction of a collective understanding of the firm's current strategic positioning in its marketplace. Similarly, Weick (1995) regards shared sensemaking to be an ongoing social cognition process centered on dialogue, interaction, interpretation, meaning construction and improvisational action. Continuous interpretation, sensemaking and meaning construction endeavors are strategically vital to the firm, because they serve to enhance the organization’s capacity for the continuous learning and coordinated transformation and change needed to help ensure the firm's long-term survival within a turbulent marketplace (Day, 1992). It has been suggested that open dialogues (Argyris, 1991; Schein, 1993) provide increased opportunities for deep reflection and collaborative learning that may lead ultimately to transformational change for the entire organization. The concept of openness refers to occasions when a group of organizational actors volunteer to candidly articulate, share, clarify, reflect upon, openly critique and/or update their respective individual mental models (Senge, 1992) regarding "what is happening" in their world. As colleagues openly share their individual beliefs and preferences regarding a selected strategic circumstance, the warrants for their claims of knowing or believing can be examined and critiqued. The group can examine questions such as "What do you believe is going on?" as well as "Why do you believe that?" And, the reasoning behind pursuing a current strategy may be critiqued.
The Use of the Relationship Between Market Orientation and Business Performance to Determine Marketing Strategy
Dr. Richard Murphy, Jacksonville University, Jacksonville, Florida
Dr. Mary Werner, Jacksonville University, Jacksonville, Florida
The relationship between market orientation and its effect on business performance has been established (Murphy et al., 2013). Using this information to develop appropriate marketing strategy would be very helpful for organizations. This article provides additional information organizations can use to measure market orientation and business performance, and discusses the relationship among market orientation, business performance, and marketing strategy. It is an extension of and addition to the information presented in Murphy et al. (2013), where the effect of market orientation on business performance was discussed and it was illustrated that market orientation does affect business performance. Organizations that use the information in this article can adopt appropriate approaches to making decisions about the formation of marketing strategy to help achieve a positive level of business performance. Market orientation should be incorporated in marketing strategy to help achieve positive business performance. Market orientation, or an organization's focus on the satisfaction of the customer, has long been recognized as the foundation upon which marketing is practiced. Although there seems to have been a presumption that market orientation would enhance business performance, Murphy et al. (2013) provide important information indicating that this is generally the case under various circumstances. Murphy et al. (2013) also indicate that there should be further investigation into the various situations and circumstances under which the market orientation and business performance relationship applies, so that marketing decision making, or the determination of marketing strategy, can be carried out more effectively. The purpose of this study is to provide additional information, beyond Murphy et al. (2013), that will more completely and comprehensively enable organizations to determine effective marketing strategy. 
This will be done by providing additional information regarding the measurement of market orientation and business performance, and additional information about the market orientation and business performance relationship and its link to marketing strategy. Most studies of market orientation employ either the MKTOR or the MARKOR scale (Farrell, 2000; Harris, 2002; Harris and Ogbonna, 2001; Matsuno, Mentzer, and Rentz, 2000). Studies that have adopted the MKTOR scale and approach include Day (1994); Deshpande, Farley, and Webster (1993); Han, Kim and Srivastava (1998); Harris (2001); Langerak (2003b); Pelham and Wilson (1996); and Sargeant and Mohamad (1999). Those adopting the approach and items from MARKOR include Atuahene-Gima (1996); Matsuno and Mentzer (2000); Rose and Shoham (2002); Ruekert (1992); and Siguaw, Simpson, and Baker (1998). While the two scales are popular in the market orientation literature, researchers have identified significant problems with them and their underlying studies, primarily focused on limitations of the scales’ psychometric properties. Key criticisms leveled against MKTOR address face validity. Kohli, Jaworski, and Kumar (1993) contend that MKTOR includes items outside the specific behaviors or activities of market orientation. Siguaw and Diamantopoulos (1995) further contend that the scale items are not completely aligned with the dimensions originally presented by Narver and Slater in their propositions. Structural fit is also an issue. Siguaw and Diamantopoulos tested three models of the MKTOR scale—a unidimensional, a multidimensional, and a multidimensional with a general factor—and found the last to fit best, albeit with no theoretical foundation for the general factor and with results that were “not particularly impressive” (p. 85). MARKOR also has its detractors. Matsuno, Mentzer and Rentz (2005) argue that the scale “only represents a limited number of stakeholder domains. 
It captures mostly customers and competitors as focal domains for understanding the market environment and does not explicitly address how other market factors suggested in the literature…may influence competition and customers” (p. 528). Structural issues exist with MARKOR as well (Kohli et al., 1993; Matsuno and Mentzer, 2000; Siguaw et al., 1998). MKTOR is found superior to MARKOR when considering convergent and discriminant validity (Pelham, 1997), efficiency (Matsuno et al., 2005), and predictive validity (Matsuno et al., 2005; Oczkowski and Farrell, 1998). In meta-analyses, the results are mixed: MARKOR is found to have stronger effects on business performance (Ellis, 2006), whereas Rodriguez Cano, Carrillat, and Jaramillo (2004) find higher reliability for MKTOR. Additional concerns leveled at both scales address the fact that each relies exclusively on informants within the firm rather than on an external perspective (Gotteland, Haon, and Gauthier, 2007), employs single rather than multiple informants (Farrell and Oczkowski, 1997), and includes no items that compare the firm's results to those of competitors (Harris, 2002). Reflecting both the ongoing debate between the cultural and behavioral perspectives and the maturing of research in each of these areas over the last twenty years, three trends in market orientation conceptualization and research mark the last decade: (a) the integration of the behavioral and cultural perspectives, (b) the extension of the organizational environment relevant to market orientation, and (c) the extension of market orientation to include both market-driven and driving-markets approaches. Rather than viewing the MKTOR and MARKOR measurement scales of market orientation as competitive or as interchangeable (Deshpande and Farley, 1998), new research treats the cultural and behavioral conceptualizations as complementary and supportive concepts – and, thus, MKTOR and MARKOR as complementary scales. 
Building upon the “structure-conduct-performance” paradigm of Thorelli (1977), recent research asserts that “organizations adopt first a cultural orientation and then develop consistent behaviors” (Gonzalez-Benito and Gonzalez-Benito, 2005, p. 700). Matsuno, Mentzer and Rentz (2005) conceive of the MKTOR cultural construct as structure and, thus, an antecedent to the conduct construct as depicted by MARKOR. Performance then is the outcome of market-oriented behaviors. However, their results were inconclusive, and they note that “it is conceivable that the organization culture articulated by Narver and Slater (1998) may be a necessary but not sufficient condition to market-oriented behaviors” (Matsuno et al., p. 7). Carr and Lopez (2007) build upon the Matsuno, Mentzer and Rentz (2005) model and similarly treat MKTOR as a cultural antecedent of market-oriented behaviors in multidimensional constructs. Through path analysis, they find empirical support for the influence upon employee responses. Homburg and Pflesser (2000) employ MARKOR to measure market-oriented behaviors, but reject MKTOR as a measure of market-oriented culture in favor of a multi-layer set of measures reflecting research by Schein (1992), in which corporate culture is described by the values, norms and artifacts within an organization. As a result, Homburg and Pflesser (2000) show that a market-oriented culture precedes the behaviors characterizing a market-oriented organization.
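Since much of the MKTOR/MARKOR debate turns on psychometric properties, it may help to see how one standard reliability statistic, Cronbach's alpha, is computed for a multi-item scale. The Likert responses below are invented for illustration and are not drawn from any of the cited studies.

```python
def cronbach_alpha(items: list[list[float]]) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance
    of respondent totals). `items` holds one score column per scale item."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_var_sum = sum(var(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - item_var_sum / var(totals))

# Three hypothetical 5-point Likert items answered by six respondents.
scores = [
    [4, 5, 3, 4, 2, 5],
    [4, 4, 3, 5, 2, 5],
    [5, 5, 2, 4, 3, 4],
]
alpha = cronbach_alpha(scores)  # roughly 0.88 for this made-up data
```

Values of 0.70 and above are a common rule-of-thumb cutoff for scale reliability; criticisms of MKTOR and MARKOR go beyond this single statistic to face validity and structural fit, which alpha alone cannot detect.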
U.S. Investors Continue to Take Foreign Risks
Dennis C. Stovall, Grand Valley State University
Citizens of the United States have been developing a curiosity about sending their personal investment dollars overseas. New options are available for local investors who are comfortable with electronically sending their money through a United States-based website into the foreign currency markets. Online trading gives investors the ability to invest in overseas markets without having to move there. American investors tend to hold investments in both U.S. and non-U.S. stocks. Foreign currency trading has become a hot investment strategy over the last decade and is very popular with individuals. Multiple rates and terms are available for these investments, each with its own potential risks and rewards. After navigating terms, rates, and fees, the investor must always remain mindful of the economic climate affecting the investing country. This topic was presented in 2005. At that time, the popularity of Eurodollar funds had been growing as the familiarity and stability of the Euro increased. Since 2005, changes in currency trends, the economic climate, and other factors have brought many changes to foreign currencies, which can have an impact on U.S. private investors. Although foreign currency trading has become the latest fad, it is important for investors to be aware of the risks involved with investing online. As foreign currency markets are made electronically available to investors, curiosity grows, but knowledge, tenacity, and risk tolerance are imperative to potential reward. For individuals, investing based on perceived currency movements has always been a risky way to seek profits (Opdyke, 2005). Currencies are volatile investments that can rack up losses for anyone. Professional currency traders, who have a better understanding of the market, are less at risk of experiencing losses. It is difficult for investors to time changes in the market as they relate to currencies. 
Investors are warned to be aware of different fee structures, which could reduce any investment gains. Technological innovations have provided the opportunity for private investors to easily and conveniently monitor currency markets and trade through intermediaries. This is an attraction for private investors and a way for them to trade foreign exchange currencies. Adding non-dollar-denominated assets to a dollar-denominated investment portfolio diversifies the portfolio and reduces its risk (NewsCore, 2012). It is critical for investors to always remain mindful of the market. Currencies historically tend to move in sweeping trends, which typically last five to seven years (Opdyke & Karmin, 2004). When this topic was presented in 2005, the dollar was weak relative to the Euro and many investors were sending their money through U.S.-based websites into foreign exchange markets, particularly into the Euro. At the end of 2004, the dollar reached its all-time low against the Euro since the Euro's debut in 1999, and multi-year lows against other major currencies. Since 2005, the dollar has recorded gains against the British pound, Japanese yen, and the Australian dollar (Opdyke, 2005). Now, in 2014, the dollar is strong and the Euro is weak because of the Euro debt crisis and other factors. Planners are telling investors to cut back exposure to investments in foreign currencies or to reduce their positions in international equities. Currencies can create a strong degree of volatility, making it necessary for investors to maintain a constant watch over the economic climate affecting the Euro. Investors are advised to consider buying international stocks when the local currency is down. As long as investors remain alert, many planners encourage clients to allocate a portion of their assets among foreign currencies. Varied holdings within the U.S. can balance risks and rewards. 
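The diversification claim above follows from two-asset portfolio arithmetic: as long as foreign-currency returns are imperfectly correlated with domestic returns, a modest foreign allocation lowers total portfolio volatility. The volatilities and correlation below are hypothetical placeholders, not market estimates.

```python
import math

def portfolio_vol(w_foreign: float, vol_dom: float, vol_for: float, corr: float) -> float:
    """Standard deviation of a two-asset portfolio holding a domestic
    position and a foreign-currency position with weight `w_foreign`."""
    w_dom = 1.0 - w_foreign
    variance = (w_dom ** 2 * vol_dom ** 2
                + w_foreign ** 2 * vol_for ** 2
                + 2 * w_dom * w_foreign * corr * vol_dom * vol_for)
    return math.sqrt(variance)

# Hypothetical inputs: 15% domestic vol, 12% foreign vol, correlation 0.2.
all_domestic = portfolio_vol(0.00, 0.15, 0.12, 0.2)
diversified  = portfolio_vol(0.10, 0.15, 0.12, 0.2)  # 10% foreign allocation
# diversified < all_domestic: the low correlation does the risk-reduction work.
```

The benefit shrinks as the correlation approaches one, which is why planners' five-to-twenty-percent allocation advice hinges on foreign currencies behaving differently from domestic assets.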
Planners recommend that investors put between five and twenty percent of their personal investments in foreign currencies to diversify their overall portfolios. However, foreign currency trading is not for everyone. Since participating in foreign currency trading is extremely risky, it is not recommended that the older generation enter into such risky investments. Instead, it is more suitable for someone who is in the investing part of his or her life and has time to invest and participate in risky investments (Bake, 2003). After its debut in 1999, the Euro increasingly gained strength against other currencies, particularly against the dollar. However, the Euro's strength ended in 2010 when Europe faced a debt crisis. Some claim that Europe is not facing a debt crisis, but a currency crisis. Currency crises are among the most dramatic events in global financial markets (Chiu, Walter, Walton, & Willet, 2009). Crises can vary in their effects and in their causes, but they all have one thing in common: a high degree of exchange market pressure. The currency crisis in Europe is so severe that the future existence of the Euro has been in question. Another reason the Euro is weakening is the objective of the European Central Bank (ECB). The objective of the ECB has been inflation targeting and the use of monetary policy to maintain price stability. Its target for an acceptable inflation rate is below two percent; this differs from the United States Federal Reserve, which is more flexible in establishing an acceptable inflation rate. The Euro's automatic clearing system has also contributed to the weakening of the Euro. The ECB manages an automatic system to clear debits and credits across the entire monetary zone. This has allowed some Euro members to run unlimited balance of payments deficits with other Euro member countries. 
The social and economic barriers affecting the single European market have also caused a decline in the strength of the Euro. Currently, eighteen countries use the Euro as their currency. Therefore, the ECB must consider the impacts of its decisions not on just one economy, but on multiple economies. Although there has been skepticism about the Euro's future existence, the ECB will not allow the Euro to dissolve. The ECB President said in 2012, “Within our mandate, the ECB is ready to do whatever it takes to preserve the Euro. And believe me, it will be enough” (Schwartz, 2013). Although the ECB is optimistic about the Euro, it is critical for investors to remain mindful of events affecting the Euro and the European economy. Exhibit 1.1 below provides a summary of the reasons for the weakening Euro.
Role of User Perceptions and Attitudes on Facebook Ad Click Behavior: Applying the Fishbein-Ajzen-Attitude-Behavior Model
Dr. Orsay Kucukemiroglu, Professor, Pennsylvania State University at York, PA
Dr. Ali Kara, Professor, Pennsylvania State University at York, PA
Previous research shows that communication using social networking sites such as Facebook is an important medium for many consumers. The significant projected growth of online social networking and the growing popularity of Facebook have given marketers justification to shift their advertising strategies toward online social networking sites. Accordingly, advertising on social networking sites has attracted significant marketer interest. Understanding consumers' attitudes toward and perceptions of Facebook advertising should be an important goal for advertisers when designing their advertising strategies. Perceptions and attitudes are known to influence consumer behavior and action in the form of positive or negative responses toward a phenomenon. Therefore, the objective of this study is to examine user perceptions and attitudes regarding advertising on Facebook. The study results offer important implications for advertisers and researchers alike. Online social networking has become a global phenomenon and an integral part of the daily lives of many consumers around the world. According to the Pew Research Center Report (Hampton, Goulet, Rainie, and Purcell, 2011), 79% of American adults used the internet and nearly 59% of internet users said they use at least one social networking site. Studies show that more than one third of online participants review services and products, frequently post opinions, and engage in content creation activities (Riegner, 2007). Facebook, a popular social networking site, allows anybody around the world to sign up for free and helps users easily communicate, follow, and/or participate in real-time discussions with anybody else with a Facebook account. Reportedly, 52% of Facebook users and 33% of Twitter users engage with their respective platforms daily. In addition, studies show that the average Facebook user has 229 Facebook friends, 7% of whom they have never met (Hampton et al., 2011). 
The typical Facebook user regularly interacts with friends by posting messages ranging from very specific aspects of their personal lives to trending issues and their opinions about or experiences with products and companies. Typical online consumers talk with their friends in chat rooms or through instant messaging, share their opinions or feelings about products on their personal blogs, write product reviews, or even, in some cases, post videos on YouTube. It is argued that advertising on social networking sites is effective because of its interactive nature and the marketers' ability to customize messages to individual user needs. Advertisers can design ads according to their specific target customers' needs and reach them via Facebook. In 2013, advertisers spent a total of $66 billion on TV ads, compared with $7 billion spent on Facebook alone (Brown, 2014). Some question the effectiveness of Facebook ads, believing that such ads have little or no impact on consumers' purchase behavior. However, spending on online ads is expected to surge by 15% in 2014 alone (Brown, 2014), indicating that marketers continue to view online ads as an effective way to reach consumers with information about their services and products. Hence, understanding consumers' attitudes and perceptions toward Facebook advertising should be an important goal for advertisers when designing their advertising strategies. A failure to understand consumers' perceptions and attitudes toward Facebook ads could lead advertisers to run ineffective ads or develop unclear messages that cannot influence buyer behavior or even get noticed. Perceptions and attitudes are known to influence consumer behavior and action in the form of positive or negative responses toward a phenomenon. Therefore, the objective of this study is to examine user perceptions and attitudes toward advertising on Facebook. 
Data for the study were collected from a random sample of 368 Facebook users in South Central Pennsylvania. The questionnaire used in the study included various questions about feelings, perceptions, credibility, effectiveness, and purchase behavior. Online communication has become an important platform for consumers to express their opinions about and experiences with products and services (Brown et al., 2007; Davis and Khazanchi, 2008; Xia and Bechwati, 2008). The influence of word of mouth on consumer decision making is well established (Steffes and Burgee, 2009). Increasingly, consumers look for online product and service reviews by their peers before making purchase decisions (Adjei et al., 2009; Zhu and Zhang, 2010) because consumers are more likely to buy a product or service if it is recommended by their peers (Leskovec et al., 2007; Iyengar et al., 2011). This type of word-of-mouth communication has been shown to impact evaluations of consumption experiences (Moore, 2012). Researchers argue that social networks have changed consumer-to-consumer communication and have become an important marketing tool (Chu & Kim, 2011). It is well supported in the literature that attitudes predict consumer behavior (Fishbein and Ajzen, 1975). Fishbein and Ajzen (1975) argued that consumer behavior may be predicted systematically using the attitudes, intentions, and behavior model (Figure 1). Therefore, knowledge of consumer attitudes toward Facebook ads could be used to predict the consumer behavior of clicking on Facebook ads. The model assumes that consumers' intentions are a function of their beliefs about the behavior itself. Hence, in this study, we used the Fishbein-Ajzen model to examine the relationship between consumer attitudes, perceptions, and ad click behavior. The study questionnaire was distributed to residents of South Central Pennsylvania with the help of undergraduate students enrolled in a college. 
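As background, the expectancy-value formulation underlying the Fishbein-Ajzen model (standard in the attitude literature, though not reproduced in this excerpt) expresses the attitude toward a behavior as a belief-weighted sum:

$$A_B = \sum_{i=1}^{n} b_i e_i$$

where $b_i$ is the strength of belief $i$ that the behavior leads to a given outcome and $e_i$ is the evaluation of that outcome. Behavioral intention is then modeled as a weighted function of $A_B$ and subjective norms, and intention in turn predicts behavior, which is the chain the study relies on when using attitudes toward Facebook ads to predict ad-click behavior.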
The study questionnaire included measures for the constructs and demographic/background questions. Respondents were recruited through networking and were assured confidentiality. Students who served as interviewers were offered extra credit toward specific course grades to increase interest and participation in the study. Approximately 400 questionnaires were distributed using personal contacts. Although a convenience sampling procedure was used, we believe that the size of the sample and the wide use of Facebook allow us to test the hypothesized relationships among constructs. Only users with active Facebook accounts were invited to participate in the study. A week later, a total of n=368 completed questionnaires had been returned and were used for the analyses. Measures for consumer perceptions, ad attitudes, and behavior/action were obtained from the literature. Perceptions were measured using a 16-item instrument rated on a 5-point Likert-type scale. Additional statements were used to measure perceptions and behavior, and these measures operationalized the hypothesized constructs in the conceptual model. Both descriptive and relational analyses were used to analyze the data. Gender was approximately equally split in the sample. Given the nature of Facebook users, the data were skewed toward the younger age group. Socializing with friends and family was the main reason subjects reported for using Facebook. Two-thirds of the subjects reported that they had more than 200 Facebook friends, most spent 10 to 30 minutes on Facebook daily, and three quarters used Facebook on a mobile device. We believe that the sample characteristics reported in this study do not significantly differ from the population of Facebook users. We then examined the descriptive measures of ad perceptions and attitudes. 
An examination of the results (Table 1) shows that subjects overall had negative attitudes toward Facebook ads. Study participants agreed with statements that Facebook ads were annoying, irritating, and deceptive. Similarly, they did not agree with statements that Facebook ads were valuable, enjoyable, or honest. The study of subjects' perceptions and intentions regarding Facebook ads (Table 2) shows that they disagreed with the statements conveying positive information and agreed with the statements conveying negative information. "I wish I had a way to block Facebook ads" received strong agreement, while "I frequently pay attention to Facebook ads" received strong disagreement from the subjects. To examine the dimensionality of attitudes about Facebook ads, we factor analyzed the 16 statements. Table 3 shows the resulting factors and their respective loadings. Three rotated factors explained more than 63% of the variation in the data. The first factor was labeled "Trusting," with six items loaded and a Cronbach's alpha of 0.87. The second factor was labeled "Enjoying," with six items loaded and a Cronbach's alpha of 0.88. Finally, the third factor was labeled "Irritating," with four items and a Cronbach's alpha of 0.76. Similarly, Table 4 shows the results of the factor analysis on the perception and behavior statements used in the study. Perceptions appeared to have four dimensions explaining more than 62% of the variation in the data. The perception dimensions were labeled informative, willingness to rely, beneficial, and unwelcome. Three items that loaded on the behavior factor were related to consumer actions with respect to the ads. All had very good levels of reliability.
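The reliability coefficients reported for each factor above can be computed directly from the item responses. As a minimal sketch (using simulated Likert data, not the study's), Cronbach's alpha for a k-item scale is k/(k-1) times one minus the ratio of the summed item variances to the variance of the total score:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variance
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses for a six-item factor: 100 respondents
# whose answers share a common latent attitude plus item-level noise.
rng = np.random.default_rng(42)
latent = rng.normal(3.0, 1.0, size=(100, 1))
scores = np.clip(np.round(latent + rng.normal(0.0, 0.7, size=(100, 6))), 1, 5)
print(round(cronbach_alpha(scores), 2))
```

By the usual convention, an alpha above 0.7 indicates acceptable internal consistency, which the reported values of 0.76 to 0.88 all satisfy.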
Relationship between Grades and Learning Mode
Dr. John C. Griffith, Embry-Riddle Aeronautical University
Dr. Donna Roberts, Embry-Riddle Aeronautical University
Dr. Marian C. Schultz, The University of West Florida, FL
A comparison of failure rates and grade distribution was conducted between four learning modes utilized by Embry-Riddle Aeronautical University-Worldwide: Eagle Vision Classroom (synchronous classroom to classroom), Eagle Vision Home (synchronous home to home), online, and traditional classroom learning environments. Researchers examined 20,677 Embry-Riddle end-of-course student grades from the 2012-2013 academic year. Significant relationships between failing grades and learning environment (mode) were noted in courses from the English, Economics, and Mathematics disciplines. Online courses experienced more failures relative to other modes of instruction in Humanities, Mathematics, and Economics courses. The traditional classroom mode had fewer failures relative to other modes in English, Humanities, and Mathematics courses. Grade distribution was significantly different among some of the learning modes in the disciplines studied. Given continued technological advancements in course delivery, recommendations include continued research on the relationship between student performance and learning mode. Researchers should also conduct quantitative and qualitative research on faculty and student perceptions regarding learning mode preferences. Universities have deployed various types of instruction delivery systems for their students. The quality of instruction and the success rates of students have always been a concern (Johnson, 2013). As technology continues to advance and students take more courses online and through video synchronous learning modes, the question of how well students learn in these environments solicits professional attention from researchers (Lou, Bernard & Abrami, 2006). To that end, Harstinsk (2008) conducted a meta-analysis of 535 studies that indicated no significant difference in learning outcomes between traditional classroom and online modes of instruction, as measured by grades and examination results. 
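The abstract does not spell out the test behind "significant relationships," but a chi-square test of independence on a mode-by-outcome contingency table is a standard way to test such a relationship between failing grades and learning mode. A sketch with hypothetical counts (not the study's data), comparing each observed cell to the count expected if the failure rate were identical across modes:

```python
import numpy as np

# Hypothetical pass/fail counts by learning mode (illustrative only, not the
# study's actual data).
modes = ["Eagle Vision Classroom", "Eagle Vision Home", "Online", "Classroom"]
counts = np.array([[480,  20],    # columns: pass, fail
                   [450,  30],
                   [900,  90],
                   [950,  40]])

# Expected counts under independence: row total * column total / grand total.
row_tot = counts.sum(axis=1, keepdims=True)
col_tot = counts.sum(axis=0, keepdims=True)
expected = row_tot * col_tot / counts.sum()
chi2 = ((counts - expected) ** 2 / expected).sum()

dof = (counts.shape[0] - 1) * (counts.shape[1] - 1)   # (4-1)*(2-1) = 3
CRITICAL_05 = 7.815                                   # chi-square, dof=3, alpha=.05
print(f"chi2 = {chi2:.2f} (critical value at p = .05: {CRITICAL_05})")
for mode, (passed, failed) in zip(modes, counts):
    print(f"  {mode}: failure rate {failed / (passed + failed):.1%}")
```

A statistic above the critical value rejects the hypothesis that failure rates are the same across modes, which is the form of result the study reports for English, Economics, and Mathematics.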
These studies generally compared online instruction with traditional classroom instruction. Video synchronous learning was a relatively small part of online instruction in those comparisons. Dunn (2013) examined 1,600 course grades among four disciplines noting differences in student performance based on learning modes. Her study included two video synchronous learning modes, and subsequently she recommended further research using a larger sample size. Universities are offering a greater number of courses over the Internet in a synchronous mode of instruction, utilizing headsets and webcams along with traditional classroom and online instruction (Foreman & Jenkins, 2005). In light of this continuing shift, this study replicates Dunn’s (2013) earlier work, at least in concept, by examining the relationship between learning mode and student performance through analysis of 20,677 student grades. Distance learning (DL) has been generally defined as “…institution-based, formal education where the learning group is separated, and where interactive telecommunications systems are used to connect learners, resources, and instructors” (Schlosser & Simonson, 2006, p. 1). The United States Distance Learning Association (USDLA) defines Distance Learning as, “…the acquisition of knowledge and skills through mediated information and instruction, encompassing all technologies and other forms of learning at a distance” (Holden & Westfall, 2010, p. 2). While the early iterations of distance learning included correspondence courses and media delivered through the mail system, web-facilitated online learning, or e-learning, has become the popular mode of distance learning delivery. Comparisons between traditional classroom delivery and e-learning modalities indicate that location, content, and personalization represent critical difference elements. 
Location suggests that e-learning can be accessed virtually anytime and anywhere, whereas traditional classes are dependent upon certain times and locations. E-learning distinguished by content indicates that it can be implemented through audio, animation, video, simulation, online resources and communities, whereas traditional classrooms often rely on presentation slides, textbooks, and video. The element of personalization, as associated with e-learning, allows the learning pace and direction to be determined by the user, whereas traditional classrooms typically present one learning path for all students (Burgess & Russell, 2003). With the advent of new web-enhanced network and communication technologies, the delivery of e-learning has expanded to include non-traditional venues that incorporate both synchronous and asynchronous modalities. Asynchronous distance learning refers to delivery modalities that do not require real-time communication between the instructor and students, but instead attempt to bridge the constraints of time and location by incorporating independent self-study and indirect communication tools such as discussion boards, wikis, blogs, e-mail, and various multi-media. The USDLA (Holden & Westfall, 2010) cites the following characteristics of asynchronous learning environments: providing more opportunity for reflective thought; not being constrained by either time or location; delayed reinforcement of ideas; providing flexibility in delivery of content; potentially higher attrition rates; and possible extension of time for completion (p. 14). In contrast, synchronous distance learning refers to a modality that incorporates live, real-time, two-way audio and/or visual communication between the instructor and the students through the use of various technologies such as audio response systems, interactive keypad devices that support both the exchange of data and voice, and/or video-conferencing platforms. 
According to the USDLA (Holden & Westfall, 2010), synchronous learning environments have the following advantages: providing a dialectic learning environment with varying levels of interactivity; encouraging spontaneity of responses; allowing for optimal pacing for best learning retention; allowing for immediate reinforcement of ideas; controlling the length of instruction when completion time is a constraint; and being constrained by time, but not location (p. 14). With the continual advancement of technology and the development of more stable, robust venues, online learning has consistently experienced steady growth rates. According to Changing Course: Ten Years of Tracking Online Education in the United States (Allen & Seaman, 2013), the tenth annual report on the state of online learning in U.S. higher education, online enrollments have increased dramatically: the number of students enrolled in at least one online class more than quadrupled from 1.6 million in 2002 to 6.7 million as of 2011.
Returns of Human Capital Investment
Daniel Jones, Sam Houston State University, TX
Dr. Balasundram Maniam, Sam Houston State University, TX
Dr. Hadley Leavell, Sam Houston State University, TX
Technological advances have changed the way businesses operate. As the business environment changes, employers search for employees who know how to operate complex pieces of equipment and will pay good wages to those who can provide a competitive advantage to the firm. This study looks at why a company should invest in human capital and then explores which kinds of organizations benefit the most from human capital investment. The research found that service firms and firms with high capital investment benefit the most from investing in human capital. Machinery reduces the chance of human error and keeps product quality consistent time after time. However, the research presented in this study shows that skilled labor is still in high demand and can be used to complement developments in technology. Technology is becoming more and more complex and, as a result, is rapidly encroaching into aspects of the working environment previously thought to be human jobs only. Advanced pattern recognition and complex communication are examples of tasks once thought achievable only by humans, but computers can now not only compete but excel in these areas. The efficiency and effectiveness that computers create in operations mean fewer people are needed for a growing set of tasks, but a more sophisticated skillset is needed by workers to complete the remaining tasks and run more high-tech equipment (Brynjolfsson, 2011). Skilled workers are in greater demand than unskilled workers, as reflected in the decline in relative earnings for unskilled workers with less than a high school education (Koeniger & Leonardi, 2007). Employees, as well as managers, with technical skills are needed to determine the best process to ensure everything runs smoothly. 
These kinds of employees are often highly educated and have developed a certain skillset, giving them the requisite knowledge to take control when needed. Having a degree is not uncommon in the US, where the graduation rate per capita is one of the highest in the world (247Editors, 2012). However, companies are very aware that a formal education is not the only thing their employees will need to be productive in a changing production environment. This study looks at human capital investment and the returns that it brings. The objective of this paper is to look at how a firm can match human capital to its needs and to consider what a CEO or manager can do to pair human capital with social capital in order to increase returns. The trends of human capital investment will be reviewed, with the caveat that firms that are highly invested in capital should also invest in human capital. The final section of the main body of this report looks at the components of human capital investment. This paper will relate how these components impact the returns of a company and what attributes link the components together. Subramaniam and Youndt (2005) investigated how intellectual capital affects the innovative capabilities of an organization. They looked at the relationships between human capital and social capital and how these two sources work together. Their investigation relates to the article How to Invest in Social Capital (2001), which examined the different ways that employers can invest in social capital. Carmeli and Schaubroeck (2005) explored the benefits that human capital investment can have on firm performance. They found that human capital investment must be leveraged against the design and strategic orientation of the company in order to obtain the benefits of human capital investment. O'Leary, Lindholm, Whitford, and Freeman (2002) provided a look into the way technology is changing the working environment. 
They explain the way new assessments are changing the hiring process and the different processes of investing in human capital. Koeniger and Leonardi (2007) explored wage differentials in the US and Germany. Their research delves into why, when hiring costs increase, a firm will switch to fixed capital while still maintaining reasonable levels of skilled human capital. Berk, Stanton, and Zechner's (2010) research on law firms found that a firm's optimal capital structure depends on the trade-off between human costs and the tax benefits of debt. They found that wages increase at firms that are highly invested in capital, and also at firms that are highly leveraged. Almeida and Carneiro (2009) researched the returns that come from on-the-job training and off-the-job training. They found that formal job training is a good investment for increasing returns. Kessler and Lülfesmann (2006) looked at the returns of job training and found that employers will only invest in firm-specific training, not general training, when labor markets are competitive. Kor and Leblebici (2005) looked at the resource-based view of the firm. Their research explored the returns obtained from lateral hiring compared with training junior law associates into employees with a wealth of specific knowledge. Blundell, Dearden, Meghir, and Sianesi (2005) provided a review of the evidence on the returns to employee training and education for the firm and the economy at large. Similarly, Ballot, Fakhfakh, and Taymaz (2006) looked at who benefits the most from training, whether it be the employer, the employee, or a future employer. Fedderke (2004) looked at the effect the South African education system has had on the economy's growth, finding that poor educational policies and underinvestment in human capital may have caused companies' lack of productivity and the slow growth of the economy. 
Sianesi and Reenen (2003) explored the returns of investment in education in developed and underdeveloped countries. They found that secondary education and tertiary education produce different levels of benefits and returns. Stewart and Ruckdeschel (1998) explained how intellectual capital can bring a competitive advantage to an organization. Detail was given about the changing technological environment and how the different knowledge coming from managers and workers is more important now than ever. Exploring technological innovations further, Brynjolfsson (2011) dispelled the theory that increased investments in fixed capital mean reduced investments in labor. The research explored the idea that increases in technology should be paired with skilled labor to increase production capacity and efficiency. Hitt, Biermant, Shimizu, and Kochhar (2001) conducted a study to examine the effects that human capital investment has on professional service firm performance. The results coincide with the resource-based view of the firm, showing that human capital is important when creating a strategy for competitive advantage. Bontis and Fitz-enz (2002) researched the financial services industry to find how resources should be put into human capital management.
Insider Trading Direction and Optimal Wage Design
Dr. A. Can Inci, Bryant University, Smithfield, RI
This paper uses the insider trading direction as a signal to design an optimal wage contract in which the principal-agent problem due to moral hazard is resolved. Insider trading provides the corporation with important information about the action of the manager. It is a tough challenge for the owners of a corporation to identify managers who will work hard and not be overly risk averse in their choices of project opportunities. Observation of the insider trading direction (buy or sell) may help to sort out superior managers. Distinguishing the quality of managers by following their insider trading activity would contribute to the design of an effective compensation scheme for those managers, as proposed and developed in this paper. Insider purchase activity rewards those managers who add value to the firm through their superior effort, better and more accurate decisions, proper risk assessments, and willingness to take justifiable risks. Such managers will also agree to compensation schemes partially based on their insider trading activity, since their interests will be most closely aligned with the interests of the owners of the firm. The paper presents a sound theoretical framework and develops a model for an optimal wage contract that can be utilized by the owners of the corporation. Insider trading profits are conjectured to have increased in recent years after the implementation of Rule 10b5-1. Therefore, a wage contract incorporating insider trading activity has become more relevant. A vital aspect of the corporate governance of a corporation is the compensation structure of its managers. Executive compensation has been investigated extensively in the finance and economics literature, and a detailed review is provided in Core et al. (2003). 
In this paper, I use insider trading activity to design an optimal managerial contract where the managers can engage in a continuous spectrum of firm actions, while their trading signals appear as a finite set of 'buy' or 'sell' trade decisions. The principal, or the firm owner, wants the insider-manager, as the agent, to perform in the best interests of the shareholders and wants to establish a compensation schedule that reflects the agent's efforts. The problem, as in all agency models, is that the action, i.e., the effort exerted by the manager, is not observable. The signal utilized by the firm owner in this study is the manager's insider trading activity. Studies such as Lakonishok and Lee (2001) and Garfinkel et al. (2007) find that information-based trades have predictive power. These studies demonstrate the usefulness of the insider trading signal for firm owners in adjusting insider compensation over time. The agent exerts a certain level of effort and is well aware of the worth and the impact this action will have on the project and on the firm. Hard work and a high quality of activity are conjectured to benefit the firm and are therefore considered positive signals. Anticipating the positive impact on the firm, the insider will engage in a purchase transaction. The transaction will ultimately lead to significant market-adjusted profit for the insider. On the other hand, if the insider exerts just enough effort for the assigned responsibilities and does not devote extra energy or effort, this will not have any impact on the firm. The insider will anticipate the neutral performance of the firm and will not trade the shares of the firm. Finally, if the insider completely shirks all responsibilities and exerts no effort on the assignments, the conjecture is the total failure of the project and a clear negative impact on the firm. In anticipation of this negative outcome, the insider sells shares of the firm. 
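The effort-to-signal mapping described above can be made concrete with a toy numerical sketch. This is not the paper's formal model; all payoffs, effort costs, and wages below are hypothetical. The point it illustrates is incentive compatibility: if the wage attached to the 'buy' signal exceeds the cost of the high effort that induces it by a large enough margin, high effort becomes the agent's best response.

```python
# Toy illustration of a signal-contingent wage schedule (hypothetical numbers,
# not the paper's formal model). Each effort level is conjectured to induce a
# particular trade signal, and the wage is conditioned on that signal.

EFFORT_COST = {"high": 8.0, "normal": 3.0, "shirk": 0.0}          # agent's cost
SIGNAL = {"high": "buy", "normal": "no-trade", "shirk": "sell"}   # induced trade
WAGE = {"buy": 20.0, "no-trade": 10.0, "sell": 2.0}               # owner's offer

def agent_utility(effort: str) -> float:
    """Wage received under the signal this effort induces, net of effort cost."""
    return WAGE[SIGNAL[effort]] - EFFORT_COST[effort]

best = max(EFFORT_COST, key=agent_utility)
print({e: agent_utility(e) for e in EFFORT_COST})
print("agent's best response:", best)
```

Under these hypothetical numbers, high effort yields net utility of 12 versus 7 for normal effort and 2 for shirking, so the signal-contingent schedule aligns the agent's choice with the owner's interests.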
Therefore, the paper addresses the specific issue of an optimal wage structure constructed from insider trading activity. The theoretical framework developed in this paper will demonstrate the possibility of such a compensation schedule. The observable insider trading activity can be used as a signal, or proxy, for the unobservable managerial action, and the principal-agent problem can be resolved; thus, an effective compensation scheme can be achieved. This idea has been examined from other perspectives in the literature. O'Hara (2001) provides an in-depth analysis of how insider trading may possibly be organized in firms as part of the overall compensation package for managers. Bhattacharyya and Nanda (2011) develop a model of trading by an informed fund manager compensated on the basis of her fund's returns, which induces the manager to invest more in the fund's assets. Von Thadden (1995) and Cohen et al. (2012) discuss the effects of short- and long-term compensation schemes and document the differences. Inci (2012) finds that managers with shorter tenure rely more on insider profits as part of their compensation. Roulstone (2003) investigates firm-imposed insider trading restrictions and executive compensation: firms with higher restrictions pay premiums in total compensation compared with firms with lax restrictions. He reports that insider trading plays an implicit role in rewarding and motivating executives. Agency problems cause incentive contracts to have a short-term component, particularly when managerial tenure is stochastic. Therefore, there are a number of ways of rationalizing our assumed form of the compensation function as an optimal compensation scheme. An examination of the corporate charters of randomly selected companies revealed that at least half of those firms do not mention either insider trading or the misuse of confidential information (Seyhun, 1992a). 
This evidence is consistent with the interpretation that shareholders do not desire additional restrictions on insider trading. Carlton and Fischel (1983) document that shareholders do not restrict insider trading in company codes of ethics or employment contracts. The majority of corporations do not appear to be concerned with insider trading. These are indications that corporations can utilize the insider trading behavior of managers as part of the wage compensation structure.
Using Login Data to Monitor Student Involvement in a Business Simulation Game
Dr. Michael Jijin Zhang, Sacred Heart University, Fairfield, CT
While student involvement in business simulation games is critical to student learning and performance in the games, monitoring student involvement levels remains a challenge facing those who teach strategic management with simulations. This study examined and tested whether student login frequency and consistency may serve as valid proxy measures of student involvement in the game activities, using data collected from 219 undergraduate business students who participated in a business simulation game (Capstone). It was found that student login consistency had a stronger relationship with student involvement than student login frequency did, thereby representing a better measure of student involvement. Research and pedagogical implications of these findings are discussed. Today, business simulation games have become a popular and effective tool to teach strategic management (Jennings, 2002; Zantow et al., 2005; Faria et al., 2009). In order for students to perform well in and learn from a simulation, they must be actively involved or engaged in the decision making activities of the simulation (Wolfe & Luethge, 2003). Since students participate in simulation games with different interests, expectations, motivations, abilities, and learning styles, it is necessary for instructors to monitor each student's involvement in the simulation effectively and efficiently so as to provide timely interventions when needed. This is especially important given that most business simulation games are played in teams. While monitoring individual students' engagement in a simulation has traditionally posed a challenge for instructors (Wolfe & Luethge, 2003), today's business simulation games are often run online and thus make it easier to track student activities in the games through automatic collection of data about how often a student logs into the game to make decisions. 
The research question the present study sought to address is whether certain student login data represent valid proxy measures of student involvement in the simulation. Specifically, the study explored the potential relationships between two types of student login data (login frequency and login consistency) and student involvement levels in the simulation, using data collected from a sample of students participating in a web-based business simulation game (Capstone). Investigating these relationships may inform us about the potential pedagogical value of using student login data to track student engagement in the simulation activities and improve student performance and learning in the games. The development of two easy-to-obtain proxy measures of student involvement may also facilitate future research on the learning and performance effects of student involvement as well as its potential role in mediating the relationships between certain personal or external factors and student learning or performance. Following the conventional wisdom that participation in educational games increases learning (Randel et al., 1992, Wolfe & Luethge, 2003), student involvement in the decision making activities of business simulation games is presumed to influence student learning of strategic management knowledge and skills from the games. After all, it is hard to imagine a student would learn a lot about strategic management without some experience in contemplating and making strategic decisions. Yet students come to business simulation games with different interests, attitudes, motives, expectations, skills and abilities, and learning styles (Coffey & Anderson, 2006). The individual differences may result in varying levels in individual students’ efforts and learning outcomes even if students perform well in groups (Wolfe & Luethge, 2003). 
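This excerpt does not give the study's exact operationalizations, so the sketch below shows one plausible way to derive the two login measures from raw login records: frequency as a raw login count, and consistency as the share of decision rounds in which the student logged in at least once. All identifiers and data here are hypothetical.

```python
# Hypothetical login records as (student_id, decision_round) pairs; the study's
# exact operationalization of the two measures is not given in this excerpt.
ROUNDS = 8  # assumed number of decision rounds in the simulation
logins = [("s1", 1), ("s1", 1), ("s1", 2), ("s1", 5),
          ("s2", 1), ("s2", 2), ("s2", 3), ("s2", 4),
          ("s2", 5), ("s2", 6), ("s2", 7), ("s2", 8)]

def login_frequency(student: str) -> int:
    """Total number of logins across the whole simulation."""
    return sum(1 for s, _ in logins if s == student)

def login_consistency(student: str) -> float:
    """Share of decision rounds with at least one login."""
    active_rounds = {r for s, r in logins if s == student}
    return len(active_rounds) / ROUNDS

for s in ("s1", "s2"):
    print(s, login_frequency(s), round(login_consistency(s), 2))
```

Under this operationalization, a student who crams many logins into a few rounds scores high on frequency but low on consistency, which is exactly the distinction the study's finding about consistency being the better involvement measure turns on.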
Indeed, the extant literature on the conditions for effective learning in business simulation games has shown that student learning at the individual level is subject to the influence of certain personal traits or characteristics (Faria, 2000; Adobor & Daneshfar, 2006; Coffey & Anderson, 2006; Towler et al., 2008). From a survey of 1,967 students who had participated in a popular business simulation game (Capstone), Coffey and Anderson (2006) found that the students with higher achievement motivation perceived greater learning value from the simulation experience. In their study of factors affecting the effective use of another popular business simulation game (Business Strategy Game), Adobor and Daneshfar (2006) reported that students who perceived the simulation as reflective of real-life situations obtained higher levels of learning of key strategic management skills. Towler et al. (2008) even found that age and gender accounted for some of the variance in the learning outcomes students obtained from playing the Business Strategy Game. External factors such as team size, team dynamics, and instructor support have also been shown to have a bearing on student learning from business simulation games (Wolfe & Chacko, 1983; Washbush & Gosen, 2001; Snow et al., 2002; Anderson, 2005; Adobor & Daneshfar, 2006; Coffey & Anderson, 2006). For example, Wolfe and Chacko (1983) investigated the team size effect on individual learning and found that students playing in teams of three to four members increased their knowledge of strategic management concepts and facts more than those working in smaller teams. Adobor and Daneshfar (2006) found that task conflict in the team (i.e., the degree of idea exchange) increased learning of individual team members, while emotional conflict in the team reduced individual learning. 
Besides team size and processes, Coffey and Anderson (2006) observed in their study that students felt more positive about the learning value of the simulation if the instructor was more helpful and knowledgeable. In their investigation of the performance effects of two different manners of introducing a business simulation game to the students, Snow et al. (2002) found that students perceived the simulation experience as more useful and effective if the simulation was integrated into the course throughout the term instead of being treated as a stand-alone experience.
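The two proxy measures under study (login frequency and login consistency) are straightforward to derive from raw login timestamps. The sketch below is illustrative only: the function names, and the choice of the standard deviation of inter-login gaps as a consistency measure, are our assumptions, not the study's actual operationalization.

```python
from datetime import datetime
from statistics import pstdev

def login_frequency(timestamps, period_days):
    """Average number of logins per day over the simulation period."""
    return len(timestamps) / period_days

def login_consistency(timestamps):
    """Spread (population std dev, in days) of the gaps between
    consecutive logins; lower values mean more evenly spaced logins."""
    ts = sorted(timestamps)
    gaps = [(b - a).total_seconds() / 86400 for a, b in zip(ts, ts[1:])]
    return pstdev(gaps) if len(gaps) > 1 else 0.0

# A hypothetical student who logs in every Wednesday for five weeks:
logins = [datetime(2014, 1, d) for d in (2, 9, 16, 23, 30)]
print(round(login_frequency(logins, period_days=28), 3))  # 0.179 logins/day
print(login_consistency(logins))  # 0.0 (perfectly regular weekly gaps)
```

On this view, two students with the same login frequency can differ sharply in consistency, which is why the study treats the two as separate proxies.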
Cognitive Moral Development – its Relevance to Authentic Leadership and Organizational Citizenship Behavior: A Conceptual Illustration
Brad Nikolic, RMIT University, Melbourne, Australia
This conceptual paper portrays the importance of cognitive moral development (CMD) and its potential influence on authentic leadership and organizational citizenship behavior. The aim is to contribute to the limited body of research on cognitive moral development and authentic leadership. First, we introduce a conceptual model, which illustrates potential linkages between cognitive moral development, authentic leadership and organizational citizenship behavior. We then clarify the constructs of CMD, authentic leadership and organizational citizenship behavior. We draw upon extant research findings to demonstrate and discuss linkages between cognitive moral development, authentic leadership, and organizational citizenship behavior with several testable propositions. The growing interest in authentic leader behavior and business ethics since the 1980s has arisen from a continual stream of dubious corporate behaviors and scandals, which have established underlying doubts in the minds of all layers of contemporary society about corporate practices and leadership decisions (Ferrell, Fraedrich, & Ferrell, 2009; Gardner, Cogliser, Davis, & Dickens, 2011). This has eroded the confidence and esteem in which corporate leadership was held, replacing them with a readiness across all layers of contemporary society to question, in the first instance, whether there are genuine intentions and integrity in corporate leadership and decision making (Hannah, Avolio, & May, 2011a). Society now expects its leaders to act significantly more professionally: to behave in alignment with core values, to show genuine concern for people’s real needs, and to ensure that their behaviors and decisions address the greater good (Gardner et al., 2011; Leroy, Anseel, Gardner, & Sels, 2012a; McShane & Cunningham, 2011; Peus, Wesche, Streicher, Braun, & Frey, 2012).
There are greater demands for internal and external scrutiny at all levels of business operations (Clapp-Smith, Vogelgesang, & Avey, 2009; Diddams & Chang, 2012). This has given rise to the notion that corporate behaviors need to be real and true. In other words, they need to be authentic: not pretended, and in no way misleading or deceptive in either intention or action. Authenticity also entails the recognition and alignment of inner values with actions in order to develop a moral framework. To achieve authenticity and develop a moral framework, a leader requires an advanced level of moral development (Gardner, Avolio, Luthans, May, & Walumbwa, 2005; Walumbwa, Avolio, Gardner, Wernsing, & Peterson, 2008). Linkages between the construct of cognitive moral development and different leadership styles have been found to affect subordinates’ behavior (Jordan, Brown, Trevino, & Finkelstein, 2013; Turner, Barling, Epitropaki, Butcher, & Milner, 2002). Further, it has been shown that levels of cognitive moral development influence subordinates’ citizenship behavior (Street, 1995). In particular, research has shown that authentic leadership has various benefits for personnel within the organization as well as for the organization itself, including enhanced levels of organizational citizenship behaviors (Gardner et al., 2011). Therefore, the aim of this conceptual paper is to illustrate the benefit of the construct of cognitive moral development to authentic leadership and organizational citizenship behavior, and thereby to contribute to the growing body of knowledge on cognitive moral development and authentic leadership. This conceptual paper, as demonstrated in Figure 1, identifies that cognitive moral development has a direct influence on followers’ perception of authentic leadership and on organizational citizenship behavior.
Further, this paper proposes a direct relationship between authentic leadership and organizational citizenship behavior. Additionally, the paper proposes that the relationship between cognitive moral development and organizational citizenship behavior will be mediated by authentic leadership. In the following section, we begin by clarifying the concept of cognitive moral development (CMD), followed by authentic leadership and organizational citizenship behavior. Then, we discuss the relevance of CMD to authentic leadership and organizational citizenship behavior. We then discuss the impact of authentic leadership on organizational citizenship behavior. Additionally, we argue for the benefit of CMD to organizational citizenship behavior. Finally, we examine the potential relationship between cognitive moral development, authentic leadership and organizational citizenship behavior. Kohlberg’s (1969) theory of cognitive moral development describes a maturation process of the cognitive capabilities that an individual applies when reasoning about ethical matters (Crain, 2011; Kohlberg, 1984; Trevino, 1992). Kohlberg’s construct has become a dominant construct within the domain of business research, applied in empirical research to investigate individuals’ decision-making processes and to better understand their moral reasoning (Kish-Gephart, Harrison, & Treviño, 2010; Krebs & Denton, 2005; Rest, Narvaez, Bebeau, & Thoma, 1999). Notably, the construct of cognitive moral development has been found to have an impact on ethical decision-making processes within an organizational context (Ashkanasy, Windsor, & Trevino, 2006; Chang & Yen, 2007; Dukerich, Nichols, Elm, & Vollrath, 1990; Fraedrich, Thorne, & Ferrell, 1994; Kish-Gephart et al., 2010; Trevino, 1992).
Investigation of Buyer-Supplier Trust, Behavior and Performance
Dr. Hui-chuan Chen, University of Tennessee at Martin, Martin, TN
Dr. Taeuk Kang, University of Tennessee at Martin, Martin, TN
As a relationship becomes tighter, suppliers often seek more stable processes to improve efficiency in meeting buyers’ demands. An important element in reaching supply chain efficiency involves establishing and developing trust across organizational boundaries via alliances between buyers and suppliers in a supply chain. When buying firms recognize that a certain supplier can reduce transaction costs, this recognition will encourage the buyer to invest more in the buyer-supplier relationship and yield a more cooperative relationship. A stable relationship between supplier and buyer has been recognized as being fundamental for firms to have successful performance. During the 1980s, research studies mainly focused on buyer-supplier relationships in operational and integration-based performance. In the 1990 to 2000 timeframe, capability-based and financial performance became major areas of research for supply chain management. Also, buyer-supplier mutual effects on buyer practices were often studied (Terpend et al., 2008). How well the supply chain functions as a whole depends on the success of the individual firms in achieving customer satisfaction and loyalty. Over the years, some firms have endeavored to secure these competitive advantages by reducing costs and refocusing on their core competencies, downsizing their workforces in the process. The markets in which firms participate are often affected by rapid technological change, customer demands, and shorter product life cycles. Therefore, companies have increasingly outsourced the purchase of goods and services that were formerly produced in-house. When firms focus on core competencies, downsizing and outsourcing frequently require suppliers to provide timely delivery of quality products at competitive prices.
Consequently, buying firms increase their dependence on suppliers, so that buyers will eventually manage and develop supply chains in the areas of delivery, quality, new technology adoption, product design, cost reduction and the financial health of their suppliers (Krause et al., 1998). A stable relationship between a supplier and a buyer has been recognized as being fundamental for both firms to have successful performance. In such relationships, a supplier achieves stable financial performance via the buyer’s performance, while a buyer utilizes its resources by depending on the supplier’s ability. However, as the relationship becomes tighter, the supplier will try to have more stable and efficient processes in order to meet the buyer’s demands. If the relationship is weak, the supplier might not provide adequate resources and stable processes. In such a situation, suppliers might reduce flexibility, thereby increasing risk in the process. Thus, suppliers modify their overall process for either flexibility or efficiency. Hence, an important element in reaching supply chain efficiency requires establishing and developing trust across organizational boundaries, as seen in relationships including alliances between buyers and suppliers in a supply chain. Empirical researchers have studied how trust in buyer-supplier alliances impacts the performance of inter-organizational behaviors. Therefore, studies often investigate the potential inter-organizational behaviors within these relationships and whether such trust behaviors affect performance outcomes (Johnston et al., 2004). Furthermore, the buyer-supplier relationship needs to focus on supply chain management, supporting the concept that trust, commitment, and long-term cooperation are significant elements of effective buyer-supplier relationships and outcome performance (Cannon et al., 2010; Johnston et al., 2004; Trautmann et al., 2009).
When buying firms realize that an ideal supplier can reduce transaction costs, this recognition will encourage them to invest more in the buyer-supplier relationship and yield a more cooperative relationship. The coordination might take the form of personnel and information exchange or the possibility of capital investment (Carr and Pearson, 1999). In the following sections, we provide the literature review, propositions, suggestions for research, and discussion. Many buying firms seek to improve the flow of processes and reduce material requirements by working with a smaller number of suppliers and delegating product design and production coordination to the suppliers. Buyer firms recognize that an upstream supplier’s operations can affect the buyers’ downstream customers (Wu et al., 2010; Youngdahl et al., 2008). A lack of trust among buyers and suppliers frequently results in ineffective and inefficient performance, as transaction costs increase due to verification, inspections and certification of the trading partners. As such, every transaction must be scrutinized and verified. Thus, scholars have determined that trust has a significant effect on lowering transaction costs. Trust is commonly defined as a willingness to take risk. Trust results when one party has confidence in an exchange partner’s reliability and honesty (Morgan and Hunt, 1994). Several types of trust exist, e.g., trust at the contractual, goodwill and competency levels; moreover, trust is inherent in communication, informal agreements, and the undertaking of coordination (Johnston et al., 2004). The benefits of trust are observed when the partners believe that each will perform actions that produce positive outcomes for both parties, as well as avoid unexpected actions that generate negative consequences (Anderson and Narus, 1990).
Kwon and Suh (2004) noted that the literature regularly mentions an association between trust and commitment, but there is still a lack of empirical testing of such associations from the supply perspective. When a high level of trust and a strong commitment are established among supply chain partners, effective supply chain performance requires sharing information. If the partners’ trust is broken, productivity will be lost and efficiency and effectiveness will be compromised. Kwon and Suh (2004) further mentioned that many strategic alliances have failed due to a lack of trust among supply chain partners. Trust has received considerable attention in the buyer-supplier relationship as an operational construct. Additionally, trust has been consistently cited as a predictor of cooperative behavior in interpersonal relationships and negotiation theory (Ring and Van de Ven, 1994). Nevertheless, trust is not a required input to cooperation. Ideally, trust is both a pre-condition and an outcome which should be developed to form the relationship. This relationship is built on frequent face-to-face contact, as well as the sharing and exchanging of essential information with continuing commitment from both parties (Johnston et al., 2004). Information sharing and exchanging may require the release of protected financial, strategic and other operating information to partners. However, effective information exchange depends greatly on initial trust within the company, which eventually extends to supply chain partners (Bowersox et al., 2000). If one of the partners is not prepared to share the available information, the value of trust declines greatly. When both commitment and trust are present in both partners, this trust can produce outcomes that support efficiency, effectiveness and productivity. Hence, trust is seen as the basis of an ideal strategic alliance.
Finally, some studies reveal that if supply chain partners share information openly and take a long-term view of the relationship, a decrease in opportunistic conduct results (Kwon and Suh, 2004).
Massive Open Online Courses (MOOCs): Theoretical and Practical Considerations for Knowledge Management
Dr. Martin Grossman, Professor, Bridgewater State University, Bridgewater, MA
The Massive Open Online Course (MOOC) has emerged as a potentially disruptive force in higher education. Today there are hundreds of MOOCs available and millions of individuals around the world participating in these open and unlimited web-based courses. Much of the discourse surrounding the MOOC phenomenon questions its efficacy in replacing or supplementing traditional face-to-face formats in colleges and universities. Detractors point to such issues as low retention rates, difficulties in classroom management and dubious methods of grading and assessment. While these might be legitimate concerns, the discussion often loses sight of the more far-reaching impacts that such open environments might provide for knowledge creation and learning in more generalized populations. Attention needs to be paid to the potential that MOOCs might play in non-academic environments (i.e. personal, corporate and global) for knowledge creation and dissemination. This article traces the evolution of the MOOC, explores its theoretical foundations and stresses its potential as a practical knowledge management tool. It is argued that MOOCs may have a more far-reaching role to play and that the true value of MOOCs has yet to be realized. Perhaps the most hyped educational technology to have emerged in recent years is the Massive Open Online Course (MOOC), simply defined as ‘a course of study made available over the Internet without charge to a very large number of people’ (www.oxforddictionaries.com). MOOCs incorporate traditional materials (e.g. videos, reading assignments, problem sets) as well as interactive elements for building communities of professors, students and teaching assistants. As a pedagogical innovation, the MOOC has become a topic of much discussion in U.S. educational circles, being touted as a disruptive force and radical game changer. MOOCs have been embraced by leading institutions such as MIT, Harvard and Stanford, and by venture capitalists alike.
Commercial MOOC start-ups, such as Coursera and Udacity, feature professors from top universities and deliver courses with tens of thousands of students. The MOOC has become a frequent topic in the popular press (Pappano, 2012) as well as a growing area for academic research (Liyanagunawardena et al., 2013). In spite of the enthusiasm and hyperbole around this new technology, there are some obvious concerns. Among the commonly cited criticisms of commercially available MOOCs are their low completion rates, their undetermined efficacy in fostering deep learning, and questions about the authentication of certificates and grading systems. MOOCs are still an immature technology and many of the issues pertaining to their place within the educational system will take time to resolve. This should not obscure their current potential as powerful tools to generate and share knowledge in a variety of other contexts. This paper traces the evolution of the MOOC concept in some detail, exploring the underlying theoretical concepts as they apply to knowledge management and learning theory. Further, it explores how MOOC technology may be practically applied in non-academic settings, i.e., for personal, corporate and global knowledge management. The notion of distance education has a long history which can be traced back to early attempts in the pre-Internet era, utilizing such distribution methods as mail and radio. The MOOC idea emanates from the more recent concept of open educational resources (OER), a movement which became popular in the early 1990s. Fundamental to OER is the breaking down of educational content into ‘learning objects’ that can be easily assembled and reused in a number of different formats and learning contexts (Wiley, 2006). Such chunks of instructional material (e.g. video clips, courseware, exercises, etc.) are delivered over the Internet independent of time and place constraints.
Furthermore, the platform allows a wide range of individuals to access the materials and to remix them to meet individual needs. The global OER movement gained significant momentum with the emergence of MIT’s OpenCourseWare project, which placed its entire course catalog online in 2002 (Guttenplan, 2010). Over the next five years a number of similar initiatives launched by international organizations such as UNESCO, OECD, and the Open Society Institute, emphasized the need for governments and publishers to make educational materials available to the public via the Internet at no charge, further propelling the OER movement. Today there are many open educational resources available (e.g. full courses, modules, open textbooks, videos, software, etc.) and a multitude of programs worldwide promoting their distribution (e.g. OER Commons from the Institute for the Study of Knowledge Management - ISKME (http://www.oercommons.org/), Connexions from Rice University (http://cnx.org/) and the National Council of Educational Research and Training (http://www.ncert.nic.in/index.html)). The genesis of the MOOC phenomenon can be attributed to the works of a handful of Canadian academics involved in educational technology and learning theory. In 2008, George Siemens and Stephen Downes offered a large online course called Connectivism and Connective Knowledge (CCK08), which both presented and adhered to the philosophical principles of connectivism. The course was fully open and could be followed online for free. The essential principle in this environment is that learning takes place by connecting and building relevant networks to construct knowledge. The term MOOC, coined around the same time as the course’s debut, reflects the underlying philosophy behind this experiment: ‘Massive’ referring primarily to the number of students and the scope of the course’s activities, and ‘Open’ referring to everything from the open source software to the registration processes and the courseware itself.
After the unveiling of CCK08, a number of other early entrants offered similar courses (later to become known as cMOOCs), emphasizing openness, loosely structured formats (e.g. blogs and social media) and a ‘connectivist’ approach in which the teacher was more a ‘guide on the side’ as opposed to a ‘sage on the stage’. The following years saw a major change in the MOOC scene, as three of the most prestigious universities in the U.S., MIT, Harvard and Stanford, got involved. In 2011, Stanford introduced a number of MOOCs on Artificial Intelligence and Machine Learning. This effort ultimately led to the development of two start-up companies, Coursera and Udacity, by the professors teaching these courses. Shortly thereafter, MIT teamed up with Harvard to start the non-profit MOOC platform, edX. 2012 was subsequently deemed the ‘year of the MOOC’ by the N.Y. Times.
China’s One-Child Policy: Has it Lived Beyond its Intended Mission?
Dr. Prema Nakra, Professor, Marist College, Poughkeepsie, NY
Johanna M. Korby, Mental Health Counselor, Poughkeepsie, NY
Concerns about global population growth have been debated among academics, economists, and politicians for over 30 years. Issues related to uncontrolled population growth include its impact on natural resource depletion, poverty and social unrest, and the possibility of eventual extinction (Bandarage, 2006). Environmental and scientific organizations including United Nations agencies have issued warnings of increased social problems and irreversible environmental degradation if current population growth rates continue. While debate regarding the ill effects of overcrowding on the planet earth has continued, the People’s Republic of China (China) took matters into its own hands by introducing a population control policy over 30 years ago. This article provides an overview of China’s “one-child” policy and highlights its unintended consequences and long term implications for the country and the world. At the inception of the communist Beijing regime in 1949, China was flourishing. China’s attitude of "more children, more affluence" led to a baby-boom and consequent shortages in food supplies, housing, medical services, and educational facilities. By 1979, China was home to a quarter of the world’s population, approximately two thirds of which was under the age of 30, and China’s baby boomers of the 1950s and 1960s were beginning to enter their reproductive years. Officially presented in 1980 as a voluntary-based birth control program, the State mobilized a family planning policy advocating that each couple have only one child, with exceptions requiring prior government approval (Settles et al., 2008). Although leadership’s original goal was to bring China’s population growth into balance with social and economic development, resources, and the environment (Richards, 1996; Vogel, 2011), the policy has remained in place for 32 years.
In 2002, China formally established its family planning policy as a stable, long term approach to population control through publication of its Population and Family Planning Law. As per this policy, the State sanctions only one child per couple. Implementation of this policy involves strict administrative controls regarding residential registration, certificate of birth approval, and birth certification. Financial and material incentives, community-based contraceptive delivery services, planned parenting information, educational and motivational activities including free abortions and sterilizations, continue to be provided. In rural villages, where 53% of China’s population lives, the Chinese government’s one-child policy requires examining women for use of contraceptive rings, pregnancy, and illness four times a year. More than 300,000 officials are employed to enforce the policy, and they receive financial incentives to meet abortion and sterilization quotas (Cao, Tian, Qi, Ma & Wang, 2011; Hesketh & Wang, 2011). The policy is most strictly enforced among urban residents and government employees, with exceptions allowing for a second child if the first is disabled or if both parents are only children or work in high risk jobs. Families living in rural areas are allowed a second child five years after the first if the first is a female child. A third child is sometimes allowed in more remote, under-populated areas and among ethnic minorities. As of 2007, 19 of China’s 31 provinces allowed rural residents to have a second child if the first child is a girl (Reid, 2009). When the policy was introduced, the government set a target population of 1.2 billion by the year 2000. The census of 2000 recorded a population of 1.27 billion. In March 2006, the head of China’s National Population and Family Planning Commission announced that China’s family planning policy had helped prevent 400 million births since its inception.
Within a short period of time, China’s Total Fertility Rate (TFR) pattern had come to match those of more developed countries. China’s live birth ratio has decreased from 2.0 to 1.6 children per woman. Today, China has one of the lowest fertility rates in the world, as reflected in Exhibit 1 below (Boer and Hudson, 2008; Anonymous, 2011). Since 1979, China has made significant improvements in living standards, maintained strong economic growth, and initiated market reforms. With a population control policy in place, China focused on economic transformation. During the past 30 years China rebuilt its economy by opening its industrial sector to foreign investment, privatizing state-owned industries, and expanding higher education. As a result, China experienced an average growth rate of 8.69% per capita GDP from 1978-2008, as compared to 1.90% in the United States, 1.84% in Europe, 2.07% in Japan, and 3.82% in India. Today, China is the world’s second largest economy in terms of GDP with a consistent growth rate of 10% per year and, according to World Investment Report (2010), is set to assume the number one economic position in the world by the year 2020 (Wei & Hao, 2010). Implementation of China’s one-child policy has not occurred without negative outcomes. Serious societal problems resulting from this policy discussed in this paper include: sub-replacement fertility rates, aging population, gender imbalance, increase in infanticide, sex-selective abortions, human trafficking, abductions, and widespread mental health and other social problems. Fertility rates represent the average number of children born per woman if all women live to the end of their childbearing years and bear children according to a given rate at each age. Global fertility rates are in decline, and populations in industrialized countries, especially in Western Europe, are projected to decrease dramatically over the next 50 years.
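The growth rates quoted above compound to very different outcomes over the 1978-2008 period. A minimal sketch of the arithmetic, using only the rates given in the text (the function name is ours):

```python
def compound_growth(rate_pct, years):
    """Multiplier on per-capita GDP after compounding annually."""
    return (1 + rate_pct / 100) ** years

# Per-capita GDP growth rates quoted for 1978-2008 (30 years)
print(round(compound_growth(8.69, 30), 1))  # China: ~12.2x
print(round(compound_growth(1.90, 30), 1))  # United States: ~1.8x
```

Compounding at 8.69% roughly triples output every 13 years, which is why three decades at that rate dwarfs the U.S. trajectory.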
Fertility rates and sex ratios in ten economies as measured by Gross Domestic Product (PPP) are illustrated in Exhibit 2 below. A stable population replacement rate in any society requires an average of two children per woman. According to the Census Bureau, China’s total fertility rate is about 1.5 or 1.6 children per woman, or 30 percent below the level needed to maintain long term population stability. China also has one of the world’s lowest “dependency ratios,” with roughly three economically active adults for each dependent child or old person (Eberstadt, 2010; Neim, 2011). As fertility rates declined and life expectancy increased, China began to see a demographic shift in favor of an aging population. In 2010, the population aged 60 years and above accounted for 8.8% of the total population. By 2035, approximately 25% of the population will be over the age of 60. An aging population will place heavier pressure on the country’s health care system. Additionally, since Chinese families depend on children to take care of their needs during old age, single children will find it challenging to care for elderly parents (Righter, 2009; Hudson and Boer 2009). Economists project that the working population (aged 15-29) in China will decrease by approximately 100 million people over the next 20 years, and will continue to decline by 10 million annually after 2025. In the next decade alone, the number of young people between ages 20-24 will drop by one-fourth (Roberts, 2006; Eberstadt, 2010). Since China’s business model is based on an abundant supply of labor, China will face significant skilled labor shortages in years to come due to its aging population.
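The dependency ratio mentioned above (roughly three economically active adults per dependent) is conventionally expressed as dependents per 100 working-age adults. The sketch below uses round illustrative population shares, not census figures:

```python
def dependency_ratio(dependents, working_age):
    """Dependents (children plus elderly) per 100 working-age adults."""
    return 100 * dependents / working_age

# Illustrative split: three working-age adults per dependent,
# as described for China in the text.
print(round(dependency_ratio(dependents=25, working_age=75), 1))  # 33.3
```

As the share of over-60s rises toward the projected 25% by 2035, the same formula implies the ratio climbs steeply even if the child share stays flat.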
What is the Sustainability Content of Marketing Textbooks?
Dr. Constance Bates, Florida International University, Miami, FL
Dr. Deanne Butchey, Florida International University, Miami, FL
Even though students seem to know what sustainability means in general, we might ask how much students are learning about sustainability in college. This article examines marketing textbooks to see how much space is devoted to sustainability. The study design is to conduct a content analysis of eight principles of marketing texts. Results are presented for the total number of sustainable words used, the average words per page, as well as sustainable words in the table of contents and index. The results show a current snapshot of the space devoted to green topics in a required, core text. This is a proxy to determining how well sustainability topics are being integrated into the marketing curriculum. Most business students are familiar with the concepts of green and sustainability. They learned about protecting the environment in elementary school. Most are concerned about the environment, recycle, and want to have more sustainability content in their business classes (Silverblatt, Bates, & Kleban, 2012). The question is: how are business professors responding? Are they adding green content to their courses? Are they using green cases? Have our business students been exposed to the issues of managing sustainable projects? This paper focuses on marketing in particular as many businesses have identified this area as one with many opportunities to go green: product design, packaging, and shipping, just to name a few.
Morelli (2011) defines environmental sustainability as “meeting the resource and services needs of current and future generations without compromising the health of the ecosystems that provide them, ...and more specifically, as a condition of balance, resilience, and interconnectedness that allows human society to satisfy its needs while neither exceeding the capacity of its supporting ecosystems to continue to regenerate the services necessary to meet those needs nor by our actions diminishing biological diversity.” Hult (2011) notes that this concept of sustainability is growing in importance among scholars, managers and policymakers and believes that the field of marketing “is in a unique position to elevate its focus from managing relationships with customers to strategically managing a broader set of marketplace issues”, since a successful organization must align itself with the increasing interests of its customers in the area of social responsibility issues including those pertaining to the environment. Kotler (2011) observes that increasingly marketers are recognizing the need to revise their policies on product development, pricing, distribution, and branding because they realize that resources are not limitless and because there is a potential for detrimental environmental impact from these products. He notes that “companies must balance more carefully their growth goals with the need to pursue sustainability.” Cooper, Parkes & Blewitt (2014) and Sharma & Hart (2014) observe that these evolving consumer concerns have not been lost on business schools. AACSB has added learning objectives to its standards that reflect contemporary thinking in business ethics, corporate social responsibility, and sustainability, which has led to curriculum realignment.
Couple that with the fact that many progressive universities and schools/colleges of business include in their missions the goal of shaping societal values (and providing a forum for debating those values), with the eventual aim of ensuring sustainable development. Somers, Passerini, Parhankangas, and Casal (2014) use mind maps to study how business school students and faculty organize and apply general business knowledge. They note that undergraduate business students organize knowledge by business discipline and that MBA students focus on strategic management at the expense of other disciplines. Only faculty demonstrated the ability to engage in integrative thinking. However, even where engaging in multidisciplinary activities comes naturally, faculty are encouraged to publish in disciplinary silos, which makes a multidisciplinary approach to curriculum development very difficult. These studies motivate an investigation of where undergraduate students should be introduced to the issues of environmental sustainability. Bates and Silverblatt (2014) investigated this question in the management discipline, and this paper replicates their methodology to ask whether marketing is a more appropriate vehicle for this introduction. New textbooks should incorporate cutting-edge areas of research. Chabowski, Mena, and Gonzalez-Padron (2011) note that although changes in the business environment have prompted scholars to investigate issues in sustainability, there are not many outlets in highly regarded journals. They find that the topics most often investigated in the period 1958-2008 include citizenship behavior, stakeholder theory, corporate performance, and the triple bottom line. Their results indicate that important areas specific to the marketing context include “external-internal focus, social-environmental emphasis, legal-ethical-discretionary intent, marketing assets, and financial performance”.
This paper specifically focuses on an area that is most likely to be cross-disciplinary: environmental sustainability. Textbooks are the core of a business course, forming the basis of the syllabus and providing the source of most of the material students cover in a course. Textbooks were assessed to determine how much material was devoted to sustainability, and the amount of material on green subjects was used as a proxy for how green the marketing courses were. The amount of such material was determined using content analysis, a process that assesses the content of a document by counting the number of words devoted to a topic: the higher the count, the greater the percentage of material devoted to that topic. Here, the number of green words in eight marketing textbooks was counted. Following the study done by Bates and Silverblatt (2014), ten words were used. These words were the result of paring down a list of over 30 words. As Bates and Silverblatt found, “After reviewing several texts, it became apparent that our list contained too many words which either were not present in the texts or represented repetitions of sustainability concepts” (Bates & Silverblatt, 2014).
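The counting procedure described above can be sketched in a few lines of code. The keyword list below is hypothetical (the actual ten words from Bates and Silverblatt are not reproduced in this excerpt), and the sample text is invented for illustration:

```python
import re
from collections import Counter

# Hypothetical keyword list; the study's actual ten words are not given here.
GREEN_WORDS = ["sustainability", "sustainable", "green", "recycling",
               "environment", "environmental", "eco-friendly", "renewable",
               "carbon", "conservation"]

def count_green_words(text, keywords=GREEN_WORDS):
    """Count occurrences of each keyword in a body of text (case-insensitive)."""
    tokens = re.findall(r"[a-z\-]+", text.lower())
    counts = Counter(tokens)
    return {w: counts[w] for w in keywords}

def words_per_page(total_hits, page_count):
    """Average number of green words per page of a textbook."""
    return total_hits / page_count

# Invented sample sentence standing in for a textbook's text.
sample = "Green marketing stresses sustainability; sustainable packaging is green."
hits = count_green_words(sample)
total = sum(hits.values())
```

Run over the full text of each book, the per-keyword counts also allow the table-of-contents and index tallies the study reports.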
The Adoption of an Electronic Medical Record System in a Medicare Certified Home Health Agency
Dr. Joy May, University of Phoenix, AZ
A qualitative grounded theory study was conducted to examine the experiences of clinicians in the adoption of electronic medical records (EMRs) in a Medicare-certified home health agency. An additional goal of this study was to triangulate qualitative research in describing, explaining, and exploring technology acceptance. The experiences were studied through an anonymous survey administered by a third-party vendor. The data revealed that, in spite of Internet and connectivity issues, clinicians at XYZ Home Care (a pseudonym) overlooked these issues because of the benefits of utilizing an EMR system, including quick access to patient medical records and time savings. To ensure the privacy of health information, the Health Insurance Portability and Accountability Act (HIPAA) was signed into law in 1996 by President Bill Clinton. One mandate of HIPAA is that all medical facilities must move from a paper medical record system to an electronic medical record (EMR) system. According to the U.S. Department of Health and Human Services (USDHHS, 2003), HIPAA consists of both a Privacy Rule and a Security Rule that apply to all healthcare providers that transmit health information electronically. The paper system has many limitations affecting quality of care. According to Parente (2009), the problem is that data are not organized or centered on the patient; instead, data from a patient exist in many different settings. An EMR, by contrast, is integrated and patient-centered. The Obama administration is so confident that EMRs will improve quality that, as part of the American Recovery and Reinvestment Act (2009), funds have been made available to healthcare providers that meet specific qualifications and protocols (Jones & Kessler, 2010). According to Jones and Kessler (2010), both curiosity and interest in EMRs are driven by their main benefit: quick access to patient medical information.
Taking EMRs into consideration, this research was a qualitative study using a grounded theory approach to examine the lived experiences of clinicians in the adoption of EMRs at XYZ Home Care (a pseudonym). The clinicians consisted of registered nurses (RNs), licensed vocational nurses, physical therapists, physical therapist assistants, occupational therapists, licensed clinical social workers, medical social workers, medical social worker assistants, and certified home health aides. The data were collected through an anonymous survey consisting of 22 open-ended questions administered via SurveyMonkey, a third-party vendor. Regardless of a home health agency’s (HHA) size and case mix, the mandate to move to an electronic system is inevitable (HIPAA, 1996). The problem is that when moving from paper medical records to EMRs, HHAs must not only comply with this federal regulation but also determine the most user-friendly, advanced, and secure technology for making the change. According to a study by Hennington, Janz, Amis, and Nichols (2009), the degree of adoption is associated with the degree of the government mandate (USDHHS, 2011). Examining the lived experiences of clinicians will be valuable in easing future adoption. Thus, adoption theory (Calantone, Griffith, & Yalcinkaya, 2006) was the theoretical foundation for this study. After a careful review of the literature on the adoption of information technology within the medical field, a gap in the literature was discovered: no studies have been done on the adoption of EMRs in Medicare-certified HHAs. Although the technology acceptance model (TAM; Davis, 1989) was originally proposed as a quantitative model, its theoretical parameters were applicable to this qualitative, grounded theory study.
In previous research, the TAM has been utilized quantitatively to describe and explain technology acceptance (Escobar-Rodríguez, Monge-Lozano, & Romero-Alonso, 2012; Haenlein & Kaplan, 2011; Haghighat & Mohammadi, 2012; Holden & Rada, 2011; Peslak, Ceccucci, & Sendall, 2010; Sommer, 2011). Through this qualitative grounded theory study, the researcher explored technology acceptance qualitatively. The primary rationale behind the study was to fundamentally understand the lived experiences of clinicians in the adoption of an EMR system. With the emergence of technology and the rigorous documentation requirements of skilled care, homecare clinicians face a major change from handwritten evaluations and visit progress notes. As a result of the federal mandate, homecare clinicians are challenged to document clinical care as well as to use new and advanced technology, such as EMR software. The literature review focused on the most influential theory, the TAM (Davis, 1989), which identifies factors in the adoption of technology. It is a well-researched model and an extension of others. In response to the failure of many organizations to adopt technology, Fred Davis, a student at the Massachusetts Institute of Technology Sloan School of Management, proposed the TAM in his dissertation. Two major variables are at the heart of the TAM. Perceived usefulness refers to the degree to which an individual believes the information system will positively assist him or her in performing his or her current job (Davis, 1989; Davis, Bagozzi, & Warshaw, 1989). Perceived ease of use refers to “the degree to which a person believes that using a particular system would be free of effort” (Davis, 1989, p. 320). According to Calantone et al. (2006), perceptions about the usefulness and ease of use of adopted technologies influence attitudes, which influence behavioral intentions, which influence system use.
As stated earlier, although the TAM was originally proposed as a quantitative model, its theoretical parameters were applicable to this qualitative study.
Job Satisfaction Post-Downsizing
Dr. Rosaland D. Lewis, University of the Rockies, Colorado Springs, CO
Dr. Yvonne V. Phelps, University of Phoenix, Phoenix, AZ
This quantitative research study examined the impact of downsizing on survivors’ job satisfaction, specifically examining the relationship between the number of downsizing events experienced and the level of job satisfaction. A survey was conducted of 171 managers, supervisors, analysts, cashiers, and factory workers who had experienced downsizing at least once; the validated survey tool used was the quantitative short-form Minnesota Satisfaction Questionnaire. The data analysis used descriptive analysis, the t test, and one-way analysis of variance. No significant differences in job satisfaction were found among survivors who had experienced downsizing once versus multiple times, and factors such as age and gender did not influence the results. Prior to the early 1990s, restructuring was not a popular topic among researchers. Between 1980 and 1994, almost every medium and large corporation went through some kind of restructuring or downsizing process; yet downsizing remained one of the most pervasive and understudied phenomena in the business world (Cascio & Wynn, 2004). The lack of empirical observation and research related to the causes, effects, consequences, and dynamics of restructuring meant that in the 1980s many organizations managed downsizing with an informal approach. Restructuring became a favorite tool for chief executive officers and managers under pressure to improve short-term profits and boost the price of their company’s stock (Cascio & Wynn, 2004). Gilbert (2001) suggested that organizations should consider employees’ behaviors, attitudes, and perceptions when implementing long- or short-term restructuring goals because job performance may be affected. Employees who survive downsizing are influenced differently based on their prior experience with downsizing and therefore may differ in their perception of job satisfaction.
Cascio and Wynn (2004) stated that surviving employees want to feel important to the organization in order to maintain a positive attitude. When management does not communicate with employees regarding the direction of the organization, survivors might experience low morale, low spirits, and diminished confidence after downsizing (Amundson, Borgen, Jordan, & Erlebach, 2004). Herscovitch and Meyer (2002) suggested that organizations should recognize the significant value of open and candid communication, which is necessary to succeed in a changing environment. The short-term rewards of restructuring for the organization and its management carry short-term consequences for employees. Feldheim (2007) stated that by the turn of the 21st century, restructuring had become an alarming word, causing fear and insecurity while generating anger, bitterness, and anxiety among most people. The disparity of outcomes has led to questioning of job satisfaction among surviving employees once downsizing or restructuring has taken place. Cascio and Wynn (2004) suggested that effective communication with survivors prior to implementing changes could help reduce stress and uncertainty, which can have a direct effect on employees’ productivity. Therefore, job satisfaction may be connected to surviving downsizing in an organization. Current researchers have examined the balance of personal and work life among information technology employees during organizational change, and have explored employee age and gender to determine any differences among employees’ perceptions regarding change. Current researchers agree that more flexibility in the workplace could help reduce stress and tension after changes (Ahsan, Abdullah, Fie, & Alam, 2009; Beehr, Bowling, & Bennett, 2010; Goudge, 2006).
Ahsan et al. (2009) focused on the relationship between job tension and job fulfillment among staff in Malaysia, looking at gender, age, and job satisfaction after organizational change; the purpose of their research was to gain knowledge to enhance future procedures related to job satisfaction, age, and gender among survivors after downsizing. The data gathered by Ahsan et al. (2009) suggested that employees require organizational support to reduce stress and minimize change fatigue during downsizing events; this could help organizations improve job satisfaction among both genders and all age groups. Few studies have addressed the relationship among job satisfaction, surviving employees, gender, and age in connection with downsizing. Chan and Stevens (2001) reported that older employees are less likely to be satisfied after downsizing occurs. Other research, performed by Banuelos (2008), focused on how men and women handle downsizing differently. Bond, Punnett, Pyle, Cazeca, and Cooperman (2004) reported that men felt less productive and experienced an identity crisis after losing their jobs due to downsizing. These studies evaluated neither job satisfaction after several downsizing events nor whether men and women handle the situation differently based on their age. This research gap is significant because survivors of previous downsizing may experience different levels of job satisfaction than survivors of a single downsizing with their current employer. Neither Chan and Stevens nor Banuelos evaluated whether gender or age relates to the job satisfaction of downsizing survivors. The gap is also significant because survivors were not addressed, only dismissed employees, who were men over the age of 50 (Chan & Stevens, 2001; McDaniel, 2003). In addition, no studies have investigated whether survivors who have experienced multiple downsizing events at a single employer are satisfied with their jobs.
Several studies have explored other areas of job satisfaction in addition to restructuring and downsizing (Ahsan, 2009; Cascio & Wynn, 2004; Chen & Nath, 2008). Chen and Nath (2008), for example, explored the effects related to restructuring, utilizing a team of information technology experts who focused on beliefs, norms, and principles.
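As a rough illustration of the kind of analysis this study describes (a t test and one-way ANOVA on satisfaction scores), the following sketch implements both statistics from first principles; all scores and group labels below are invented for demonstration:

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = variance(a), variance(b)
    return (mean(a) - mean(b)) / ((va / len(a) + vb / len(b)) ** 0.5)

def oneway_f(*groups):
    """One-way ANOVA F statistic across several groups."""
    grand = mean([x for g in groups for x in g])
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = sum(len(g) for g in groups) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Invented Minnesota Satisfaction Questionnaire scores for two groups.
once = [72, 68, 75, 70, 66]       # survivors of one downsizing event
multiple = [70, 69, 73, 71, 67]   # survivors of multiple downsizing events
t = welch_t(once, multiple)
```

A t statistic near zero, as with these made-up scores, is consistent with the study's finding of no significant difference between the groups.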
Decreasing Population but Growing Pet Adoption in Mexico: A Field Study
Dr. Hyun Sook Lee, National Autonomous University of Mexico, Mexico City
Since 1995, the Mexican population has decreased, similar to the phenomenon seen in other advanced countries. However, the pet population has been increasing, resulting in a fast-growing pet products and services industry in Mexico. The author conducted 219 surveys to understand the related social phenomena. Based on tests of four hypotheses, it is concluded that Mexicans were adopting pets for companionship and shared affection rather than as aids for the elderly, the solitary, and patients. Not all pet owners bore their own pet care costs: half of them received help with related expenses from other family members, as is part of Mexican culture. Notably, members of larger families tended to adopt pets in Mexico, a result different from the expectation that members of smaller families would do so. Similar to the trend in most advanced countries, there has been an important decrease in the birth rate in Mexico over the recent 40 years (2013, La planificación familiar …). It is perhaps surprising that increasing numbers of Mexicans take care of pets, principally dogs, year after year, including during the recent world financial crisis of 2007-2013. In searching for possible social reasons for this observation, the author seeks to understand the related phenomena by consulting secondary sources on the types of pets and population trends in Mexico, as well as primary sources supported by a field survey. While the developing world faces a rapidly growing population, the industrialized world's population is in decline and rapidly aging. Birthrates in Western Europe and some advanced countries have been decreasing since the early or mid-1960s; more women are choosing careers instead of children, and many working couples are electing to remain childless. As a result of these and other contemporary factors, population growth in many countries has dropped below the rate necessary to maintain present population levels (Cateora & Graham, 2002, p. 71; Marín, 1997, pp. 1207-1212; Núñez S. et al., 2004, p. 75). Meanwhile, the life expectancy of the human population has increased, and aging people must confront social problems, decline of their labour activity, economic problems, etc. (Buendía, 1994, pp. 68-69; Núñez S. et al., 2004, p. 76), as well as diverse changes in their lifestyle that create a strong feeling of solitude and less security (Agostini & Kiguel, 1998, pp. 8-12; Núñez S. et al., 2004, p. 79). During 1930-1940, the Mexican population increased by 1.76%, and during 1950-1970 it increased by 3.76%, a historical record. Had this trend continued, experts estimated, the Mexican population in 2010 would have reached at least 130 million inhabitants (Arellano, 2010). However, Table 1 shows a notable population decrease since the period 1995-2000, perhaps the successful result of an aggressive family planning campaign applied by the Mexican government. A diversity of animals can be considered pets if the species has an appearance attractive to the human eye and affectionate behaviour (Los animales …, 2013), including snakes. For example, one Mexican kept a snake as his pet but had to donate it to a regional zoo because it grew to eat several chickens a day, which he could not afford during the financial crisis in 2011. Meanwhile, over recent decades a considerable amount of research has been conducted on the relationship between pet ownership and (human) health in general populations, as well as in older populations (Rijken & van Beek, 2011, p. 373). Many studies have reported benefits of pet ownership in relation to health (Edney, 1995, pp. 704-708; Rijken & van Beek, 2011, p. 374). Health benefits include a reduced risk of cardiovascular disease (Anderson et al., 1992, pp. 298-301; Allen et al., 2002, pp. 727-739; Rijken & van Beek, 2011, p. 374), better survival rates after a heart attack (Friedmann & Thomas, 1995, pp. 1213-1217; Rijken & van Beek, 2011, p. 374), lower use of general practitioner services (Headey, 1999, pp. 233-243; Headey & Grabka, 2007, pp. 297-311; Rijken & van Beek, 2011, p. 374), lower feelings of loneliness and depression (Garrity et al., 1989, pp. 35-44), and a higher psychological and physical well-being of community-dwelling elderly (Raina et al., 1999, pp. 323-329; Rijken & van Beek, 2011, p. 374).
Testing the Validity of Greenblatt’s Magic Formula: Evidence from Thailand, Japan and US Stock Markets
Dr. Lalita Hongratanawong, University of the Thai Chamber of Commerce, Bangkok, Thailand
Yosuke Kakinuma, Mahidol University International College, Bangkok, Thailand
This study examines the validity of Greenblatt’s “Magic Formula,” a simple stock selection model, across the Thailand, Japan, and US stock markets. The steps to screen stocks for the magic portfolio were adopted from Greenblatt’s “The Little Book That Still Beats the Market,” and the magic formula’s returns were compared to the market’s returns. The SET Index is chosen for the Thailand stock market, the Nikkei 225 Index for the Japan stock market, and the S&P 500 for the US stock market. Sharpe ratio analysis is used to measure risk-adjusted return. Overall, this study suggests that the magic formula’s two factors, return on capital and earnings yield, were able to produce a higher risk-adjusted return than the market index from 1993 to 2012 for all markets. Thailand’s portfolio generated an annualized average return of 34.1%, compared with the market’s 9.6%, during the tested period. In addition, Japan’s portfolio generated 9.3% compared with the market’s 1.7%, and the US portfolio generated 15.5% compared with the market’s 8.8%. Value investing has been the fundamental investment principle for many investors. Graham (1949), the founder of value investing, emphasizes its importance in his classic investment book “The Intelligent Investor.” Graham asserts that the margin of safety is the key to successful investment. In other words, the wider the gap between the purchase price investors pay for a security and the firm’s intrinsic value, the better the returns investors are likely to get. This “buy low, sell high” approach is a central investment concept for those who invest on value investing principles, such as the world-famous investor Warren Buffett. Although Graham’s concept of the margin of safety is theoretically comprehensible, actually finding a firm’s intrinsic value is extremely difficult and is associated with great uncertainty, even for professional investors such as mutual fund managers.
Using a discounted cash flow method to find the firm’s value is a prime notion in the school of finance. Nonetheless, estimating the firm’s future cash flows and setting an appropriate discount rate require a significant understanding of the firm’s business, its industry, and overall economic conditions both domestically and internationally. In “The Little Book That Still Beats the Market,” Greenblatt (2010), a follower of Graham, introduces his “Magic Formula.” The magic formula greatly simplifies the process of finding the margin of safety. Greenblatt shows that the annualized return from his portfolio using the magic formula outperforms the US stock market’s annualized return from 1988 to 2009. The magic formula employs the value investing concept for stock selection with the following two factors: 1) return on capital, and 2) earnings yield. The top 30 stocks that rank the highest in the two categories combined are included in the portfolio. In short, the magic formula seeks companies that have both a high return on capital and a high earnings yield. Another appealing component of this formula is its simplicity of implementation: no sophisticated programming is necessary to apply it. Greenblatt (2010) argues that the magic formula works best in long-term investment. In his book, he shows that in some of the years he analyzed, the annual return from the magic formula’s portfolio was lower than that of the market. However, when he compares annualized returns over the 22 years from 1988 to 2009, the annualized return from the magic formula (19.7%) is remarkably higher than that of the market (9.5%). Greenblatt suggests that investors follow the magic formula for at least three years with yearly portfolio rebalancing to gain a return higher than the market’s.
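The ranking step described above (rank every stock by return on capital and by earnings yield, then pick the stocks with the best combined rank) can be sketched as follows. The tickers and ratios are invented, and a real screen would rank an entire market universe and keep the top 30 rather than the top two of four:

```python
# Sketch of the magic formula's ranking step. A stock's combined rank is the
# sum of its rank by return on capital (ROC) and its rank by earnings yield
# (rank 1 = best in each category). All figures below are invented.

stocks = {
    # ticker: (return_on_capital, earnings_yield)
    "AAA": (0.35, 0.12),
    "BBB": (0.28, 0.15),
    "CCC": (0.40, 0.06),
    "DDD": (0.18, 0.10),
}

def magic_rank(stocks, top_n=2):
    """Rank stocks by combined ROC + earnings-yield rank; return the top picks."""
    by_roc = sorted(stocks, key=lambda s: stocks[s][0], reverse=True)
    by_ey = sorted(stocks, key=lambda s: stocks[s][1], reverse=True)
    combined = {s: (by_roc.index(s) + 1) + (by_ey.index(s) + 1) for s in stocks}
    return sorted(combined, key=combined.get)[:top_n]

picks = magic_rank(stocks)
```

Note how a stock that is merely good on both factors (AAA, BBB) beats one that is best on a single factor (CCC), which is the formula's central idea.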
In this research, the magic formula is tested for applicability to the Stock Exchange of Thailand (SET) and the Tokyo Stock Exchange (TSE) in Japan, in addition to the US stock market. The returns of the portfolio using the magic formula are compared to the returns of its benchmark index, and Sharpe ratio analysis is used to examine risk-adjusted return. This study contributes to general investors’ knowledge. It is especially beneficial for amateur investors because of the simplicity of the magic formula’s stock screening system. Amateur investors often need to invest through mutual funds due to their lack of knowledge, yet many mutual funds underperform the market from time to time, and mutual funds charge management fees regardless of their performance. Therefore, those who invest through mutual funds sometimes end up suffering both investment losses and management expenses. With the magic formula, amateur investors will be able to select stocks, or find the “margin of safety,” on their own. This study is especially beneficial for value investors. Use of the magic formula simplifies the process of finding underpriced securities with good prospects. The value investor’s principle is to purchase a stock whose intrinsic value is higher than its market value. However, finding a firm’s intrinsic value requires sophisticated skills and knowledge. The discounted cash flow method is one way to calculate intrinsic value, yet estimating future cash flows and determining an appropriate discount rate involve great uncertainty. On the other hand, the magic formula uses only two financial ratios, return on capital and earnings yield, which are easily calculated. With financial data and a spreadsheet, anyone can buy a stock that is undervalued at the time but has good potential for its price to move up in the future.
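The Sharpe ratio used here divides mean excess return (over a risk-free rate) by the standard deviation of excess returns. A minimal sketch, with invented yearly returns and an assumed 2% risk-free rate:

```python
from statistics import mean, stdev

def sharpe_ratio(returns, risk_free=0.02):
    """Sharpe ratio: mean excess return over its standard deviation.
    The 2% risk-free rate is an assumption for illustration."""
    excess = [r - risk_free for r in returns]
    return mean(excess) / stdev(excess)

# Invented yearly returns for a hypothetical portfolio and its benchmark.
portfolio = [0.15, 0.08, -0.05, 0.22, 0.11]
benchmark = [0.10, 0.04, -0.08, 0.12, 0.07]
```

A higher Sharpe ratio for the portfolio than for the benchmark, as in the study's results, means the extra return was not merely compensation for extra volatility.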
Probably because of the popularity of Greenblatt’s bestselling book, his magic formula has been academically adopted to analyze and examine several markets. For example, Sign and Kaur (2013) calculated the magic formula’s risk-adjusted return using the Sharpe ratio in the Indian stock market. They found that the magic formula portfolio was able to produce a higher risk-adjusted return than the market index for most years from 1996 to 2010, but not all of them. Their magic formula portfolio astonishingly generated an annualized return of 65.6% for the period tested, while the market produced an annualized return of 12.2%. Similarly, Persson and Selander (2009) tested the validity of the magic formula in the Nordic region, namely Denmark, Sweden, Norway, and Finland. They found that the magic formula portfolio outperformed the market on average for the period from 1998 to 2008, although there were quite a few 12-month rolling periods during the 10-year span in which the magic formula portfolio lost money.
Role of Tax Incentives in Supporting Digital TV Transition in Indonesia
Martin Surya Mulyadi, Bina Nusantara University, Jakarta, Indonesia
Several countries have successfully moved from analogue to digital TV transmission. Indonesia also has a clear roadmap for its digital TV transition and plans to switch off all analogue transmission in 2018. However, it is important to analyze what kind of support the government could provide to ensure the success of this transition program. Tax incentives are one such support. An investment-related incentive is already available, under which 30% of the investment can be expensed in equal parts over a six-year period. However, according to this research, an incentive in the form of a tax credit (as provided by Canada and the United States) increases the benefit more than 20-fold. Furthermore, it is suggested that the government could consider providing a labour tax credit, an R&D tax credit, and a hotel tax exemption. Among incentives directly attributable to users, a VAT exemption would provide a 9% benefit for future users of Indonesian digital TV. Recent development of television (TV) transmission technology has reached the stage of digital transmission. With obvious major advantages compared to analogue transmission, it is a desirable policy choice for governments. Several countries have successfully made the transition from analogue to digital TV. Like other countries, Indonesia has a clear roadmap for the digital TV transition, according to which there will be an analogue switch-off in 2018. However, considering the high cost of this transition project, it is important for the government to provide incentives to support the program. One kind of incentive that could be considered is tax incentives, which according to previous research have a significant positive effect on growth (although under some conditions other researchers have found a negative correlation between tax incentives and growth).
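The difference between the two incentive forms comes down to simple arithmetic: an investment allowance reduces taxable income (so its cash value is the allowance times the tax rate), while a tax credit reduces tax payable directly. The sketch below illustrates only this mechanism; the 30% allowance rate matches the Indonesian scheme described in the paper, but the 25% corporate tax rate and the credit rate are assumptions, so the ratio shown here is not the paper's 20-fold figure, which depends on the specific Canadian and US credit designs:

```python
# Illustrative arithmetic (assumed figures) contrasting an investment
# allowance, deducted from taxable income, with a tax credit, deducted
# directly from tax payable.

def allowance_benefit(investment, rate=0.30, tax_rate=0.25):
    """Cash value of an allowance: the deduction times the assumed 25% tax
    rate (the Indonesian scheme spreads the deduction over six years; the
    total is shown here)."""
    return investment * rate * tax_rate

def credit_benefit(investment, rate=0.30):
    """A credit of the same nominal rate offsets tax payable in full.
    The 30% credit rate is a hypothetical figure, not Canada's or the US's."""
    return investment * rate

inv = 1_000_000
a = allowance_benefit(inv)  # allowance value is reduced by the tax rate
c = credit_benefit(inv)     # credit value is the full nominal amount
```

Even with identical nominal rates, the credit is worth several times the allowance, which is the mechanism behind the paper's finding that credits are far more generous.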
As there are six main groups of stakeholders in the digital TV transition, the focus of this research is analyzing alternative tax incentives for content producers and equipment manufacturers. Although all six stakeholder groups are important, the main driver of a successful transition is the users. Users will consider changing to digital TV if they find varied and interesting programming, which is why tax incentives for content producers are important. Furthermore, digital TV requires users to buy new equipment, so it is important for manufacturers to produce affordable equipment; tax incentives will contribute to a lower equipment price. This research will provide a list for the Indonesian government to consider in supporting the digital TV transition in Indonesia, based on a comparison with the tax incentives provided by other countries that have successfully made the transition to digital TV. Development of digital technology for TV broadcasting reached a stage where it was clearly superior to analogue in the early 1990s. Ever since it showed its superiority to analogue, it has been understood that the transition from analogue to digital transmission is inevitable (Brown, 2002). The significant advantages of digital transmission compared to analogue make its adoption a desirable policy choice for governments (Colapinto and Papandrea, 2007). Digital TV provides improved picture and sound quality, with the reduction or elimination of ‘snow’ and ‘ghosting’, and offers the option of high-resolution images. This picture and sound improvement allows viewers to feel a sense of reality through high-resolution images and wide screen ratios, giving viewers a feeling of being connected with the program (Brown, 2002; Lee and Lee, 2006). Furthermore, the energy consumption of digital transmission is also lower than that of analogue, as it requires less power.
Like analogue TV, digital TV can be distributed terrestrially, by satellite, or by cable. Satellite and cable distribution systems have generally been converted to digital before terrestrial ones. One reason for the later start of terrestrial conversion is that the terrestrial digital broadcasting standard was defined later than the satellite and cable standards (Brown, 2002; European Commission, 1999). Digital broadcasting started in the United States in the 1990s, beginning with a pay-TV digital satellite service in 1994, followed by digital terrestrial TV broadcasts in 1998. The UK also started digital terrestrial TV broadcasts in 1998. Most major countries started their digital terrestrial TV broadcasts afterwards, e.g., Sweden (1999), Spain (2000), Australia and Finland (2001), and Italy (2003). As mentioned previously, the digital TV transition is a desirable policy choice for governments because of its significant advantages. However, it is also a crucial public policy issue, as it has major social, political, and economic impacts. Governments need to determine the timing of digital switching, the arrangements for switching off analogue transmission, and how to ensure affordable digital reception equipment (the digital TV transition requires viewers to purchase new reception equipment in order to receive the digital transmission). While digital cable and satellite TV are driven by commercial forces, digital terrestrial TV is driven by government policy. There are two types of digital TV: Standard Definition TV (SDTV) and High Definition TV (HDTV). SDTV provides a picture similar to analogue TV in a wide screen format, while HDTV provides an improved high-resolution picture, cinema-quality viewing, surround sound, and closed captioning. It also provides different camera angles, access to websites, text-based information, etc. (Flew, 2002).
The Influence of Superior Customer Service on Customer Satisfaction of Mobile Telephone Subscribers in Nigeria
Dr. Akins T. Ogungbure, Troy University
The mobile telephone has been widely accepted as a means of communication in Nigeria, where market penetration is about 70% and the potential for further growth of mobile telephone services is very large. It is therefore important to study the relationships between customer service, service quality, customer satisfaction, and customers’ intention to switch to a competitor, and how these factors affect the choice of mobile service providers in Nigeria. This research explores issues related to customer service, the aspects of service quality that customers deem important, and mobile telephone subscribers’ intentions to switch to competitors. It is expected that knowledge will be gained about the importance of providing superior customer service and its influence on customer satisfaction. The study should also benefit mobile telephone providers, because its findings will provide measures for describing and evaluating the level of customer satisfaction with the providers and their service offerings. The results are also expected to provide valuable information for policy makers and telecommunications regulatory agencies, so that they can better evaluate the performance of mobile telephone providers with respect to customer service and customer satisfaction. The data for the study will be collected by means of online survey questionnaires. The survey participants will be drawn from a random sample of private sector employees and the student populations of four universities in Nigeria; this cohort has the characteristics of the largest market segment for mobile phone services. The sample will consist of 800 respondents. The collected data will be analyzed using regression analysis and correlation statistics as appropriate, and the findings will be reported, discussed, and used to draw conclusions.
Many people in Nigeria are optimistic and enthusiastic about the socio-economic transformation brought about by the deployment and adoption of information and communication technologies (ICT) in the country. The adoption of ICT and mobile telephony is fast making that transformation a reality and has become imperative and indispensable (Friedrich, Grone, Holbing, & Peterson, 2009). Nigeria is one of the frontier countries where mobile telephone service revenue has been growing. The affordability, ubiquity, and scalability of mobile telephones have fueled the dreams of technophiles in Nigeria; indeed, the mobile telephone has been described as God’s gift to Africa, previously the least connected continent in the world (Larmand, 2012). Nigerians use their mobile telephones for many purposes, and many believe that mobile telephone technology can stimulate economic growth from the basic micro level to large-scale businesses (Avgeroum, 2008). Deregulation and the inception of the Global System for Mobile Communication (GSM) in 2001 fueled the growth of mobile telephone subscriptions among professionals in particular and the general population as a whole. Nigeria’s population is estimated at about 160 million people; it is considered the largest mobile telephone market in Africa, with more than 110 million subscribers and market penetration of around 70% in early 2013 (Budde.com, 2013). Telecommunications is reported to have contributed 3-4 percent of Nigeria’s Gross Domestic Product (GDP), making the country the leader among African nations (Gabriela & Badii, 2010).
Mobile telephone providers, customers, and policy makers alike are challenged by how to sustain this growth and the important role telecommunications plays in the economy, since there is no adequate consensus knowledge about the factors influencing GSM customer patronage and usage (Khatibi, Ismail, & Thyagarajan, 2002). Providers have come to realize that retaining existing customers and providing superior customer service is as important as acquiring new customers; to this end, they are working to determine what factors influence customer patronage, loyalty, and satisfaction. Customer service is the provision of service to the customer before, during and after a purchase or service encounter. Zeithaml & Bitner (2003) defined customer service as a series of activities designed to enhance the level of customer satisfaction, that is, the feeling that a product or service has met the customer’s expectations. A firm’s success in the marketplace rests on its ability to attract, deliver to, satisfy, and retain its customers. Customer satisfaction is the primary determinant of customer loyalty and the subsequent customer relationship; it is paramount to increasing a firm’s customer base, reducing customer switching to other companies, and enhancing the firm’s reputation. One path to customer satisfaction is providing superior customer service, which is the gateway to a mobile telephone customer’s experience. Mobile telephone providers that can continuously provide superior customer service will be in a strong position to take market share from competitors. Many researchers and practitioners have postulated that there are many interrelated factors affecting customer satisfaction; key factors include service quality, brand image, customer value, customer patronage, and customer service.
The impacts of these factors on customer satisfaction among mobile telephone subscribers in Nigeria have not been fully uncovered and understood; however, it has been reported that in some countries unique factors play an active role in influencing customer patronage, loyalty, and satisfaction in mobile telephone markets (Gerport, Rams, & Schindler, 2001; Ranaweera & Neely, 2003). Other factors reported to play a significant role include service quality and brand image (Boohene & Agyapong, 2011); customer value (Varki & Colgate, 2001); switching cost, customer service, and price (Gerpot et al., 2001); trust and self-efficacy (Quan, Hao & Jianxin, 2010); and satisfaction (Martin-Consuegra, Molina & Esteban, 2007). These studies were largely conducted in the developed world, and not enough similar studies have been conducted in developing countries such as Nigeria. The few studies conducted in Nigeria looked at network attributes, customer service and switching costs (Oyeniyi & Abiodun, 2009), but none of them considered the impact of customer service on customer satisfaction among mobile telephone subscribers in Nigeria.
Secure Software Programming
Dr. Huiming Yu, Computer Science Department, NC A&T State University, Greensboro, NC
Nadia Jones, Computer Science Department, NC A&T State University, Greensboro, NC
Secure software programming is critical because a large fraction of security incidents result from flaws in program design and code. In this paper we discuss two important topics: writing secure code and secure program design. Secure program design considerations, insecure code, safe programming practices, and secure program design and implementation will be discussed, with related examples. Secure software programming is a vital aspect of information assurance, software engineering and computer security because it greatly impacts computer system security, network security and cloud computing security. Computer scientists have worked relentlessly to repair damage inflicted by hackers and malicious computer users. Using worms, viruses, Trojan horses, and various other malicious tools, hackers have made safe computing a difficult task (Frank et al., 2006; ISO/IEC 2008). To effectively reduce computer security issues, programmers must perform a variety of tasks to ensure that there are as few security leaks or vulnerabilities as possible in their programs. Software applications and code should always be designed with security in mind from the very beginning (Eck 2006; Stallings and Brown 2007). An ideal application is designed from top to bottom, demonstrating secure practices throughout the entire development. Poor programming practices often result in security vulnerabilities: buffer overflows, denial of service, cross-site scripting, code injection, invalid input and improper error handling are all results of insecure programs (Howard and LeBlanc 2002). Awareness of the many security vulnerabilities, along with accounting for and handling all error states, are crucial steps in developing secure software products. Programmers use various testing techniques to identify and eliminate as many bugs as possible from a program.
Testing with variations of common errors and likely inputs is one strategy programmers use to minimize the number of remaining vulnerabilities (ISO/IEC 2008; Howard and LeBlanc 2002). A system is secure if its resources are used and accessed as intended under all circumstances. Unfortunately, total security cannot be achieved, but writing secure code and designing secure programs can make security breaches less likely to occur. In the following sections, the details of secure software programming and its practices are discussed, followed by a conclusion. Secure software programming covers a broad range of theory, knowledge and techniques; writing secure code and secure program design are discussed in turn. Writing secure code is a critical step in developing secure software products, so both insecure code and safe programming practices are covered. Various attacks exploit the vulnerabilities stemming from insecure code. As stated earlier, these attacks include buffer overflows, denial of service, cross-site scripting, input validation errors, integer overflows and underflows, and a variety of code injections. In this paper the focus is on integer overflows and input errors: these are two of the most commonly exploited vulnerabilities, yet two of the easiest to avoid simply by implementing proper coding techniques. An integer overflow occurs when a mathematical operation causes an integer to flow outside the boundaries of its type. A programmer is always able to choose the type and size of the variables used in code. In doing so, he or she should always take into consideration the numbers that will be used in the program, the operations that may affect those numbers, and whether signed or unsigned types will be used. If an integer value is not properly stored within its allotted memory space, an integer overflow will occur.
Although the class presented in Figure 1 runs correctly for most inputs, if a user enters a value x greater than (2,147,483,647-1)/3, the program will produce an integer overflow, because the sequence being calculated is x multiplied by 3, plus 1, and the largest number that fits in an integer memory block is 2,147,483,647. At that point an incorrect value may be printed to the screen, and the result will not be stored in memory correctly. Errors of this type cause obvious issues such as incorrect output, truncation, or infinite loops, and these minor issues can eventually lead to more serious problems such as system crashes, possible exploitation, and other security breaches such as data corruption or even malicious software being executed on the computer. An input error occurs when a programmer fails to validate all possible inputs to an application, such as a string or an integer. Input errors are less likely to occur when a developer checks all possible inputs of a program to ensure that they meet the specifications, and that those specifications are secure, before running the application. In code, an input is any terminal where information enters a system, while validation is “the act of testing for compliance with a standard” (OWASP 2011). Input validation errors usually occur when a coder does not examine possible input strings before they are parsed; once a string has been passed into a parsing method, it is too late to correct any errors that occur due to this non-validation. This example program simply asks the user to input an integer. If the user enters a value other than an integer, such as a long or a string, an error will occur: for example, a user could enter the string “thirteen” rather than “13”, which may cause harmful effects of which the user is unaware until after the program runs.
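The class in Figure 1 is not reproduced in this excerpt. As a hedged sketch, the described 3x+1 computation can be simulated in Python, where an explicit wrap to 32-bit two's-complement stands in for the fixed-width int of languages like Java or C; the function names and the guard threshold derivation are illustrative, not taken from the paper:

```python
INT32_MAX = 2**31 - 1  # 2,147,483,647, the largest 32-bit signed integer

def to_int32(n):
    """Wrap an arbitrary Python int to 32-bit two's-complement,
    mimicking what a fixed-width signed int does on overflow."""
    return (n + 2**31) % 2**32 - 2**31

def sequence_unchecked(x):
    # The described computation, 3*x + 1, in simulated 32-bit arithmetic:
    # large x silently wraps to a wrong (possibly negative) value.
    return to_int32(3 * x + 1)

def sequence_checked(x):
    # Guard *before* the operation: reject any x for which 3*x + 1
    # would exceed INT32_MAX, i.e. x > (INT32_MAX - 1) // 3.
    if x > (INT32_MAX - 1) // 3:
        raise OverflowError("input would overflow a 32-bit integer")
    return 3 * x + 1
```

For example, `sequence_unchecked(800_000_000)` wraps to a negative number, while `sequence_checked` refuses the same input up front, which is exactly the kind of pre-operation bounds check the text recommends.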
A software engineer who wants to develop secure software applications must write secure code. Safe programming practices include several steps that are very simple yet extremely important for ensuring that code is not vulnerable to attack. It is good practice always to check return values for errors, since most library and system calls return an indication of their success or failure; checking for errors helps a software engineer avoid more serious problems. Reviewing one's code, modeling threats, and validating input and output are all recommended practices for writing secure code.
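As an illustrative sketch of the input-validation practice described above (the helper name and its range parameters are assumptions, not from the paper), untrusted input can be parsed and range-checked in one place before it reaches any other logic:

```python
def read_int(raw, lo=None, hi=None):
    """Validate untrusted input: parse it as an integer and
    range-check it, rejecting everything else with an explicit error."""
    try:
        value = int(str(raw).strip())
    except ValueError:
        # e.g. "thirteen" instead of "13" is rejected here, not deep
        # inside the program after it has already been acted upon.
        raise ValueError(f"not an integer: {raw!r}")
    if lo is not None and value < lo:
        raise ValueError(f"{value} is below the allowed minimum {lo}")
    if hi is not None and value > hi:
        raise ValueError(f"{value} is above the allowed maximum {hi}")
    return value
```

Centralizing validation like this means the specification ("must be an integer in this range") is checked once, at the trust boundary, rather than being assumed by every routine that later touches the value.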
Understanding Impulsive Buying Behavior in Online Retail
Dr. Asmita Shukla, Indian Institute of Technology Bhubaneswar, India
Rojalin Mishra, Indian Institute of Technology Bhubaneswar, India
Consumer shopping patterns have changed. With the growth of e-commerce, online shopping is gaining popularity, and retailers’ social media activities influence a significant proportion of users to try new products and make unplanned purchases. As a result, the concept of impulsive buying has become of great interest to marketers. The present research examines the effects of consumers’ brand consciousness, attitude towards online shopping, and attitude towards Facebook advertising on e-impulsive buying. It also examines the mediating effect of the urge to buy and the moderating effect of gender on these relationships. The study was conducted on 207 respondents (77 males and 130 females). Attitudes towards online shopping and towards Facebook advertising had a positive effect on e-impulsive buying, with urge to buy mediating the relationships; similarly, brand consciousness had a positive effect on e-impulsive buying, with urge to buy mediating the relationship. Gender was a significant moderator between attitude towards Facebook advertising and brand consciousness on the one hand and e-impulsive buying on the other, but was not a significant moderator between attitude towards online shopping and e-impulsive buying. As online retail sales increase, companies are trying to take advantage of online shopping by incorporating strategies to increase impulsive purchases. With the advancement of technology and growing experience in online marketing, websites have become very innovative in encouraging impulsive buying. As social media grows exponentially, the terms “like” and “share” have become familiar to everyone. Social media is a quick and easy medium for marketing and connecting with customers: a message can spread virally with a single click or a simple post about a product. Social media sites like Facebook and Twitter are becoming more than ‘just a place’ for communication.
With the rise of e-commerce, social media has become a new intermediary for buying and selling. Facebook holds a lot of potential for online marketers when it comes to stimulating impulsive purchases. Users do not visit Facebook with shopping intentions, but ads and branded pages persuade online users into emotionally driven purchases. Offers for reasonably priced and branded products, as well as exclusive deals or limited-time offers, tend to work well on Facebook. Such an impulse buy, which may have seemed an immediate decision on the consumer’s part, was likely the result of a targeted and strategic decision on the part of the marketers, who found consumers in the right place at the right time with the right product and in the frame of mind to buy it. Impulsive buying is a sudden and immediate purchase with no pre-shopping intention either to buy the specific product category or to fulfill a specific buying task. The behavior occurs after experiencing an urge to buy, and it tends to be spontaneous and without much reflection (Beatty and Ferrell, 1998). Many researchers have provided theoretical frameworks for examining impulse buying in relation to psychological variables (e.g. personality, emotion, motivation) and situational factors (e.g. available time, money) in a shopping context (Beatty and Ferrell, 1998; Rook and Fisher, 1995). This implies that consumers’ impulse buying while shopping can be encouraged by any psychological or situational factor. Verhagen and Dolen (2011) showed significant effects of store attractiveness, enjoyment and online store communication style, mediated by urge to buy, on impulsive purchase. Another study showed a positive relationship between brand orientation and shopping-enthusiastic consumers: consumers have a keen interest in new products, which causes them to alter their buying decisions (Walsh, et al., 2001).
Understanding advertising beliefs and attitudes is important because they affect consumers’ brand attitudes and purchase intentions (Mehta, 2000). Wang and Sun (2010) found that belief was a significant predictor of consumer attitudes toward online advertising. Social media has also influenced consumer behavior from information acquisition through to post-purchase behavior, such as dissatisfaction statements or behaviors (Mangold & Faulds, 2009) and patterns of Internet usage (Laroche et al., 2012). A study by Well, et al. (2011) examined the relationship between website quality and online consumer behavior, and its results indicated that website quality has a positive influence on online consumer behavior. Another study (Liao, et al., 2009) investigated the marketing communications and consumer characteristics that induce impulsive buying behavior; the results indicated that sales promotion strategy and product appeal have a significant influence on impulsive buying. Park, et al. (2012) explored the relationship between web browsing and impulsive buying for apparel products in an online shopping context, studying two types of web browsing (hedonic and utilitarian); they found that utilitarian web browsing has a negative effect on impulsive buying, while hedonic web browsing has a positive effect. Kollat and Willet (1967) also found that women tend to engage in more impulsive buying than men. Impulsive buying plays a vital role in selling, and thus in retailers’ profits, and retailers try to attract impulse purchases using various promotional techniques. Given that impulse buying is increasing, it is necessary to study the factors influencing impulsive buying behavior. The present study proposes a conceptual framework (shown in Figure 1) of consumers’ e-impulsive buying behavior, emphasizing the impact of attitude towards online shopping, attitude towards Facebook advertising, and brand consciousness.
This study also investigates the mediating effect of the urge to buy impulsively and the moderating effect of gender on these relationships. HYPOTHESES: On the basis of the literature review and research questions, the following hypotheses were proposed: H1: Attitude towards online shopping has a positive effect on e-impulsive buying. H2: Attitude towards Facebook advertising has a positive effect on e-impulsive buying. H3: Brand consciousness has a positive effect on e-impulsive buying. H4: The effect of attitude towards online shopping on e-impulsive buying is mediated by urge to buy.
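The abstract does not show the analysis itself, but a mediation hypothesis like H4 is commonly tested with the classic three-regression approach: regress the outcome on the predictor (total effect), the mediator on the predictor, and the outcome on both. The following Python sketch runs those regressions on synthetic data; the variable names, effect sizes and data-generating assumptions are all illustrative, not the study's:

```python
import numpy as np

def ols(X, y):
    """Least-squares coefficients, with an intercept column prepended."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

# Synthetic stand-in data: 207 respondents, matching the study's sample
# size; the true effect sizes (0.6, 0.5, 0.1) are invented.
rng = np.random.default_rng(0)
n = 207
attitude = rng.normal(size=n)                          # X: attitude towards online shopping
urge = 0.6 * attitude + rng.normal(scale=0.5, size=n)  # M: urge to buy
buying = (0.5 * urge + 0.1 * attitude
          + rng.normal(scale=0.5, size=n))             # Y: e-impulsive buying

c = ols(attitude, buying)[1]                 # total effect of X on Y
a = ols(attitude, urge)[1]                   # path X -> M
beta = ols(np.column_stack([attitude, urge]), buying)
c_prime, b = beta[1], beta[2]                # direct effect of X, path M -> Y
indirect = a * b                             # mediated (indirect) effect
```

With a single mediator and ordinary least squares, the total effect decomposes exactly as `c = c_prime + a * b`, so a nonzero `indirect` alongside a shrunken `c_prime` is the signature of (partial) mediation.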
The Participatory Management of Community for Restoration and Conservation of Ecology System, Way of Life, Local Wisdom, and Identity of Mangrove Forest Community
Dr. Nithipattara Balsiri, Dhonburi Rajabhat University, Bangkok, Thailand
The purposes of this research were: 1) to study the ecology, way of life, local wisdom, and identity of a mangrove forest community; 2) to develop a model for the restoration and conservation of the ecology, way of life, local wisdom, and identity of the mangrove forest community; and 3) to develop a participatory community management model for that restoration and conservation. The research methodology was participatory action research, conducted in Bangkuntian district, Bangkok. Interview schedules and observation forms were employed for data collection, and content analysis and induction were employed for data analysis. The major research results were: 1) the ecology of the mangrove forest community consisted of producers, consumers, and decomposers; 2) the ways of life were crab farming, shellfish farming, shrimp farming, fish farming, and tourism; 3) the local wisdom included shrimp paste and food preservation; 4) the model for restoration and conservation of the community's ecology, way of life, and local wisdom consisted of a learning community model, an ecological tourism community model, and a sufficiency economy community model; and 5) the participatory community management model comprised community participation in ecological resource management, cultural resource management, agricultural resource management, learning society management, tourism marketing management, and tourism services management. The word ‘mangrove’ derives from the Portuguese mangue, denoting trees that grow along the coast. Mangroves are sometimes called intertidal forests because they occupy the coastal zone between the highest and lowest tides; a mangrove forest is thus a forest at the edge of the sea, found in the tropics and in the estuaries that border the sea.
The forest occupies areas that are regularly flooded by seawater. Environments typical of mangrove forests are very different from those of other forests, particularly the soil: the clay soils are fertile and high in nutrients delivered from various sources, such as coastal erosion and freshwater streams. Another part of the nutrient supply comes from decomposing material within the mangrove itself, as fallen leaves accumulate along with phytoplankton and algae. The salinity of the water is relatively low because freshwater inflow mixes with the seawater; as a result the water is brackish, and its salinity varies with water-level fluctuations. Such physical characteristics affect the distribution of the plant species that depend on them, as can be seen across different mangrove forests. Individual mangrove tree species grow under different conditions, for example clay areas that are permanently flooded versus shallow clay areas that are flooded only part of the time, so the plants found in each area of a mangrove vary. Plants in deep clay develop extensive root systems; a large number of healthy roots help hold these plants upright so that they are not toppled by wind or waves. In mangrove species, the seedling can develop while still attached to the parent tree and is ready to sprout roots and grow as soon as it falls to the ground. The spread of different species across mangrove forest land varies with common factors such as the rise and fall of the water and the soil, and with the nature of the areas where the clay meets the coast or a river. In shallow, waterlogged clay one finds plants such as Samae and bean trees, next to areas of stiffer clay.
The vegetation of flooded areas sometimes includes Tabun, and the distribution of mangrove forests is divided into zones (a pattern called zonation), such as mangrove forest proper, Tabun forest, Tatuem forest, and Samed forest. Mangroves are forests situated at the confluence of land and sea in the subtropics and tropics. Mangrove trees and shrubs develop best in low-wave-energy, sheltered settings that foster the deposition of fine particles, enabling these woody plants to establish roots and grow. Mangrove forests are architecturally simple compared to rainforests, often lacking an understorey of ferns and shrubs, and are ordinarily less species-rich than other tropical forests. The global distribution of mangroves indicates a tropical dominance, with the major latitudinal limits relating best to major ocean currents and seawater temperature. Mangrove forests possess characteristics that make them structurally and functionally unique. Morphological and ecophysiological characteristics and adaptations of mangrove trees include aerial roots, viviparous embryos, tidal dispersal of propagules, rapid rates of canopy production, the frequent absence of an understorey, the absence of growth rings, wood with narrow, densely distributed vessels, highly efficient nutrient-retention mechanisms, and the ability to cope with salt and to maintain water and carbon balance. Ecosystem characteristics include comparatively simple food webs containing a mixture of marine and terrestrial species, nursery grounds and breeding sites for birds, reptiles and mammals, and accumulation sites for sediment, some contaminants, carbon and nutrients. Mangrove forests are primary features of coastal ecosystems throughout the tropical and subtropical regions of the world. Various kinds of fauna, including shrimp, fish, crabs, mollusks, mammals, reptiles, birds, insects and micro-organisms, are found in the mangrove ecosystem.
People in mangrove communities have utilized mangrove ecosystems for food, firewood, charcoal, timber, and other minor products. In Thailand, mangrove forests are found in 23 coastal provinces, with an estimated area of 168,682 hectares. More than 50% of the mangrove forests, covering an area of 199,217 hectares, were lost during 1961-1993. Various activities carried out in mangrove forest areas, such as shrimp farming, tin mining, mangrove over-exploitation, industrial development and settlements, have led to the reduction of mangrove forests; among these, shrimp farming, tin mining and over-exploitation are the major causes of the loss (Sremongkontip, Hussin, and Groenindijk, 2000).
Agent Based Approach for Banking Investment Ratios
Iris Lucas, ECE Paris School of Engineering, Paris, France
In order to better understand how banks' individual decision rules impact the interbank market, and how central banks' global decisions could impact the individual liquidity positions of institutions, the present paper highlights structure-property relationships at different scales using an agent-based model with a two-class environment. The data for the first class are collected from the existing biggest and most systemic banks, whereas the balance sheets of the second-class agents are fictitious, built from average values. The model is characterized by a set of behavioral interbank descriptions, and agents must trade off their liquidity needs against their financial gain. The model offers three kinds of tradable assets with different risk exposures and corresponding yield rates. Applied to the European interbank network, the present cases introduce different scenarios for allocating liquidity between financial assets. Based on macro variables and the expected cost to the regulator, the scenarios highlight the fact that system stability is directly related to the individual proportions of low-risk securities versus high-risk assets. Financial markets are constantly blamed for the current state of the economy, but beyond blaming the game, one should also blame the players and the rules. Between September 2012 and February 2013 the European Central Bank (ECB) injected more than €1,000B into banks through longer-term refinancing operations (LTROs), and yet nothing seems to have been transferred to the European economy. The banking system appears to have become a giant liquidity sponge, a black box in which one may wonder where the liquidity injected by regulators is really going. Today the banking system prefers to bet its money on newly emergent government bonds rather than on local firms. So the view can be taken that either banks are simply disconnected from their local economy, or the banking system is no longer efficient.
This implies that there is no guarantee that a crisis will not occur again if the banking system is maintained without modification. The real question, then, is whether it is still the 2007 subprime crisis that is responsible for the current financial dysfunction, or whether the current system simply no longer functions correctly. Kirman explains that the factors responsible for the subprime crisis, “contagion, interdependence, interaction, networks and trust”, are absent from current economic models; none of them can be truly included in macroeconomic models whose theory is based on aggregate variables. As early as 1976, Lucas demanded that models be micro-founded, and Grasselli resumes Kirman's call for a new class of agent-based models, presenting them as an interesting solution to the aggregation problem. Agent-based models have been defined as “the modelling of systems as a collection of autonomous interacting entities (agents) with encapsulated functionality that operate within a computational world”. By giving agents individual decision rules (i.e. better individual reactive sensitivity to their environment), agent-based models not only represent a strong possibility for better understanding the underlying dynamics of the financial system, but also support model-guided comprehension when regulators are designing new rules for it. Recent authors, such as Halaj and Neuberger, have explored this approach to relate it to the impacts of regulators' policies on the banking system. Further motivation for using an agent-based approach for this issue is found in Chen and Liao: “instead of extraneously imposing a specific kind of behavioral bias, e.g., overconfidence or conservatism, on the agents, we can canvass the emergence and/or the survivorship of this behavioral bias in the highly dynamic and complex system environment by computer simulation. Agent-based modelling may lead us to some viewpoints by pushing beyond the restrictions of the analytical approach.”
This paper attempts to bring to the literature another reflection on agent-based modelling applied to the interbank system. Unlike authors who studied the impact of interbank network structure on financial stability, such as Georg, Nier et al., and Battiston et al., attention here is focused on developing a framework that allows evaluation of both the impact of individual agents' decision rules on global system stability and the impact of global decisions made by regulators on the behavior of individual entities. An agent-based model is developed representing an interbank market whose agents are banks with retail and market activities. It is supposed that the interbank network can be represented by its core, constituted by the strongly systemic institutions, and its periphery, which includes all the other banks, whose individual failure cannot produce system collapse. The interbank network is therefore divided into two classes: the first made up of the balance sheets of real systemic entities, and the second whose agents are created from a fictitious (and unique) balance sheet distributed around real average values. Banks see their deposit collection and liquidation follow a stochastic process, whereas the refinancing process is autonomously managed by the agents themselves throughout the simulations. Agents refund themselves through two interbank channels, lending and asset trading, and they choose to deal with one bank rather than another via a simple heuristic corresponding to their preferences. Each agent in the model is confronted with its liquidity needs and its financial gain. Entities with available cash face an investment choice between saving part of this cash and allocating it among different kinds of assets. Earlier analysis of the impact of differentiated decision rules on system stability has clearly shown that heterogeneous investment policies strongly affect the network.
A relation is established between individual ratios of low-risk asset holdings and a well-behaved interbank network through a cushion investment scenario, partly inspired by the Value-at-Risk concept. In this paper an extensive study is undertaken by placing agents in different investment-policy scenarios. In contrast to Georg’s paper, where “banks optimize a portfolio of risky investments and riskless excess reserve according to their risk and liquidity preferences”, the present study does not go that far in terms of “optimality”, but analyzes several scenarios of allocation among three assets. Finally, it is found that the proportion of individual ratios of low-risk securities versus ratios of high-risk financial assets truly affects banking system stability, presenting a very first step toward quantifying the value of a healthy ratio. The paper is organized as follows. After presenting the model, Section 4 provides simulation results. Section 4.1 shows some basic dynamics of the studied scenarios through graphical displays from the simulation. Section 4.2 discusses the distribution of agents’ ratio values and its effect on the overall system response and the cost for the regulator. The conclusion is given in Section 5.
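To make the modelling setup concrete, the following is a minimal, hypothetical Python sketch of one ingredient of such an agent-based model: banks receive stochastic deposit shocks and split surplus cash between low-risk and high-risk assets according to an individual cushion ratio. This is not the authors' implementation; all class names, parameters and values are illustrative assumptions.

```python
import random

class Bank:
    """Illustrative agent: a bank with cash, deposits and an individual
    cushion ratio governing its split between low- and high-risk assets."""
    def __init__(self, cash, deposits, cushion_ratio, rng):
        self.cash = cash
        self.deposits = deposits
        self.cushion_ratio = cushion_ratio  # share invested in low-risk assets
        self.low_risk = 0.0
        self.high_risk = 0.0
        self.rng = rng

    def deposit_shock(self, sigma=0.05):
        # Deposits follow a simple multiplicative stochastic process.
        change = self.deposits * self.rng.gauss(0.0, sigma)
        self.deposits += change
        self.cash += change

    def invest_surplus(self, reserve=10.0):
        # Cash above a reserve cushion is allocated by the individual rule.
        surplus = max(self.cash - reserve, 0.0)
        self.low_risk += surplus * self.cushion_ratio
        self.high_risk += surplus * (1.0 - self.cushion_ratio)
        self.cash -= surplus

def simulate(n_banks=10, steps=50, seed=42):
    """Run a toy simulation with heterogeneous cushion ratios."""
    rng = random.Random(seed)
    banks = [Bank(cash=100.0, deposits=500.0,
                  cushion_ratio=rng.uniform(0.2, 0.8), rng=rng)
             for _ in range(n_banks)]
    for _ in range(steps):
        for b in banks:
            b.deposit_shock()
            b.invest_surplus()
    return banks

banks = simulate()
avg_low_risk = sum(b.low_risk for b in banks) / len(banks)
```

Varying the distribution of `cushion_ratio` across agents is the kind of experiment through which the effect of heterogeneous investment policies on the network could be examined.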
The Inflation Phenomenon: Is it Still a Threat for Economies? The Success Story of Turkey for Fighting Inflation
Dr. Billur Guner, Istanbul Kultur University, Istanbul, Turkey
Dr. Nebile Korucu Gumusoglu, Istanbul Kultur University, Istanbul, Turkey
Monetary stability in an economy is of vital importance for economic life. An increase in the general price level, inflation, is accepted as detrimental to a country's economy. The close relationship between money supply and inflation is generally accepted by economists, and in the economics literature a strict consensus exists about the relationship between interest rates and inflation, though the direction and causality of that relationship are still disputed. In recent years exchange rates have also become one of the main explanatory variables of inflation, especially for developing countries. Overvalued exchange-rate policies in developing countries caused severe financial crises and led to an emphasis on the relationship between the exchange rate and the inflation rate. In that context, Turkey, which experienced high inflation rates in the 1990s, is a good example of a successful fight against inflation. Turkey's success story offers lessons and new recipes, especially for developing countries facing the same problems. In this study the complex relationship between inflation, money supply, interest rates and exchange rates, which is difficult to understand from a static perspective, will be analysed from a dynamic point of view. The failures of policies such as monetary targeting, interest-rate targeting and exchange-rate targeting forced many countries to seek a new monetary policy to deal with the problem of inflation. As a result, in the 1990s and 2000s the central banks of different countries adopted inflation targeting regimes. The Reserve Bank of New Zealand was the first central bank to adopt inflation targeting, in 1990, and its success led other countries to implement the regime as well.
During the years 2002-2006, the Central Bank of the Republic of Turkey (CBRT) also implemented implicit inflation targeting, and at the beginning of 2006 it formally moved to an inflation targeting regime (Uğurlu, Saraçoğlu, 2010). Nowadays the low inflation rates of Turkey, compared to the 1990s, tell a success story with unique features. In this study the theoretical background of inflation is analysed and the reasons behind this successful fight against inflation are identified. Inflation is defined as a continuous and substantial increase in the general price level, which leads to a decline in the purchasing power of money and which economists accept as detrimental to a country's economy. Inflation decreases the purchasing power of money, worsens the allocation of resources and affects employment and savings negatively (Birinci, 2011). Inflation is basically a monetary phenomenon, and according to monetarist economists it happens because of increases in the money supply. If the central bank issues money exceeding the demand for money at the existing price level, this will in turn increase the demand for goods and services as well as the price level in the country. Although there is a consensus that a close relationship exists between inflation and the interest rate, the direction of the relationship is still at issue. The theory of interest of Irving Fisher claims that the money rate of interest rises by the anticipated rate of inflation or decreases by the anticipated rate of deflation. Inflation decreases real money balances, which causes a decline in wealth and stimulates savings (Mundell, 1963). If the nominal rate of interest increases in response to an increase in the expected rate of inflation, this will cause an increase in savings, restoring equilibrium in the money market.
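The Fisher relation described here can be illustrated with a short numeric sketch; the rates used are purely illustrative, not Turkish data.

```python
def fisher_nominal_rate(real_rate, expected_inflation):
    """Exact Fisher equation: (1 + i) = (1 + r) * (1 + pi_e)."""
    return (1 + real_rate) * (1 + expected_inflation) - 1

# With a 2% real rate and 5% expected inflation, the nominal rate
# rises to about 7.1%; the common approximation is i ≈ r + pi_e = 7%.
i = fisher_nominal_rate(0.02, 0.05)
approx = 0.02 + 0.05
```

The gap between the exact and approximate values grows with the inflation rate, which is one reason the approximation is unreliable in high-inflation environments such as Turkey's in the 1990s.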
To increase the investment level correspondingly, or to maintain equilibrium in the goods market, real interest rates must fall. So real interest rates are inversely related to the expected rate of inflation, especially in the short run (Gylfason, 1981). On the other hand, since a rise in interest rates is an element of increasing costs, it also leads to cost inflation. In addition, because of increased interest rates, investors who hold bonds tend to spend more, which in turn leads to demand inflation (Alacahan, 2011). In recent years, exchange rates have also been considered one of the main explanatory variables of inflation. In particular, the financial crises that emerged as a result of overvalued exchange-rate policies in developing countries led to greater attention to the relationship between exchange-rate policies and inflation. In the 1990s policy makers and academics discussed alternative exchange-rate regimes for developing countries and suggested the flexible exchange-rate regime, which is consistent with inflation targeting (Peker, Görmüş, 2007). The relationship between the exchange rate and inflation is explained by the pass-through effect: changes in exchange rates first affect import prices, and the price changes in imported goods then affect consumer and producer prices. The reflection of exchange-rate changes affecting export and import goods in domestic prices is thus defined as the “pass-through effect” (Alacahan, 2011). Turkey spent the period between 1970 and 2000 with a serious inflation problem; the inflation rate during this period averaged over 80 percent. At the end of the 1980s, although the necessary requirements for an effective monetary policy had been put forward, monetary policy was not implemented effectively because of the lack of fiscal discipline and of economic and financial stability.
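The two-stage pass-through mechanism just described can be sketched as a simple chain of coefficients. The coefficients below are invented for illustration and are not estimates for Turkey or any country.

```python
def pass_through(exchange_rate_change, import_pt=0.8, consumer_pt=0.3):
    """Two-stage pass-through: depreciation -> import prices -> consumer prices.

    import_pt:   share of the exchange-rate change passed to import prices.
    consumer_pt: share of the import-price change passed to consumer prices.
    Both coefficients are hypothetical.
    """
    import_price_change = import_pt * exchange_rate_change
    consumer_price_change = consumer_pt * import_price_change
    return import_price_change, consumer_price_change

# Under these illustrative coefficients, a 10% depreciation raises
# import prices by 8% and consumer prices by 2.4%.
imp, cons = pass_through(0.10)
```

In empirical work the two coefficients would be estimated, and their product gives the overall exchange-rate pass-through to consumer inflation.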
Turkey dealt with the inflation problem for many decades, and studies of the dynamics of inflation in the Turkish economy over the last thirty years have focused on several factors. The first was the high public-sector deficit: financing this deficit through monetization or borrowing caused an increase in interest rates. The second was the increase in the money supply, which led to an increase in aggregate demand. The third was political instability, which caused economic instability and fragility, worsening inflationary expectations. Chronic inflation also caused the devaluation of the Turkish Lira, which increased the prices of imported inputs and imported intermediate goods (Kibritçioğlu, 2002). After 2001, during the inflation targeting regime period, Turkey experienced increases in the money supply, decreases in interest rates and, on average, high rates of economic growth. Turkey's fight against inflation has unusual features, and it ran contrary to monetarist theory. It is therefore important to understand the causes of the dynamics of the disinflation period, especially for developing countries in conditions similar to Turkey's. Inflation rates successfully started to decrease to single-digit levels after the implementation of the implicit inflation targeting regime in 2002. After the pre-conditions had been fulfilled in 2005, Turkey began to implement inflation targeting in 2006.
Petro-Dictatorship, Insurgencies, Boko Haram-Terrorism and Threats to U.S. – Africa Energy Security Future
Professor Augustine A. Ikein, Niger Delta University of Nigeria
This paper posits that Africa is the treasure base and home of major strategic resources that attracted Europeans and Americans to its shores. Historically, it was the resources of Africa that provided the lubricant that turned the wheels of the industrial revolution and development in Euro-America. It is also true that Euro-American investment and technology transfer helped to develop Africa. Africa is still endowed with abundant energy resources, such as oil and gas, to meet the energy needs of America. Harnessing these endowed energy resources to meet the development needs of African nations is the challenge that must be dealt with. The production and export of energy resources are now being challenged by a new wave of insurgents, militants and terrorists who may create a new form of resource control and petro-dictatorship to threaten and disrupt the flow of energy supply to major consumers like the United States. The emergence of Boko Haram and other terrorist groups, if allowed to continue, could threaten the energy security of both the United States and producer nations in Africa. The paper also highlights the origin of terror, views on conspiracy theory, and the activities of Boko Haram, its impact on Nigeria and the possible spillover costs and implications for the United States and the rest of the world. America should take the lead in partnering with the energy kingpin Nigeria, the leading power of the African axis of oil, which is directly challenged by Boko Haram. Africa is endowed with abundant energy resources in the form of fossil fuel, coal, hydro power, uranium, biomass, and other renewable sources such as solar, wind and geothermal power. Africa is a great land divinely blessed with many strategic resources that attract all major resource consumers around the world. It is widely believed that Africa is the treasure base of the world's strategic resources.
Euro-America has relied on Africa for its strategic resource needs throughout history. Historically, the strategic resources of Africa have served as a lubricant to the wheels of the industrial revolution and of economic development in Euro-America, while at the same time the inflow of Euro-American investment and technology has contributed immensely to the developmental needs of Africa. European and American investments had exploited the oil fields in Africa since discoveries were made in the 1950s, before the Asian tigers like China, Japan, India and Korea came onto the scene. So Africa and Euro-America have converged on the common interest of resource security and development. Harnessing energy resources for sustainable development is a challenge for most African countries endowed with them. The production and export of energy resources like oil and gas are now being challenged by the activities of insurgents, militia groups and terrorists, creating a new form of resource control and petro-dictatorship. This means disruption of the free flow of energy to major consumers like the United States; there is therefore a need for a U.S.-Africa partnership to check the political and economic disruption taking place in Africa and to forge a common platform to stop insurgencies and terrorism there. Africa, the world's major energy treasure base, is now under the threat of terror, with the emergence of insurgencies, terrorism and insecurity threatening our energy security future. Africa's major oil producers, like Nigeria, are now being challenged by insurgencies and terrorism with real potency to disrupt oil flow and hence development. The global socio-political and economic environment is now threatened by active non-state actors, insurgents, terrorists, militias, pirates and other criminals operating on land and sea, and they pose new threats to global peace and security.
Added to this trend are the changing patterns and attitudes of legitimate sovereign state actors and their leaders competing for new alliances in global politics. Another factor is that actors endowed with technological skill in nuclear power, economic power and military might are also adding to the threat to international peace and security. The technologies of these superpowers have filtered into the hands of militia groups with high potency to foment cyber crimes, terror and bombings, which may evolve into a new scourge of war that will be hardly controllable by less technologically endowed state actors unless there is true partnership among nations to deal with the actors interrupting energy supplies on land and sea. Another concern is the increasing insecurity in global politics and the diminishing respect for the international rule of law, as well as declining mutual trust among world powers, to the detriment of weak nations and the vulnerable masses of humanity around the world. There have been historical antecedents to the politics of war and peace. Increases in insurgencies and terror have a negative impact on all sectors of the world, just as the world wars, which began in Europe, were felt around the world. Europe has historically remained a well-known region of wars and unrest. The Thirty Years' War, which led to the signing of the 1648 Treaty of Westphalia, the Hundred Years' War, the First World War and the Second World War were all Eurocentric in origin, design and execution, but extra-European in impact. However, the modern terror wars are not all of Eurocentric origin but are nurtured by non-Eurocentric areas of the world, particularly through the increasing incidence of terror, which constitutes a threat to the resource security interests of Europe and America.
In Africa, and in Nigeria in particular, Nigerians at all levels are now grappling with Boko Haramism, which is compelling the Government to seek a balance between national sovereignty and the threats of the Boko Haramists on the one hand, and between national sovereignty and global security on the other. The anticipation is that, in the world of the future, Nigeria will emerge, sooner rather than later, not simply as an influential regional power but as a great power. The Israeli Ambassador to Nigeria stated that Nigeria will overcome Boko Haram. This can happen if political institutions are strengthened to make Nigeria a real partner with the world's major powers in averting wars and securing peace and security around the world. We recall that even though the United Nations was formed to avert the scourge of war, wars are still brewed and fought to this day. The UN appears inadequately prepared to deal with the scourge of global terrorism, and we now have a proliferation of terror wars, conflicts and insurgencies such that nations like Somalia can hardly stop them. Do we need more Somalization of nations and anarchy?
Towards a Single Conceptual Definition of Trust
Theresa Robinson Harris, Pepperdine University, Los Angeles, CA
Trust is a very complex and elusive phenomenon. This paper reviewed 28 randomly selected articles and found 28 very different and often conflicting definitions of the concept in the literature. This indicates confusion and a high level of differences and discrepancies in how the concept is being defined and measured. The paper argues for a single conceptual definition of trust and identifies several broad categories and key variables resulting from the literature review which can be used to develop a guiding framework for future studies. Trust within organizations has been the subject of considerable question and debate over the past decade. The financial crisis in the United States and elsewhere has contributed to this heightened focus, and the increased attention has contributed to the passage of considerable public policy geared towards restoring trust in organizations. Likewise, the World Economic Forum (2005) reported that public trust in organizations is steadily declining (Appendix A); the figure shows countries with steady or declining trust since 2004. Within the United States, the trust needle moved drastically in the opposite direction, from a net rating of 10 in 2004 to a net rating of negative 9 in 2005, the first time since the survey started in 2001 that net trust in global companies went negative (World Economic Forum, 2005). Similarly, a Harvard Business Review (2009) article on trust summarized the environment after the economic crisis of 2009 as one of trust deficit, with the economy at a standstill waiting for confidence to return to pre-financial-crisis levels (Moyer, 2009). It is evident from these and other studies that trust is important to every aspect of our lives, but do we really know what we mean by trust, and are we really comparing apples to apples when we conclude that trust is on the decline?
The literature (Huang and Wilkinson, 2006) is clear that all business and personal transactions involve some element of trust and that trust is needed when there is uncertainty or when we must depend on others to get things done. This interdependence may define not just our personal and professional relationships but also our relationships within the context of the overall global community. The better we are able to define and measure this phenomenon, the better we will be able to understand it, and improved understanding will lead to healthier organizations and stronger relationships within them. Within a business setting, a high-trust environment can yield big payoffs for the organization. For example, when employees truly believe that their managers follow through on promises and practice what they preach, the organizations are often more profitable (Simon, 2002). Other researchers (Schoorman et al., 2007) conclude that companies are often able to predict sales, profits, or even turnover rates when employees trust the leadership of the company. The importance of trust in organizations and in working relationships is therefore widely recognized and acknowledged. Despite this, trust remains a very elusive concept to many researchers, practitioners, and others within the organization. Trust also affects an organization's ability to attract talent. Since organizations depend largely on human capital to gain a competitive edge, it is important to understand how trust can be used to help them achieve their objectives. This is in line with authors (Barney and Hansen, 1994; Bradach and Eccles, 1989; Creed and Miles, 1996) who argue that trust can be a key source of competitive advantage if organizations are able to understand and better appreciate the phenomenon.
This article discusses the importance of trust in organizations and outlines the benefits of developing a single conceptual definition of the concept, which can be used as a guiding framework for future studies. The paper also reviews 28 randomly selected studies on trust and provides findings and recommendations. As mentioned earlier, the importance of trust within organizations has been widely documented. It is considered a major competitive advantage (Barney and Hansen, 1994; Creed and Miles, 1996; Wicks et al., 1999); it facilitates cooperation and collaboration (Zucker et al., 1996); it helps with conflict resolution (Parks et al., 1996); it improves job satisfaction (Andeleeb, 1996; Rich, 1997); and it increases organizational commitment (Yamagishi et al., 1998), among other important attributes. Unfortunately, the meaning of the concept is not widely understood, since there is no widespread agreement about what it really means. As a result, individual studies often define and measure the concept differently (Hosmer, 1995), often resulting in differences and discrepancies in findings. Because of this confusing and often deficient practice in defining and measuring the concept, it is difficult to fully understand the impact of this important phenomenon on the organization, how trust can be developed, maintained, or restored, and what the challenges and barriers to trust are. This also makes it challenging to compare and integrate research findings (Romero, 2003) and to advance knowledge in this area. There are many definitions of trust in the literature. Hosmer (1995) summarized some of the earlier research and concluded that many economists, psychologists, sociologists, and management theorists appear united on the importance of trust in the conduct of human affairs.
Early definitions of trust described it as essential for stable social relationships (Blau, 1964); as an important determinant of behavior (Rotter et al., 1972); as a public good necessary for the success of economic transactions (Hirsch, 1978); as indispensable in social relationships (Lewis and Weigert, 1985); and as vital for the maintenance of cooperation in society (Zucker, 1986). When it is destroyed, relationships, societies, and organizations can falter and even collapse (Bok, 1978). Trust has also dominated the leadership literature in recent years (Podsakoff et al., 1996; Kirkpatrick and Locke, 1996; Schriesheim et al., 1999), but there is a lack of agreement on a suitable definition of the concept in the leadership literature as well (Hosmer, 1995). Barber (1983) noted that it is often assumed that the meaning of trust, and of its many apparent synonyms, is so well known that it can be left undefined or left to contextual implication. Zucker (1986) stated that recognition of the importance of trust has led to concern with defining the concept, but that the definitions proposed unfortunately have very little in common. Shapiro (1987) concurred, adding that the conceptualization of trust has received considerable attention in recent years, but that the heightened attention has resulted in a confusing mix of definitions applied to a host of units and levels of analysis. Rotter (1967) defined trust as a generalized expectancy held by one individual that the words or promises of another can be relied on. Contrary to Rotter's belief, Gambetta (1998) identifies trust more as a calculated decision to cooperate with others. Definitions of trust have, however, evolved over time, and more recent ones, such as that of Sitkin and Roth (1993), look at trust as a belief in someone's competence to perform a specific task under specific circumstances.
Rousseau et al. (1998) define it as a psychological state comprising the intention to accept vulnerability based upon positive expectations of the intentions or behavior of the other person. Another widely used definition is that of Mayer et al. (1995), who identify trust as the willingness of a party to be vulnerable to the actions of another party on the expectation that the other party will perform the action, irrespective of the ability to monitor or control that party. This is based on the theory that trust is grounded in relationships rather than in individual tendencies or characteristics. There have been many other attempts in the literature to arrive at a comprehensive definition of the concept. Brenkert (1998) suggested that it is an attitude or disposition to behave and respond in certain ways: to accept certain risks of harm or injury from another person on the basis of a belief that the other does not intend to cause harm, even though he or she could. Fukuyama (1995) views it as an expectation that arises within a community of regular, honest, and cooperative behavior, based on commonly shared norms. Currall and Judge (1995) see it as an individual's behavioral reliance on another. Uslaner (2002) sees it as a moral issue and argues that it is moralistic in nature, since we have a moral obligation to trust because trust is a precondition of civic life. Cohen and Dienhart (2012, p. 12) agree, stating that there are moral and amoral conceptions of trust and concluding that the moral conception is essential.
Economic Diplomacy versus the Influence of President’s Authority
Sinisa Pokos, Megatrend University, Serbia
During a time of global crises, economies and the world's most powerful leaders are struggling to survive and to maintain the standard of living at a decent level. Every country is trying to protect its inhabitants and its interests on the global market. Russia, one of the world's leading countries, represents, with its resources, a kind of threat to the other leading countries. On the other side is Ukraine, which is neither a leading country nor as wealthy as Russia. These two countries are in conflict over the territory of Crimea. Also, since Ukraine was planning to join the European Union (EU), other actors such as the US and the EU became involved in these conflicts with the idea of protecting Ukraine's interests. The conflicts between Russia and Ukraine have been going on over the Crimean Peninsula, which is situated on the northern coast of the Black Sea, just south of the Ukrainian mainland and west of the Russian region of Kuban, and surrounded by two seas: the Black Sea and the smaller Sea of Azov to the east. It is a multi-ethnic region that was, until February 2014, administered by Ukraine as the Autonomous Republic of Crimea, with Sevastopol having its own administration within Ukraine but outside of the Autonomous Republic. Both of these territories are populated by an ethnic Russian majority and minorities of both Ukrainians and Crimean Tatars. Currently, the Crimean Peninsula is controlled by the Russian Federation as the Crimean Federal District, a status that is not recognized by the United Nations. This article will discuss the influence of economic diplomacy and of presidential authority in crises such as the one that has been occurring in Crimea.
Economic diplomacy comprises the official diplomatic activities focused on increasing exports, attracting foreign investment and participating in the work of international economic organizations, i.e., the activities concentrated on advancing the economic interests of the country at the international level. The Moscow State Institute of International Relations, established in 1944, is the best-known Russian school of diplomacy, consisting of six different schools focused on international relations. On the other side are the presidents of Russia and Ukraine, who have different authorities and powers. The president of the Russian Federation is the head of state, Supreme Commander-in-Chief and holder of the highest office within the Russian Federation; however, he is not the head of the executive branch, as the Government of Russia is the highest organ of executive power. The current president of Russia is Vladimir Putin. The president of Ukraine is the Ukrainian head of state: he represents the nation in international relations, administers the foreign political activity of the state, conducts negotiations and concludes international treaties. The current acting President is the Chairman of the Ukrainian Parliament, Oleksandr Turchynov, after the Ukrainian Parliament ousted Viktor Yanukovych from office on February 21, 2014. Yanukovych still claims to be "the legitimate head of the Ukrainian state elected in a free vote by Ukrainian citizens." The population of Ukraine is divided into two different regions, roughly by the Dnieper River. These parts have very different histories, and most people in the two regions speak different languages. Many people in the eastern part would prefer their country to be part of Russia, while western Ukraine wants full independence and membership of the EU. Western Ukraine has its origin in Kievan Rus.
Soon after the Mongol invasion, part of this territory joined the Kingdom of Poland, and another part joined the Grand Duchy of Lithuania. Later, Poland and Lithuania united in one state, the Polish-Lithuanian Commonwealth; modern Belarus was also part of the Commonwealth. However, the population of present-day Western Ukraine never mingled with the rest of the population of the Commonwealth because of religious differences: Ukrainians were mostly Orthodox, while Poles and most "Lithuanians" were Catholic. The territory of Eastern Ukraine (the Wild Steppe) was settled much later, by settlers from Russia and from the Commonwealth (Cossacks). Until the 18th century this was a nomad territory controlled by various Tatar descendants of the Mongol state. As a result of 17th-century wars it went to Russia, and the territory of modern Ukraine was split along the Dnieper River between Russia and the Commonwealth. The Polish-Lithuanian Commonwealth was destroyed by the end of the 18th century, jointly by Russia, Prussia and Austria. As a result, most of Ukraine "united" with the Russian empire, except a small part that remained in Austria. After WWI this Austrian part went to Poland and Romania, and in 1939-40 the Soviet Union invaded Poland and Romania and joined this remaining part to Soviet Ukraine. So there has always been a tension within Ukraine: one part of the population feels "European" and another feels "Russian." One historian noticed that the dividing line almost exactly coincides with the line between the steppe and forest geographical zones. The most radical pro-European part is exactly the one that was annexed by the Soviet Union in 1939-40 (the Lviv and Ternopol regions); the most pro-Russian are the eastern regions of Kharkiv, Donetsk, etc. A separate part is the Crimean peninsula, which was never historically a part of Ukraine; its population was Tatar.
It was invaded and annexed by Russia in the 18th century. In the 1940s the Soviets expelled the entire Tatar population from Crimea, and only after the collapse of the Soviet Union were they permitted to return. Crimea was administratively joined with Ukraine only in the second half of the 20th century, and most of its non-Tatar population is Russian.
The Impacts of CCCTB Introduction on the Allocation of the Corporate Tax Bases in the Czech Republic
Dr. Danuse Nerudova, Mendel University, Brno, Czech Republic
Dr. Veronika Solilova, Mendel University, Brno, Czech Republic
After 10 years of intensive work, the European Commission has published the draft directive on a common system of corporate taxation. It can be considered the most ambitious project in the history of structural tax harmonization in the area of direct taxation in the EU. The CCCTB proposal represents a unique system: on the one hand it provides unified rules for the construction of the tax base, while on the other it does not breach the national sovereignty of EU Member States to apply the tax rate independently. The aim of the paper is to research the changes in the distribution of the group tax bases of the Czech subsidiaries of EU parent companies after the implementation of the CCCTB system within the EU28. The research performs a comparative analysis of the current situation, in which the separate entity approach is applied, against the situation in which the CCCTB system is applied, i.e. with group taxation and consolidation schemes, based on an empirical analysis of data available from the Amadeus database. The system also meets the basic requirements formulated by EU Member States: it should contribute to the reduction of tax compliance costs and to the elimination of transfer pricing issues by enabling group taxation schemes and consolidation, and therefore cross-border loss offsetting.
A decrease in the compliance costs of taxation, comparability of nominal tax rates (de facto equal to effective ones), and the possibility of consolidation will have an impact on the competitiveness of European companies on the global market and therefore also on the economic growth of the European Union. It is also necessary to mention that the implementation of the system could have some negative consequences. The most important is the fact that it might open a space for tax evasion and tax fraud, because the CCCTB would not replace the national tax systems but would co-exist with them. Therefore the draft directive comprises strict rules for entering and leaving the system, so that it cannot be misused for tax evasion and tax speculation. Currently, enterprises belonging to an MNE group are taxed as separate entities in the state of their residence without the possibility of consolidation for tax purposes. The only Member State allowing full tax consolidation is the Netherlands. Under the CCCTB system, however, an MNE group will be able to form one group for taxation purposes and to consolidate its profits and losses. Therefore, the CCCTB proposal also includes a mechanism for sharing the group tax base among the individual Member States. The consolidated tax base of the MNE group will be shared among the Member States according to an allocation formula, taking into account the location of the assets, labor force and sales of the enterprise. That is, even though the Member States will retain the right to set the corporate income tax rate, they will impose that rate on tax bases that will differ from the current situation, when the separate entity approach is applied. As a consequence, the Member States will also raise different tax revenues. 
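The sharing mechanism described above can be illustrated with a short sketch. The equal weighting of the three factors below, with the labor factor split evenly between payroll and headcount, follows the structure of the 2011 draft directive's formula, but the function, its name, and the example figures are assumptions for illustration only, not results from the paper:

```python
def apportion_tax_base(consolidated_base, members):
    """Share a consolidated group tax base among group members using an
    equal-weighted three-factor formula (sales, labor, assets); the labor
    factor is split evenly between payroll and number of employees."""
    totals = {k: sum(m[k] for m in members.values())
              for k in ("sales", "payroll", "employees", "assets")}
    shares = {}
    for name, m in members.items():
        labor = (m["payroll"] / totals["payroll"]
                 + m["employees"] / totals["employees"]) / 2
        factor = (m["sales"] / totals["sales"]
                  + labor
                  + m["assets"] / totals["assets"]) / 3
        shares[name] = factor * consolidated_base
    return shares

# Hypothetical two-member group: a Czech subsidiary and its EU parent.
group = {
    "CZ_subsidiary": {"sales": 100, "payroll": 50, "employees": 10, "assets": 200},
    "EU_parent":     {"sales": 300, "payroll": 150, "employees": 30, "assets": 200},
}
shares = apportion_tax_base(1000.0, group)
```

Whatever the factor values, the member shares always sum to the consolidated base, which is exactly the property that lets each Member State apply its own rate to its allocated slice.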
The aim of the paper is to research the changes in the distribution of the group tax bases of the Czech subsidiaries of EU parent companies after the implementation of the CCCTB system within the EU28. The research performs a comparative analysis of the current situation, when the separate entity approach is applied, with the situation when the CCCTB system is applied – i.e. when group taxation and consolidation schemes are used – based on an empirical analysis of data available from the Amadeus database. The paper presents the results of research in the project GA CR No. 13-21683S “The quantification of the impact of the introduction of Common Consolidated Corporate Tax Base on the budget revenues in the Czech Republic”. At present, most EU Member States treat the companies of a group as separate entities. The only country applying full consolidation is the Netherlands, which allows groups of companies to be considered as one unit. Companies are therefore facing the problems of transfer pricing, for, as Bakker (2009) mentions, under the arm's length principle affiliated businesses should set transfer prices at levels that would have prevailed had the transactions occurred between unrelated parties. Moreover, as Oestereicher (2000) highlights, companies treated as separate entities complete financial accounts and determine the profit according to the rules of the taxation system in each location. The separate entity approach is also connected with the problem of the allocation of taxable income. As mentioned by Jacobs (2002), a functional and factual analysis needs to be performed in order to allocate the taxable income to a branch segregated from the entity. Solilova and Nerudova (2013) mention that the setting of the transfer price influences the taxable income of the company and therefore has an impact on the tax revenues of EU Member States. 
In that connection, Picciotto (1992) mentions that the tax authority can adjust the tax base of a company in situations where the transfer price differs significantly from the price that would be set on the open market. Implementing at the EU level a system currently applied only in the Netherlands requires the introduction of a mechanism for sharing the group tax base among the individual Member States. As Weiner and Mintz (2002) mention, such a mechanism takes the form of an allocation formula, usually based on factors such as assets, labor, sales and others. Sorensen (2004) and Devereux (2004) indicated that the allocation mechanism can be considered a system of source taxation. The first scientific work focused on the sharing mechanism, specifically on formulary apportionment, was done by Musgrave (1972), who pointed out that formulary apportionment could eliminate the transfer pricing problem within multinational corporations. Miller (1984) mentions that the formula should reflect elements measuring the processes involved in the earning of net income and should be easy to administer. Later, Gordon and Wilson (1986) examined how corporate taxation of multinational firms using formula apportionment affects the incentives faced by individual firms and individual states. Musgrave (1984) defined two basic views on the formula (with respect to where the profit originates) – a supply-based and a supply-demand-based view. McLure (1980) proved that when a formula consists of factors such as company property, payroll and sales, the corporate income tax transforms into a tax on property, payroll and sales. This has also been proved by Goolsbee and Maydew (2000). Wellish (2000) shows that when labor is used as a factor, the cost of labor exceeds the local wage rate, which reduces the demand for labor in each state.
Investigating the Influence of Individual Level IT Security Climate on Salient IT Security User Beliefs
Dr. Janis A. Warner, Sam Houston State University, TX
There is a growing need to better understand what influences user behavior for the development of comprehensive IT security systems. This study integrates two prominent bodies of research: (a) the theory of planned behavior, used to frame the factors influencing user behavior beginning with salient user beliefs, and (b) individual-level climate perceptions, used to frame organizational environment influences. Hypotheses regarding the relationships between the climate and beliefs are empirically tested. The intent of the research is to extend the theory of planned behavior and the IT security literature by investigating environmental influences on user beliefs regarding an IT security artifact, such as anti-spyware. The results of the study provide evidence that there are significant positive relationships between IT security climate at the individual level, also known as IT security psychological climate, and several identified salient beliefs. A discussion of the findings and their implications for theory and practice is provided. The global nature of business today makes information technology security increasingly important and complex as organizations strive to share their information and data assets more effectively with employees, partners and customers. The criticality of IT security is echoed by FBI director Robert Mueller, who recently declared that “cyber security may well become our highest priority in the years to come” (FBI, 2012). Recognition of the role of the internal user and the building of a “human firewall” (Coe, 2003) are imperatives if organizations want to safeguard their information assets. Internal user practices (Wade, 2004) and organizational culture (Britt, 2005) point to internal user beliefs and behavior as foundational aspects of the protection of an organization’s information assets. Thus, there is a significant need to investigate IT security and internal users' IT security beliefs and behavior. 
IT security research can significantly aid in focusing more attention on the social aspects of IT security. This study uses organizational climate as the external variable for investigating the socio-organizational perspective of IT security. Organizational climate refers to the shared perceptions of basic properties such as policies, practices and procedures among members of a target organization (Schneider & Reichers, 1983). Climate has been used to explicate individual, group and organizational behavior, as well as being a diagnostic tool for organizational improvement and change in applied settings (Parker et al., 2003). The theory of planned behavior (TPB) is discussed as the study’s framework for understanding the influence climate has on IT security beliefs via a specific IT security artifact. To deal with the amorphous nature of climate, Schneider (1975) argued that climate dimensions should have a strategic focus, i.e. be for something. This strategic focus, known as facet specific, has been investigated for facets such as customer service (Schneider, 1990) and safety (Zohar, 1980). Similarly, the facet specific concept is applied for this research such that the climate construct will be IT security specific. Individual level climate, also called psychological climate, has been related to a number of important individual-level outcomes in organizational behavior research including job satisfaction, organizational commitment, and employee performance (Parker et al., 2003). Focusing on climate research at the individual-level, i.e. psychological climate, lays the foundation for the IT security climate concept at aggregated levels such as group, department, or organization. Climate perceptions influence behavior-outcome expectancies (Zohar, 2003). 
Similarly, the theory of planned behavior, a widely applied framework in IT research (Venkatesh, Morris, Davis & Davis, 2003), posits that behavior is determined by beliefs about that target behavior via the behavior determinant construct behavioral intentions (BI). To use TPB as a model framework, a target behavior must be identified. Previous literature identifies anti-spyware as an IT security artifact that is less pervasively used even though spyware can be as big a threat as other malicious software programs such as viruses (Hu & Dinev, 2005). Thus, understanding the behavioral intentions toward use of anti-spyware is a desirable goal and acceptable proxy for investigating actual IT security behavior (Infinedo, 2012; Dinev & Hu, 2007). We argue that the TPB is incomplete in and of itself when used to understand IT security behavioral intentions and behavior because the TPB does not address the environmental aspect of influence and how individuals make sense of the environment. The proposed incorporation of the IT security psychological climate construct as an antecedent to behavioral/attitudinal, normative and control beliefs in the TPB framework provides insights into the socio-organizational IT security processes. There is a need to better understand human factors of IT security, and there is a paucity of socio-organizational perspectives in research focused on that understanding. Therefore, this research centers on the role of IT security psychological climate with regard to the attitudinal/behavioral, normative and control beliefs toward using the specific IT security artifact anti-spyware. Organizational climate represents the descriptions of the things that happen to employees in an organization focusing on behavior (Schneider, 2000). The description of the behavior identifies a pattern. For example, safety or service climates represent patterns of behavior that support safety or service in the focal organization. 
Climate is typically measured quantitatively using questionnaires (Denison, 1996). When the focus of a study is to quantitatively assess individuals' perceptions of their organizational context, the concept of organizational climate is considered appropriate (Bock, Zmud, Kim & Lee, 2005). A multidimensional IT security climate measurement was recently developed and is used in this study (Warner, 2011). The dimensions of IT security psychological climate were identified as: 1) awareness, 2) perceived managerial influence, and 3) perceived peer influence. These dimensions, though related, capture perceptions of different aspects of IT security psychological climate and thus are necessary for sufficient content validity (Warner, 2011).
Financial or Operating Lease? Problems With Comparability of Financial Analysis Ratios
Dr. Hana Bohusova, Mendel University, Brno, Czech Republic
Dr. Patrik Svoboda, Mendel University, Brno, Czech Republic
The objective of financial statements is to provide information about the financial position, performance and changes in the financial position of a company. The information should be useful to a wide range of users in making economic decisions. Financial statements should be understandable, relevant, reliable and comparable. The use of IFRS for the preparation of financial statements in more than 100 countries, and the convergence of IFRS with US GAAP, should increase the comparability of financial statements worldwide. Comparability is decreased, however, by the existence of different treatments allowed by IFRS for reporting the same transaction, and by the fact that some transactions (leases, pensions) can be structured by companies in different ways to achieve the effect the reporting company wants (on-balance sheet or off-balance sheet). The paper is concerned with an evaluation of the possibilities of comparing financial statements prepared under the current IFRS and US GAAP treatments for lease reporting. Financial reporting, as the result of the application of accounting treatments, should be a comprehensible source of information for users from different countries. It is supposed that the use of IFRS enhances the comparability of financial statements, improves corporate transparency and increases the quality of financial reporting (Daske, Hail, Leuz, Verdi, 2008). Companies which comply with the IFRS rules can achieve many benefits: they may reduce investors' uncertainty, thereby reducing their cost of capital, and can significantly improve communication with the users of their statements. More than 100 countries in the world have already adopted IFRS. On the other hand, there are the US Generally Accepted Accounting Principles (US GAAP), which for a long time were the only reporting system accepted by financial markets in the USA. 
The two most significant standard setters in the field of financial reporting regulation in the world – the Financial Accounting Standards Board (FASB) and the International Accounting Standards Board (IASB) – began to cooperate significantly on the development of common principles-based standards in 2002. In September 2002 the Memorandum of Understanding (MoU) was published in Norwalk in the USA. In this agreement the FASB and IASB committed to the convergence of accounting standards (IAS/IFRS and US GAAP), so as to make them acceptable to the world's capital markets. The convergence of US GAAP and IFRS has been realized through a series of short-term and long-term sub-projects. The unification of accounting rules was expected no later than the end of 2008. This target was not met and the supposed completion date was significantly postponed. In 2012 the IASB and FASB published a joint progress report; the two boards remain committed to completing three significant convergence projects: financial instruments, revenue recognition and leases. Analyzing information using financial ratios allows comparisons with the past, with other companies in the industry, and with the market as a whole. Problems can arise in comparisons among companies and in comparisons with companies in other countries. The first case is connected with the use of different treatments allowed by IFRS for reporting the same transaction (Nobes, 2006): different measurement bases may be used for long-term assets (historical cost or fair value) and for inventories. The second case is connected with transactions (leases, pensions) that can be structured by companies in different ways to achieve the effect the reporting company wants (on-balance sheet or off-balance sheet). 
While there are several types of off-balance sheet financing that firms could employ, pensions and operating leases are the most commonly used off-balance sheet activities. For this reason, leases are also part of the convergence projects. The main aim of the Leases Project is to remove the existing possibilities within IFRS and US GAAP for a lessee to structure lease agreements so as to achieve a desired effect on the financial statements and on indicators of the financial situation. The paper is concerned with a comparison of two different possibilities of structuring a lease agreement under the current treatment and an evaluation of the impact on financial statements and financial analysis ratios. The main aim of this paper is the quantification of the impact of reporting an operating lease as a balance sheet item (leased asset and lease liability) on the information provided by financial statements and on indicators of financial analysis. A modified method of constructive capitalization is applied for the quantification of leased assets and lease liabilities. The financial statements of non-financial companies listed on the BCPP (Prague Stock Exchange) are used for the research. These companies prepare financial statements in accordance with IFRS. Only the companies whose notes to the financial statements comply with all operating lease reporting requirements are the subject of the research. According to IAS 17.56, all companies must disclose their future minimum (operating) lease payments (MLP) for the following years in the structure: the first year, years two to five, and the years after the fifth. Only a limited number of companies disclose information in the required way. Information on operating leases is utilized for off-balance sheet operating lease capitalization for the purpose of financial statement comparison and financial analysis of key financial ratios. The accounting data from the financial year-end closing dates in 2013 are utilized. 
The following procedure is used for the capitalization of operating leases in the current financial statements. The value of capitalized operating leases is added to the book value of assets and to long-term debt. An estimate based on the present value of minimum lease payments (PVMLP), using a 5% effective interest rate, is used for the lease liability calculation in this research. The value of the leased assets (LA) is equal to the lease liability at the lease inception. According to IAS 17.56, companies are obliged to disclose the future minimum lease payments for each of the following periods: not later than one year; later than one year and not later than five years; later than five years. It is necessary to estimate annual lease payments when a single figure is disclosed for lease payments occurring between years two and five. Following the study of Bennett and Bradbury (2003), it is assumed that all lease payments are equal over the lease term, and the annual lease payments are estimated by dividing the minimum lease payments between years two and five by four. The implicit lease interest expense is removed from operating profit and treated as a financial cost; it is calculated as the value of the operating lease liability multiplied by the interest rate on secured debt (5%). The remaining rental expense is treated as depreciation of the leased assets.
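The capitalization procedure just described can be sketched in code. This is a minimal illustration under the stated assumptions (a 5% rate, equal payments in years two to five, interest computed on the opening lease liability); the function name and the treatment of the post-year-five bucket, for which the disclosure gives no lease term, are my own assumptions, not the authors' implementation:

```python
def capitalize_operating_lease(mlp_year1, mlp_years2_5, mlp_after5=0.0, rate=0.05):
    """Constructive capitalization of an operating lease from the IAS 17.56
    disclosure buckets. Payments in years 2-5 are assumed equal
    (mlp_years2_5 / 4); any post-year-5 total is spread over further years
    at the same estimated annual amount (an assumption for illustration)."""
    annual = mlp_years2_5 / 4
    payments = [mlp_year1] + [annual] * 4
    if mlp_after5 > 0 and annual > 0:
        tail_years = max(1, round(mlp_after5 / annual))
        payments += [mlp_after5 / tail_years] * tail_years
    # Lease liability = leased asset = present value of the payment stream.
    pvmlp = sum(p / (1 + rate) ** t for t, p in enumerate(payments, start=1))
    interest_y1 = pvmlp * rate                 # reclassified as financial cost
    depreciation_y1 = mlp_year1 - interest_y1  # remaining rental expense
    return pvmlp, interest_y1, depreciation_y1
```

For a lease disclosing 100 in year one and a total of 400 for years two to five, the capitalized liability is a five-year annuity of 100 discounted at 5%, about 432.9, of which roughly 21.6 is first-year implicit interest.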
IT Governance: A Multi-level Analysis toward a Unified Perspective
Dr. Humam Elagha, Royal University for Women, Kingdom of Bahrain
This paper examines previous and current research in IT Governance to provide a basis for further research. A conceptual IT Governance framework that builds on four disciplines of IT Governance is proposed. The framework builds on the integration between the structural and process perspectives of IT Governance Domains, IT Governance Maturity, IT Governance Mechanisms, and IT Governance Performance. Moreover, the paper empirically examines the relationship between the four disciplines of IT Governance. This study found robust empirical evidence that the maturity of the IT Governance Domains greatly enhances the level of IT Governance Maturity, and that the existence of IT Governance Mechanisms greatly enhances the overall effectiveness of IT Governance. This paper shows that current IT Governance research represents a strong, albeit not completely inclusive, combination of the four disciplines of literature. The paper concludes that, even with the consideration of contemporary structures, academicians and practitioners alike continue to explore the concept of IT governance in an attempt to find appropriate mechanisms to govern corporate IT decisions. The concept of IT Governance emerged in the mid-nineties and was first used by Henderson & Venkatraman to describe the complex array of inter-firm relationships involved in achieving strategic alignment between business and IT (Henderson & Venkatraman, 1993). IT governance is the structure of relationships, processes and mechanisms used to develop, direct and control IT strategy and resources so as to best achieve the goals and objectives of an enterprise. It is a set of processes aimed at adding value to an organization while balancing the risk and return aspects associated with IT investments. IT governance is ultimately the responsibility of the board of directors and executive management. 
In a broader sense, IT governance encompasses developing the IT strategic plan, assessing the nature and organizational impact of new technologies, developing the IT skill base, aligning IT direction and resources, and safeguarding the interests of internal and external IT stakeholders, as well as taking into account the quality of relationships between stakeholders (IT Governance Institute, 2007a; Korac-Kakabadse & Kakabadse, 2001; Kordel, 2004). IT governance today concerns how the IT organization is managed and structured, and provides mechanisms that enable the development of integrated business and IT plans, the allocation of responsibilities within the IT organization, and the prioritization of IT initiatives (De Haes & Van Grembergen, 2005a; Larsen, Pedersen, & Andersen, 2006; Weill & Ross, 2005). The main purpose of this research is to propose a conceptual framework for IT Governance that builds on four disciplines of IT Governance. In this section, the researcher scans the literature related to the variables of the study. Four main research disciplines are covered: IT Governance Domains, IT Governance Maturity, IT Governance Mechanisms, and IT Governance Performance. The IT Governance Institute argues that IT Governance focuses on the following IT Governance Domains (IT Governance Institute, 2007a): (1) IT strategic alignment focuses on ensuring the linkage of business and IT plans; defining, maintaining and validating the IT value proposition; and aligning IT operations with enterprise operations. (2) Value delivery is about executing the value proposition throughout the delivery cycle, ensuring that IT delivers the promised benefits against the strategy, concentrating on optimizing costs and proving the intrinsic value of IT. (3) Resource management is about the optimal investment in, and the proper management of, critical IT resources: applications, information, infrastructure and people. 
Key issues relate to the optimization of knowledge and infrastructure. (4) Risk management requires risk awareness by senior corporate officers, a clear understanding of the enterprise’s appetite for risk, an understanding of compliance requirements, transparency about the significant risks to the enterprise, and the embedding of risk management responsibilities into the organization. (5) Performance measurement tracks and monitors strategy implementation, project completion, resource usage, process performance and service delivery, using, for example, balanced scorecards that translate strategy into action to achieve goals measurable beyond conventional accounting. During the last decade, a variety of IT governance frameworks and different assessment methods for evaluating IT Governance Maturity have emerged. Control Objectives for Information and Related Technology (COBIT) is an IT governance framework and supporting toolset that allows managers to bridge the gap between control requirements, technical issues and business risks. COBIT enables clear policy development and good practice for IT control throughout organizations. It was first issued by the IT Governance Institute (ITGI) in 1998 and has been constantly evolving ever since (IT Governance Institute, 2007a). The IT Organization Modeling and Assessment Tool (ITOMAT) is an IT Governance Maturity assessment tool based on the COBIT framework. The ITOMAT was proposed by Simonsson & Johnson (2008) to overcome operationalization and subjectivity weaknesses in the COBIT framework. Dahlberg & Kivijärvi presented an Integrated IT Governance Framework and introduced a related IT governance maturity assessment instrument (Dahlberg & Kivijärvi, 2006). 
The framework builds on the integration between the structural and process perspectives of IT governance, business-IT alignment, and senior executives’ needs. IT Governance Performance is the quality of the services that the IT organization delivers, as seen from a business point of view. During the last decade, a variety of IT governance performance frameworks and different assessment methods for evaluating IT Governance Performance have emerged. In this section some frameworks are considered and evaluated. MIT researchers Weill & Ross conducted a large set of case studies on IT governance performance (Weill & Ross, 2005). They identified two types of top performers, top IT governance performers and top financial performers, to describe the impact of successful IT governance arrangements. Their research methodology and the findings from more than 250 organizations were published in a book that is the most widely cited work in the field of IT governance today (Weill & Ross, 2004). Weill & Ross defined IT governance performance (top IT governance performers) as the effectiveness of IT governance in delivering four objectives, weighted by their importance and outcome to the enterprise: (1) cost-effective use of IT, (2) effective use of IT for asset utilization, (3) effective use of IT for growth, and (4) effective use of IT for business flexibility. IT Governance Mechanisms indicate the design and implementation of a coordinated set of governance mechanisms for ensuring the effectiveness of IT governance. According to Weill & Ross (2005), enterprises generally design three kinds of governance mechanisms: (1) decision-making structures, (2) alignment processes and (3) formal communications. With respect to decision-making structures, the most visible IT governance mechanisms are the organizational committees and roles that locate decision-making responsibilities according to intended archetypes. 
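One plausible way to operationalize the weighted four-objective performance measure described above is a simple normalized weighted average. The 1-5 success scale and the 0-100 normalization below are assumptions for illustration, not the instrument actually used by Weill & Ross:

```python
def governance_performance(success_scores, importance_weights, max_score=5):
    """Governance-performance index: each objective's success score
    (assumed 1..max_score) is weighted by its importance to the enterprise,
    then normalized so a perfect score on every objective yields 100."""
    weighted = sum(w * s for w, s in zip(importance_weights, success_scores))
    return 100.0 * weighted / (max_score * sum(importance_weights))

# Four objectives: cost-effective use, asset utilization, growth, flexibility.
score = governance_performance([4, 3, 5, 2], [1, 1, 1, 1])
```

With equal importance weights the index reduces to a plain average of the success scores rescaled to 0-100.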
Different archetypes rely on different decision-making structures. Anarchies (which are rarely used — or at least rarely admitted to!) require no decision-making structures at all. Feudal arrangements rely on local decision-making structures. But monarchy, federal or duopoly arrangements demand decision-making structures with the representation and authority to produce enterprise-wide synergies. Alignment processes are management techniques for securing widespread and effective involvement in governance decisions and their implementation. For example, the IT investment proposal process delineates steps for defining, reviewing and prioritizing IT projects and for determining which projects will be funded. Architecture exception processes provide a formal assessment of the costs and value of project implementations that veer from company standards. Service-level agreements and chargebacks help IT units clarify costs for IT services and instigate discussion of the kinds of services the business requires. Finally, formal tracking of business value from IT forces firms to determine the payback on completed projects, which can help firms focus their attention on generating intended benefits (Weill & Ross, 2005).
Stock Market Development and Economic Growth in Namibia
Dr. Esau Kaakunga, University of Namibia, Windhoek
The purpose of this study was to investigate the influence of stock market development on economic growth in Namibia using modern time series econometric techniques, namely co-integration and error correction modeling. The study used market capitalization, the local share index and the total value of all shares traded as indicators of stock market development, and real gross domestic product as a proxy for economic growth. The results of the study show that real gross domestic product is co-integrated with stock market development. The short-run results indicate that the local share index and the total value of local shares traded are negatively and statistically insignificantly correlated with economic growth. The study found market capitalization to be an important stock market performance indicator that has driven economic activity in Namibia during the period under review. Hence, policies that promote the development of the capital market will also contribute to the economic growth of the Namibian economy. The study suggests the removal of impediments to stock market development, which include tax, legal and regulatory barriers, and the adoption of policies that would increase the productivity and efficiency of firms, encourage them to access capital on the stock market, enhance the capacity of the Namibian Stock Exchange, restore the confidence of stock market participants and safeguard the interests of shareholders. The stock market plays an important role as an economic institution which improves the efficiency of capital formation and allocation. It enables both corporations and the government to raise long-term capital to finance new projects and expand other operations (Olweny and Kimani, 2011). The stock market has been associated with economic growth through its role as a source of new capital, whereas economic growth may be a catalyst for stock market growth. 
Senbet and Otchere (2008) have noted that the principal channel for the linkage between stock market development and economic performance is the liquidity provision of the market. Yartey and Adjasi (2007) found that stock markets contribute to the financing of corporate investment, and hence to the growth of listed firms in Africa, as listed firms are required to follow best practices. This indicates that the corporate financing channel is another mechanism through which stock markets influence aggregate economic performance (Senbet and Otchere, 2008). Stock markets are a vital component of economic development, as they provide listed companies with a platform to raise long-term capital and provide investors with a forum for investing their surplus funds in financial instruments that better match their liquidity preferences and risk appetite (Olweny and Kimani, 2011). Over the past decade, the world's stock markets boomed, with a large share of the boom accounted for by emerging markets. This boom led recent theoretical researchers such as Bencivenga, Smith and Starr (1996); Levine and Zervos (1998); Yartey and Adjasi (2007) and Khanna (2009) to base their studies on the effect of stock markets on the development of the economy. An economy whose stock market is at its peak is considered competitive and developed, since stock market development is often regarded as a primary indicator of a country's economic strength and development. An efficient stock market is expected to lower the cost of equity capital for firms and to enable individuals to price and hedge risk more efficiently. According to Yartey (2008), stock markets can also attract foreign portfolio capital and increase domestic resource mobilization, expanding the resources available for investment in developing countries. 
Similar to other emerging markets in Africa, Asia and Latin America, the Namibian Stock Exchange (NSX) is among the largest stock exchanges in Africa after the Johannesburg Stock Exchange (JSE) in terms of market capitalization (NSX, 2011). Although many stock exchanges around the world, including the JSE, are run for profit, the NSX remains a not-for-profit association. The first Namibian Stock Exchange was founded in Luderitz in southern Namibia at the start of the 20th century. Its establishment was influenced by the diamond rush, which brought hundreds of prospectors to the area. Within a few years the rush was over and the exchange was closed. At independence in 1990, there was a need to establish a second NSX to enable the domestic economy to become independent from South Africa and to encourage investment for private sector expansion, amongst other goals. The government gave full moral and legislative support, while funding came from 36 leading Namibian businesses, which became founding members by donating N$10,000 each as start-up capital for the first three years of the exchange. On 16 October 1992, the exchange was launched after its proponents amended the South African Stock Exchange Act to adapt it to the local Namibian market (Matome, 1998). Economic activity on the NSX has been mixed since the late 1990s, as can be seen from key indicators such as market capitalization and the local shares index. In Namibia, the overall value or size of domestic companies as measured by market capitalization varied over time from 1997 to 2002. As figure 1 shows, the growth of market capitalization (MCAPGR) stood at approximately 51.7 percent in 1997 and around −4.3 percent in 2002. Between 2003 and 2013, however, market capitalization grew at an annual rate of 24.4 percent. Similarly, the local shares index growth rate (LINDEXGR) varied sharply between 1997 and 2002. 
Growth rates of −41.4 percent and 35.9 percent were recorded in 1998 and 2001, respectively. The growth in economic activity as measured by the growth rate of real gross domestic product (RGDPG) did not diverge significantly from developments in the stock market, as depicted in figure 1. It is essential to note that when stock market development indicators recorded negative growth rates, this was reflected to a certain extent in the growth of real gross domestic product. It is also worth noting that during the period 2004-2013 there was positive growth in market capitalization, which reached a high of 60.3 percent in 2013, after a significant decline of −44.7 percent recorded in 2000. The total number of companies listed on the NSX stood at 38, of which 12 were local companies. This number increased slightly to 41 companies in 1999, the highest number of companies listed on the Namibian Stock Exchange between 1997 and 2013. Presently, there are 8 local companies out of 38 listed on the NSX, so dual-listed companies dominate. The reduction in the number of listed companies is attributed to takeovers, transfers to other exchanges and liquidation. Given the above, the main objective of the study is to assess the impact of stock market development on economic growth in Namibia.
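The year-on-year growth rates (such as MCAPGR and LINDEXGR) and the annual growth rate over a multi-year period quoted above are standard calculations; a brief sketch with illustrative values (not the NSX figures) shows the difference between the two:

```python
# Year-on-year and compound annual growth rates for an index such as
# market capitalization. The numbers below are illustrative, not NSX data.

def yoy_growth(prev, curr):
    """Percentage growth from one year to the next."""
    return (curr - prev) / prev * 100

def cagr(first, last, years):
    """Compound annual growth rate over `years` periods, in percent."""
    return ((last / first) ** (1 / years) - 1) * 100

print(yoy_growth(100, 110))          # 10.0
# A series that doubles over 10 years grows about 7.2 percent per year.
print(round(cagr(100, 200, 10), 1))  # 7.2
```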
National Problems, Global Solutions: Technology, Information, and Safeguards
Dr. Kamlesh Mehta, National University, CA
Dr. Vivek Shah, Texas State University, TX
The era of digital and cyber information has dominated the 21st century. Given the increase in the use of cyber and internet technologies, the rise of data mining and social networking, and the growing need for cyber and digital security measures, consumers, businesses, governments, and citizens must ask three fundamental questions: (1) Do the benefits of living in an era of data mining and social networking outweigh the loss of personal privacy? (2) Should outsiders or next-of-kin be able to know more about a person than the person perhaps knows about oneself? (3) What safeguard strategies are available to companies to ensure cyber and digital security? The purpose of this research paper is to examine how data mining, social network analysis, and data collection techniques have changed the face of society and our perceptions of what personal privacy actually means, and to review the strategies available to companies to safeguard cyber and digital security. The internet has become a veritable treasure trove of information about people, societies, and global trends. The advent of the internet opened doors to new ideas and information sharing; however, this free information frontier may come at a price to personal privacy. In the “Information Age”, people seem to have no qualms about releasing the most intimate details of their personal lives into “cyberspace” for the entire world to see. Or do they? In most instances, people release the most intimate details of their personal lives knowingly, unknowingly, or under a false sense of privacy. In George Orwell’s book “1984” (1950), an all-seeing, all-controlling government ominously named “Big Brother” utilized a band of “Thought Police” to tell its citizens what to think, what to do, and how to act. Free societies, as a whole, fear governing bodies that exercise too much control and have access to every bit of information about their citizens. 
Businesses and governments are using ever more sophisticated Customer Relationship Management (CRM) tools and new intelligence-gathering techniques such as data mining and social network analysis to define their customer base or constituencies. It is easy to paint an ominous picture of data mining and other CRM techniques; however, data mining also has many good uses, such as tracking the spread of infectious diseases, building stronger social networks, enhancing global collaboration, and helping businesses reduce costs, define their target markets, and identify customer likes and dislikes. In light of this information explosion, we must ask three fundamental questions: (1) Do the benefits of living in an era of data mining and social networking outweigh the loss of personal privacy? (2) Should outsiders or next-of-kin be able to know more about a person than the person perhaps knows about oneself? (3) What safeguard strategies are available to companies to ensure cyber and digital security? Let us analyze the arguments for and against data mining and social network analysis to get a better picture of what we are dealing with and how invasive and pervasive data collection techniques really are. We will examine how they have changed the face of society and our perceptions of what personal privacy actually means, and the safeguard measures available to companies to ensure cyber and digital security. Data Mining – The Players: Historically, data mining existed under the label of information technology processing. In the 1990s, a set of fairly established practices was introduced as an information technology for processing large amounts of data for the purpose of gleaning useful and meaningful facts and statistics about a given population (Piatetsky-Shapiro, 2012). Thus, “data mining” is a relatively new term, rooted primarily in statistics, for these established information-processing practices. 
Essentially, data mining uses algorithms to churn through large databases to provide information that can be used by business, science, government, and the public to make all kinds of decisions, including those that affect citizens' personal lives. Government agencies in the United States have been data mining for a long time. The US National Security Agency (NSA) has reportedly tapped into undersea telecoms cables for decades (Brooks and Bajak, 2013). Several decades ago, the general public learned of the then-unknown data mining practices used by the Federal Bureau of Investigation (FBI) under the leadership of J. Edgar Hoover. In recent years, the most publicized, and most concerning, data mining effort was the Defense Advanced Research Projects Agency’s (DARPA) Total Information Awareness (TIA) program, founded shortly after the 9/11 terrorist attacks. This system was specifically designed to create and sift through a database of every citizen’s records, including medical, veterinary, work performance and financial documents, to name a few. This effort brought public scrutiny and ire. The project was disbanded by Congress in 2003 but was probably later moved to the National Security Agency and renamed TopSail (Kelly, 2006). At times, the boundaries between data mining and spying are blurry. In a recent case of unauthorized data mining, also known as spying, it was revealed that the NSA, drawing on data from US companies such as Google and Facebook operating in Brazil, intercepted the communications of the Brazilian President and the government, hacked into an oil company, and spied on Brazilian citizens. As a result, the President of Brazil, Dilma Rousseff, ordered a series of measures aimed at establishing an online and Internet system in Brazil that is independent of US companies, more secure, and reliant only on local Brazilian companies (Brooks and Bajak, 2013). 
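The core idea of "algorithms churning through large databases" can be illustrated with a toy frequent-pattern count, one of the simplest data mining techniques. The transactions and the support threshold below are invented for illustration; real systems run the same computation over millions of records.

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase records; each set is one customer transaction.
transactions = [
    {"bread", "milk"},
    {"bread", "diapers", "beer"},
    {"milk", "diapers", "beer"},
    {"bread", "milk", "diapers", "beer"},
    {"bread", "milk", "diapers"},
]

# Count how often each pair of items appears together across transactions.
pair_counts = Counter()
for basket in transactions:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

# Support = fraction of transactions containing the pair; pairs above a
# chosen threshold become candidate patterns ("people who buy X buy Y").
support = {p: c / len(transactions) for p, c in pair_counts.items()}
frequent = {p: s for p, s in support.items() if s >= 0.6}
print(frequent)
```

The same counting, applied to browsing histories or call records instead of shopping baskets, is what makes the privacy questions above so pointed: the individual records are mundane, but the aggregated patterns are revealing.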
For governments around the world, this case of government data mining emphasizes the importance of privacy and safeguards in the Internet and cyber world. While government agencies have been engaged in data mining for a while, it was much later that entrepreneurs noticed this treasure trove of public information and entered the data mining realm as brokers of information. These private entrepreneurs, as early entrants to the data broker business, found ways to amass and package large amounts of public and private information and sell it to corporations, law enforcement agencies, government, and other entities. There are two main distinctions between private entrepreneurial data mining and government data mining: first, the government agencies were not subjected to public scrutiny when amassing large amounts of personal information, so no one raised the flag; second, the data brokers were not subjected to the oversight laws and regulations that governed the government agencies.
The Glass Ceiling and Women in Management in the Middle East: Myth or Reality?
Dr. Evangelia (Lia) Marinakou, Royal University for Women, Kingdom of Bahrain
Although globalization and equal employment have created opportunities for female managers, they are still underrepresented on the corporate ladder. Gender and gender role stereotypes are persistent in organizations that operate in the Middle East, challenging women’s employment and showing evidence of the glass ceiling in management. This paper explores the position of women in management in the Kingdom of Bahrain, as well as the barriers they face in climbing the career ladder. The findings from semi-structured interviews with 15 female managers suggest that they identify long working hours, stereotypical behavior and gender discrimination as the prevailing barriers to career growth. In addition, society and culture have also been widely identified, including family commitment and balancing work with family. The paper proposes that women who want to lead a successful professional life have found ways to break through the barriers of the invisible glass ceiling through commitment, family support and education. In addition, companies are gradually learning how to create cultures in which expectations and professionalism are not necessarily gender-linked. Although the Middle East experienced steady economic expansion over the last decade (Saddi, Sabbagh, Shediac & Jamjoum, 2012), the Arab Spring has added pressure to unemployment rates, creating many challenges such as low female labor force participation rates, low levels of private sector development, weak public and corporate governance, limited competition, pervasive corruption and bloated public sectors (Tlaiss & Kauser, 2011). Within this context, there have been many changes for Arab women, as women are now entering the workforce and rising to managerial positions. The percentage of Bahraini women working increased from 4.9% in 1971 to 33.5% in 2010 (Supreme Council for Women, 2013). 
Female representation in the Bahraini labor force is estimated at 29.8 per cent, much less than the global estimate of 51.7 per cent, but better than the Middle East average, which is estimated at 25.4 per cent (ILO, 2010). Nevertheless, women are mainly found at lower and middle, rather than senior, management levels (Omair, 2008; Metcalfe, 2008). Although globalization has contributed to an increase in women’s participation in management, the rate of women’s labor market participation in the Middle East is still the lowest in the world (Metcalfe, 2008). Nevertheless, the International Labor Organization (ILO) (2010) states that the participation of women in the labor market is on the rise. According to the World Economic Forum’s (2012) Global Gender Gap Report, progress has been made in increasing women’s education; nevertheless, only 33 per cent of women join the labor force in the region. There is strong evidence of gendered occupational segregation, as women in the Middle East are mainly employed in health, education and social care (Metcalfe, 2008). Research suggests that further attention should be given to women’s values, career aspirations, leadership development and entrepreneurship as more and more women join the labor force (Metcalfe, 2008; Omair, 2008, 2010). Most of the research on women in management in the region remains anecdotal, normative and mainly conceptual (Afiouni, Ruel & Schuler, 2013). There is scarce information available regarding different aspects of human resources management and the position of women in management in the region (Metcalfe, 2008; Budhar & Mellahi, 2007), which has increased the interest of scholars in relevant research. The main topics of interest revolve around how culture, such as Islam and patriarchal norm structures (i.e. Burke & El-Kot, 2011), globalization (i.e. Harrison & Michailova, 2012), and gender equality and diversity (i.e. Syed, Burke & Acar, 2010) affect women at work. 
Others study women’s career patterns and success (i.e. Omair, 2010) and work-life balance (i.e. Burke & El-Kot, 2011). A main concern in studying women in management in the Middle East is to understand what shapes women’s lives in the region, since women are still underrepresented in management there. This underrepresentation of women in senior management has been attributed to what has been described as the “glass ceiling”. Knutson & Schmidgall (1999, p. 64) define the glass ceiling as the “invisible, generally artificial barriers that prevent qualified individuals – in this case, women – from advancing within their organization and reaching their full potential”. The glass ceiling may differ between countries and organizations; however, the way it is managed determines success at the workplace. There is very limited knowledge available on the experiences of women managers in organizations in the Middle East, and there is a paucity of studies (Tlaiss & Kauser, 2011; Metcalfe, 2008). This paper aims to present the barriers women face in management in the Kingdom of Bahrain and how to overcome them, to provide an understanding of how female talent may be retained in the workforce, and to propose ways in which companies can align their human resources practices with their business strategies to overcome these barriers. Interest in the study of women in management has been triggered by the increasing role women are taking in management. However, as already discussed, women are underrepresented in senior managerial positions, not only in the Kingdom of Bahrain but also in other countries. Research shows that this trend is common to many countries and different cultures (Al-Manasra, 2013; Omair, 2008). Management has been considered a career mainly for men (Powell & Graves, 2003), and women managers deal with blocked mobility, discrimination and stereotypes. 
The barriers women face in management and the difficulties they face in advancing their careers have been described as the “glass ceiling” phenomenon (Marinakou, 2011; Eagly & Carli, 2007; Man, Mok, Dimovski & Skerlavaj, 2009). The career journey for women is therefore complex, but, as the concept suggests, these obstacles need not be viewed as discouraging. When referring to barriers women face in reaching high corporate positions, most research papers touch on society’s role in the matter. The largest barrier appears to be society itself and its norms, as seen in research papers written at different points in time over the past few decades. Societal norms including marriage, child-bearing, and certain career expectations are all believed to limit a woman’s ability to progress as a manager and move to higher positions. For example, Metcalfe (2008) explored the relationship between women, management and globalization in the Middle East and showed that women face social and organizational barriers in the labor market. Some argue that “women’s legal status and social positions are worse in Muslim countries, such as Bahrain, than anywhere else” (Moghadam, 2003, p. 3). Women are mainly perceived as wives and mothers, demonstrating gender segregation; hence women must marry and reproduce to earn status in society. Moghadam (2003) suggests that Muslim societies are characterized by high fertility and rapid rates of population growth. Hence, women are seen as different and as not to be employed, which strengthens social barriers to women’s achievement. Islam is not more or less patriarchal than other major religions; however, “the gender configurations that draw heavily from religion and cultural norms govern women’s work, family status and other aspects of their lives” (Moghadam, 2003, p. 5). At the same time, women in the Middle East are stratified by class, ethnicity, education and age. 
There are those who do not need to work and those who work to contribute to the family income. On the one hand, Metcalfe (2008, p. 89) suggests that many private companies are reluctant to employ “women partly due to social norms and partly due to additional costs that may be incurred for maternity provision”. On the other hand, women could fulfil both their professional and marital roles with the help of domestic labor or the extended family network (Al-Manasra, 2013). Similarly, gender may be considered a source of social distinction, as the legal system, educational system, and labor market are sites of reproduction of gender inequality (Metcalfe, 2008). Nevertheless, education may increase women’s aspirations for higher income and better standards of living, and weaken the barriers of tradition, helping more women join the labor force (Al-Manasra, 2013; Omair, 2010). Social changes have contributed to the reduction of sex segregation and have helped women achieve economic independence (Moghadam, 2003). Hence, as Omair (2008, p. 107) claims, women in the region “can no longer be described as scared, inferior, domestic women who hardly leave their houses”.
Copyright 2000-2015. All rights reserved