The Business Review, Cambridge
Vol. 16 * Number 2 * December 2010
The Library of Congress, Washington, DC * ISSN 1553-5827
Most Trusted. Most Cited. Most Read.
All submissions are subject to a double-blind review process
The primary goal of the journal is to provide opportunities for business-related academicians and professionals from various business fields around the globe to publish their papers in one source. The Business Review, Cambridge brings together academicians and professionals from all business-related fields to interact with members inside and outside their own disciplines. The journal provides opportunities for researchers to publish their papers as well as to view the work of others. All submissions are subject to a double-blind peer review process. The Business Review, Cambridge is a refereed academic journal which publishes scientific research findings in its field under ISSN 1553-5827, issued by the Library of Congress, Washington, DC. No manuscript will be accepted without the required format, and all manuscripts should be professionally proofread before submission; a professional proofreading/editing service such as www.editavenue.com may be used. The journal will meet the quality and integrity requirements of applicable accreditation agencies (AACSB, regional) and journal evaluation organizations to ensure that our publications provide our authors with publication venues that are recognized by their institutions for academic advancement and academically qualified status.
The Business Review, Cambridge is published twice a year, in Summer and December. E-mail: firstname.lastname@example.org; Website: BRC. Requests for subscriptions, back issues, and changes of address, as well as advertising inquiries, may be made via our e-mail address. Manuscripts and other materials of an editorial nature should be directed to the Journal's e-mail address above. Address advertising inquiries to the Advertising Manager.
Copyright 2000-2017. All rights reserved.
Who Governs the Internet? International Legal Aspects of IT Governance
Dr. Emilio Collar, Jr., Western Connecticut State University, Danbury, CT
Dr. Roy J. Girasa, Pace University, Pleasantville, NY
The Internet was created in the late 1960s out of the need for the United States military establishment to communicate through a network that would be highly robust and survivable in the event of attack. In 1981 there were fewer than 300 computers linked to the Internet; as of 2008, there were some 1.5 billion users of the Internet. Each entity seeking to connect to the Internet is assigned one or more unique numeric addresses known as IP numbers or addresses. The Domain Name System (DNS) controls the way each component identifies and communicates with the others. Previously, the DNS was controlled by the U.S., having originated there. Inasmuch as the Internet is utilized worldwide, there has been a call for the internationalization of the Internet rather than having the U.S. dominate it. Steps were taken in 1998 in the U.S. to create the Internet Corporation for Assigned Names and Numbers (ICANN), which was to be independent of U.S. government control. ICANN has for the most part been independent; nevertheless, steps were taken under the auspices of the United Nations to have control transferred from the U.S. to U.N. supervision. The U.N. instituted the World Summit on the Information Society, which took place in 2003 in Geneva, Switzerland and in 2005 in Tunis, Tunisia. A result of the summits was the creation of the Internet Governance Forum. The European Union has also engaged in efforts to transform the DNS into one in which the many international actors would play a role. Notwithstanding efforts to transfer ICANN responsibilities to other world bodies, ICANN has instituted reforms which have, in effect, taken cognizance of the complaints, including permitting scripts other than Western European script for Internet use. Its Board of Directors and other major positions are now held by persons from diverse parts of the globe. It appears that U.N. 
and European efforts will not cause ICANN to become extinct but have caused it to become much more responsive to global needs. The paper discusses the creation and history of the Internet, ICANN, the World Summits and the future of Internet governance. The Internet has transformed communication to a degree unimaginable only two decades ago. The number of users of the Internet is difficult to gauge. In 1981, there were fewer than 300 computers linked to the Internet, and by 1989, the number stood at fewer than 90,000 computers. By 1993, over 1,000,000 computers were linked. As of 2008, there appear to be some 1.463 billion users of the Internet, ranging from about 579 million users in Asia (15.3% of the population), 385 million users in Europe (48.1% of the population), over 248 million in North America (73.6% of the population), and almost 42 million in the Middle East (21.3% of the population), to over 51 million in Africa (5.3% of the population) (Internet World Stats, 2010). The Internet was created in the U.S. for military purposes, to enable military personnel to communicate with each other in a robust and safe environment throughout the many areas of the globe where there was a U.S. military presence. As a result, the U.S. government exercised control over it, though in subsequent decades it gradually lessened its influence. The Internet having become a worldwide phenomenon, a call for its internationalization and removal from U.S. dominance has arisen. This paper explores the creation and history of the Internet, discusses the current system, and reviews the proposed and subsequent changes that have de facto made the Internet largely independent of U.S. government influence. The Internet is a giant network that interconnects innumerable smaller groups of linked computer networks. It is, in essence, a network of networks wherein each network links a group of computers. 
Networks may be “closed” networks, i.e., not linked to other computers or networks, or they may be connected to other networks, which are in turn connected to further networks. The linkage of networks permits each computer in any network to communicate with computers on any other network in the system. This global web of linked networks and computers is referred to as the Internet (ACLU v. Reno, 1996). Prior to the Internet, in the 1950s and early 1960s, most communication networks were limited, allowing communication only between the stations on the network, with few connections between networks. One networking method used a central mainframe connected to other terminals by leased lines. In 1962, J.C.R. Licklider, both in a paper written by him and shortly thereafter at the United States Department of Defense’s Advanced Research Projects Agency (ARPA, later DARPA), explored the possibility of connecting separate physical networks to form one logical network. Successor scientists at the RAND Corporation and at MIT developed and implemented packet switching in order to permit the entire network of linked networks to survive a nuclear attack. In 1969, the first ARPANET link was established, linking a number of research institutions in the Western U.S. From this once secret government linkage, the Internet was developed, initially for noncommercial purposes and later to include the multitude of usages which we have today.
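The hierarchical nature of the DNS discussed above can be made concrete with a short sketch. The following Python fragment (function name and example domain are ours, for illustration only; it does not perform real network queries) shows how a domain name decomposes into the chain of zones consulted during resolution, from the root down to the host:

```python
def dns_resolution_order(hostname: str) -> list:
    """Return the DNS zones consulted, root first, when resolving a name.

    Illustrates the hierarchy of the Domain Name System: labels are read
    right to left, each one a delegation from the zone above it
    (root -> top-level domain -> domain -> host).
    """
    labels = hostname.rstrip(".").split(".")
    zones = ["."]  # resolution starts at the root zone
    # Build successively longer suffixes: "org.", "icann.org.", ...
    for i in range(len(labels) - 1, -1, -1):
        zones.append(".".join(labels[i:]) + ".")
    return zones

print(dns_resolution_order("www.icann.org"))
# ['.', 'org.', 'icann.org.', 'www.icann.org.']
```

The numeric IP address that such a name ultimately maps to is what each connected entity is actually assigned; the DNS exists so that users need not remember those numbers.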
Measuring the Soft Side of the Achievement of Assurance-of-Learning Objectives
Dr. Nancy Haskell, Laval University, Quebec, Canada
Dr. Donald Beliveau, Laval University, Quebec, Canada
The international business school accreditation organisation AACSB evaluates business schools seeking accreditation or renewal of accreditation on, among other criteria, Assurance of Learning (AOL) standards. That is, it encourages business schools to define and measure students’ levels of achievement of learning objectives for their program of study, and it then assesses the school’s success in doing so. Easier said than done, the assessment of learning is the subject of lengthy discussions and committee work in numerous AACSB-accredited business schools as well as in those preparing their candidature for accreditation. The present research examines the soft side of measurement (i.e., indirect measurement) by focusing on students’ perceptions of achievement over time during participation in a complex, interactive business simulation in international market entry and development. Results indicate that students clearly perceive improvement in their business skills as defined by program objectives. Furthermore, this achievement is progressive during the simulation experience. Concern for the effectiveness of education in general, and of management education in particular, is the subject of much debate in the face of the imperatives of global competitiveness, economic difficulties, severe budget constraints, and the shrinking population of potential candidates due to demographic shifts. The business community requires competent business graduates with the appropriate knowledge and skills to contribute rapidly to the success of the organizations that hire them, and these organizations have increased their scrutiny of business schools (1) (Michlitsch and Sidle, 2002). How can business schools ensure that their graduates are among the best candidates for the job? Achieving and retaining accreditation is an increasingly popular response. To date, 593 institutions are accredited by the Association to Advance Collegiate Schools of Business (AACSB) (2). 
Almost as many other institutions are members of the association for purposes of observation and eventual application for accreditation, or as partner members (corporate, non-profit, and public sector members) (AACSB, 2010). The AACSB is a leader in the improvement of the quality of management education through the accreditation and periodic renewal of accreditation of its members. Standards for accreditation evolve as a function of the business environment, with consequent pressure on accredited business schools to upgrade their programs, their methods of teaching, and their manner of assessing students’ learning. The standards specify that learning goals and related learning objectives for business degree programs be developed by faculty with scrutiny of the needs and challenges of the business community by means of input from a variety of stakeholders (for example, alumni, students, and employers). In practice, advisory boards are the usual sources of input since they represent various stakeholder groups (AACSB, 2007). Pertinent to this research, the original standards, which required the measurement of the attainment of learning goals and objectives usually related to a set of degree-program courses (Thompson, 2004), were modified in 2003 to require the assessment of students’ achievement of overall degree-program learning goals and objectives (Standards 15, 16, 18, 19, and 21), i.e., focusing on the knowledge and skills that graduates of a specific degree program should possess (Pringle and Michel, 2007); specific course objectives are no longer within the purview of these AACSB standards. Assessment is seen as an exercise in educational improvement that uses feedback from evaluation to improve course development (Angelo and Cross, 1993) and as an effective manner of comparing business schools (Fernandes, 2006). 
In addition, pre-2003 assessment tools were often indirect (for example, surveys of employers and alumni, exit interviews of graduating students asking how well students had attained program objectives or how well they perform on the job, and job placement rates). Under the new standards, accredited schools must use direct measures of students’ achievement of learning goals and objectives (i.e., students must demonstrate their knowledge and skills, for example, in written deliverables such as examinations, individual reports, or individual case analyses, or in behavioural deliverables such as role-playing, competitions, and computer simulations). However, Pringle and Michel (2007) suggest that indirect methods remain essential to true assessment. Their 2007 study of assessment practices in AACSB-accredited business schools revealed the trial-and-error nature of assessment efforts and the heavy demands placed on business schools attempting to develop such direct measures. In their study, they found a continuing use of attitudinal measures by business schools and conclude that although these indirect measures “cannot be used to show that students are meeting learning objectives, these are, in fact, methods of assessment, albeit indirect ones” (p. 207).
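The kind of indirect, attitudinal measurement over time described above can be sketched computationally. The Python fragment below (all scores are invented for illustration and do not come from the study) averages Likert-scale self-assessments collected after each simulation round and checks whether perceived achievement is progressive, i.e., never declines between rounds:

```python
# Hypothetical Likert-scale (1-5) self-assessments of business skills,
# collected after each round of the simulation; data invented for
# illustration only.
rounds = {
    1: [3, 2, 3, 4, 3],
    2: [3, 3, 4, 4, 3],
    3: [4, 4, 4, 5, 4],
}

def round_means(data):
    """Mean perceived-skill score per simulation round, in round order."""
    return {r: sum(scores) / len(scores) for r, scores in sorted(data.items())}

def is_progressive(means):
    """True if perceived achievement never declines between rounds."""
    vals = list(means.values())
    return all(a <= b for a, b in zip(vals, vals[1:]))

means = round_means(rounds)
print(means)                 # {1: 3.0, 2: 3.4, 3: 4.2}
print(is_progressive(means)) # True
```

A real assessment would of course add significance testing and larger samples; the sketch only illustrates the shape of the indirect measure.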
User Modeling in Adaptive Web Design
Dr. Qiyang Chen, Montclair State University, Upper Montclair, NJ
An adaptive Web site accommodates a diverse set of users with a wide variety of background knowledge and interests. Recently some adaptive Web environments have become available. The adaptation can range from a simple automatic selection between different versions of pages to the completely dynamic customization and generation of all pages with adapted hypertext links and contents. This paper describes several issues related to adaptive Web site design, with a particular focus on adaptation based on an implicit user modeling process. These include issues in constructing the user model and creating adaptive page contents and links. It also proposes guidelines for using current Web technology for user modeling and adaptation. Web sites are becoming increasingly popular as information resources. The organization of hypertext links and contents offers users a great deal of navigational freedom. However, this freedom makes it impossible for both designers and users to anticipate all possible navigation paths. Very often a user finds that links are either irrelevant to his/her current task or beyond his/her comprehension. It has been acknowledged that a user-model-based adaptive Web site may help users navigate in a more cooperative way by adapting to users’ background knowledge and task-related characteristics. During the past decade different types of Web sites were built with a certain degree of adaptation. The purpose of an adaptive Web site is to provide relevant information that fits users’ interests and comprehension by presenting an appropriate link structure or contents. The styles of adaptation fall into two categories. Explicit adaptation: the Web site solicits a user’s task-related information explicitly, by dialog or questionnaire, to create a user profile. The system then forms a set of pages, or a set of contents to be embedded into pages, for the selected profile. This type of adaptation is often impractical, as it is quite intrusive. 
Implicit adaptation: the Web site monitors a user's task-related behavior and provides responses accordingly. The evolution of the user's preferences and knowledge can be partly inferred from the user’s browsing behavior, such as link selection and the sequence of selections. Sometimes the system may prompt the user for input to get a more accurate impression of the user's task-related goals or preferences. This paper describes several issues related to user-model-based adaptive Web site design, with a particular focus on implicit adaptation. It covers the issues of constructing the user model and creating adaptive page contents and links. It also discusses the use of current Web technology for user modeling and adaptation. User modeling is the process of creating profiles of a user’s task-related characteristics. Ideally, a user model should include two profiles of user characteristics: (1) a profile of tasks that the user may perform, including the user’s browsing strategy, goals, and ways of achieving those goals; and (2) a profile of the user’s domain knowledge and preferences. For a system that supports implicit adaptation, the construction of these profiles should be based on the trace of the user’s browsing activities with regard to three levels of Web content. The fragment level: fragments are considered atomic units as far as the Web site is concerned. A fragment can be a subject, a paragraph of text, an image, a video clip or any type of link. Fragments may be static (stored) units of text or may be generated by an application-specific piece of software (such as a natural language generation module). The page level: a page is formed from fragments. In the Web site, every page is a linear sequence of static fragments, and these fragments are conditionally included. The domain level: a domain can also be described in terms of high-level concepts. Relationships between concepts can be used to suggest desirable navigation paths. 
Since this description is at a high level, the navigation paths do not necessarily translate directly into hypertext links between pages. Each high-level link to a concept must be translated, or resolved, to an actual link to a Web page. Some concepts may be part of a composite concept. The composite concept hierarchy must be a directed acyclic graph, meaning that no concept may contain itself either directly or indirectly. The results of inferring user profiles based on a user’s browsing activities form a user model that may help the adaptive application customize the contents of Web pages. However, the following issues make the inference more difficult. The stateless nature of HTTP allows the HTTP server to receive a request, process it, and then forget about the client who made the request. No attempt is made to conduct a continuing session with a given client. This stateless characteristic poses no problem for Web applications that serve static content, but it is a problem for adaptive applications because the user modeling process needs a session to complete a profile. Log files on the server side or cookies on the client side may help to collect information about the context of user requests, but this brings up another issue: the hypertext nature of Web pages allows a user to click on any link that currently interests him/her, which may turn out to be irrelevant. Furthermore, users’ information-seeking behavior may yield a great deal of uncertain and ambiguous information. It is very challenging for an adaptive system to mine consistent information from system log files that may be full of a user’s arbitrary requests. There is a great deal of uncertainty in the trace of a user’s Web browsing activities. Sources of uncertainty include (but are not limited to):
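The requirement that the composite concept hierarchy be a directed acyclic graph can be enforced with a standard cycle-detection pass. The following Python sketch (names and data structures are ours, not the paper's) uses three-state depth-first search: a back edge to a concept still on the current path means some concept contains itself, directly or indirectly:

```python
def is_acyclic(contains):
    """Check that a composite-concept hierarchy is a directed acyclic graph,
    i.e. no concept may contain itself either directly or indirectly.

    `contains` maps each concept name to the concepts it is composed of.
    """
    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / on current path / finished
    state = {c: WHITE for c in contains}

    def visit(c):
        state[c] = GRAY
        for sub in contains.get(c, []):
            if state.get(sub, WHITE) == GRAY:   # back edge: c contains itself
                return False
            if state.get(sub, WHITE) == WHITE and not visit(sub):
                return False
        state[c] = BLACK
        return True

    return all(visit(c) for c in contains if state[c] == WHITE)

# A valid hierarchy and an invalid (self-containing) one, both hypothetical:
print(is_acyclic({"course": ["module"], "module": ["page"], "page": []}))  # True
print(is_acyclic({"a": ["b"], "b": ["a"]}))                                # False
```

A site generator would run such a check when the concept hierarchy is authored, before any high-level links are resolved to pages.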
The Relationship Between Corporate Governance and Financial Performance: An Empirical Study of Canadian Firms
Dr. Peter A. Stanwick, Auburn University, Auburn, AL
Dr. Sarah D. Stanwick, Auburn University, Auburn, AL
The focus of this paper is to examine whether good corporate governance yields higher financial performance than poor corporate governance for Canadian firms. Using the Best and Worst Boards of Directors rankings published by Canadian Business in 2007, the results showed that overall board performance does impact firm performance. The results showed that firms with a high level of accountability of the board of directors had superior financial performance. The results also showed a significant inverse relationship between board independence and financial performance. The results demonstrated that corporate governance is critical to the ability of the firm to enhance its financial position. In addition, the results showed that the board must be accountable for its actions to ensure that the firm is able to achieve a strong financial performance. Furthermore, the results showed that boards dominated by insider board members yielded superior performance by the firm. With the ethical scandals of the past two decades, the role of corporate governance has been gaining increased interest in academic research. While initially established as a legal requirement for incorporation, corporate governance has become a critical link between firms and their stakeholders. Vinten (1998) states that corporate governance is needed to protect not only the interests of the stockholders but also the interests of all stakeholders. Corporate governance facilitates the ability to secure confidence not only for stockholders but also for other stakeholders such as customers, suppliers, employees and the government in ensuring that firms are accountable for their actions. The dominant form of corporate governance for these firms is the board of directors. The purpose of this study is to examine the relationship between the performance of the board of directors and the financial performance of Canadian firms. 
A discussion of previous literature on (1) the role of the board of directors, (2) the relationship between corporate governance and financial performance, (3) the relationship between the board of directors and financial performance, and (4) the relationship between board-of-directors components and firm performance is provided. The study then continues with a discussion of the empirical results. It concludes with a discussion of the limitations of the study and suggestions for future research in this important area. The role of the board of directors has been examined extensively in past research as a basis for understanding the specific roles that the board assumes when aiding a firm. Some previous studies have shown the board to be ineffective in contributing any significant value to the firm (Mace, 1971; Vance, 1983; Wolfson, 1984). The conclusions drawn from these studies show that in the past the board existed primarily as a “rubber stamp” body (Mace, 1971) which accepted at face value what was presented to it by management. The role of the board of directors has been separated into three different categories: (1) legal responsibilities, (2) resource dependence responsibilities, and (3) agency theory responsibilities. The legal responsibilities date from the introduction of publicly traded companies in the nineteenth century (Vinten, 1998). As part of a country’s laws of incorporation, the board of directors fulfills a requirement to represent the legal rights of the stockholders. It is the fiduciary responsibility of the board to ensure that stockholders’ interests are represented within the company (Williamson, 1964; Molz, 1988; Bainbridge, 1993; Cieri, Sullivan & Lenox, 1994). The legal responsibilities of the board include the selection and evaluation of the Chief Executive Officer and the evaluation of the performance of the firm. 
It is under this premise that the board is responsible for monitoring and controlling management as needed to ensure that the interests of the stockholders are protected (Budnitz, 1990; Miller, 1993). This responsibility to ensure that the interests of the stockholders are protected is extended in the agency theory approach to the board (Jensen & Meckling, 1976; Fama & Jensen, 1983; Baysinger & Butler, 1985; Kosnik, 1987; Eisenhardt, 1989; Gilson & Kraakman, 1991). Under agency theory, the board of directors protects the stockholders’ interests by ensuring that the decisions made within the firm benefit the stockholders and not the self-interests of top-level managers. Agency theory is based on the relationship between management and the stockholders. The owners/stockholders of the company need to have a mechanism in place to control and monitor the behavior of the agents/managers (Alchian & Demsetz, 1972; Fama & Jensen, 1983). As a result, the board becomes the guardian of the interests of the owners. Therefore, it is the board’s responsibility to ensure that the decisions made within the firm will maximize the stockholders’ wealth (Mizruchi, 1983). In an overlap with some aspects of the legal responsibilities, the board’s responsibilities pertaining to agency theory include decisions related to the selection, evaluation and compensation of the Chief Executive Officer and the evaluation of the actions of managers to ensure maximization of stockholder wealth and overall firm performance (Fama & Jensen, 1983).
Concentration in the Cable Television Industry from 1996 to 2008
Dr. Johannes Snyman, Metropolitan State College of Denver, Denver, CO
This article reviews changes in the market concentration of the cable television industry from 1996 to 2008. It first tracks the historical trend in market concentration from 1965 to 1995 and then proceeds with an analysis of the two most popular market concentration ratios, the CR4 and the CR8, as well as the Herfindahl-Hirschman Index (HHI), the government’s preferred measure of market concentration. The CR4, CR8 and HHI increased significantly from 1996 to 2008. In 1995, the industry was a moderately concentrated oligopoly. Merger and acquisition activity began to increase significantly in the mid-1990s, driven by the passage of the Telecommunications Act of 1996 and the development of digital technology. By 2008, the industry had become a highly concentrated oligopoly. However, Comcast, the number-one-ranked cable operator, won a significant lawsuit in 2009 over the FCC’s rule that placed an upper limit on the growth of any cable operator. The trend toward tighter market concentration is predicted to continue. Concentration of ownership in the cable television industry has received the attention of government, media consumers, scholars and numerous consumer activist groups since the industry’s inception in the early 1950s. The primary reason for government interest in the industry has been a belief that the industry exhibits monopolistic characteristics, due to high fixed costs that provide a natural barrier to entry, and therefore needs to be regulated (Ford and Jackson, 1997). Media consumers have always had an interest in the industry but have shown increased interest during the 1990s and 2000s due to significant industry consolidation resulting from the Telecommunications Act of 1996. 
A dramatic increase in cable prices, restriction of consumer choice to a small number and variety of channels, low service quality and a refusal by cable operators to carry the networks of unaffiliated providers piqued the interest of media consumers (Consumer Federation of America, 2004). Some scholars (Bagdikian, 2000; Schiller, 1989; McChesney, 2004) have also proposed that an industry dominated by a few large corporations will suffer from problems of ownership diversity, economic efficiency, social control over news, information and ideology, pricing policy and service quality. Several public interest and privacy groups have also focused on rising prices and service quality, and recently lobbied Congress to investigate cable operators’ use of a new technology called “Deep Packet Inspection” that discloses consumers’ private and personal Internet activity (Consumer Federation of America, 2008). Despite these concerns, the demand for cable television and related services has remained moderately strong throughout the history of the industry’s existence (Parsons, 2003). Over 15 years ago, the cable industry entered a period of extensive volatility and change. In the early 1990s technological innovation, namely the hybrid fiber-coaxial (HFC) cable that combines optical fibers and coaxial cables (now called broadband) and the first high-speed, asymmetrical cable modem, changed how cable companies sent signals to subscribers. In the late 1990s, digital technology was introduced to the U.S. market by Motorola. With this new digital technology, cable and telephone companies had the ability, for the first time in history, to compete directly with one another. The new technology made the analog phone and television set obsolete. Digital technology also allowed cable companies to provide high-speed Internet, digital television, digital phone service, and HDTV. 
After carefully observing these events, and through much lobbying by cable operators, Congress became convinced that in order to promote more competition between cable operators and telephone companies, the barriers that had kept them apart for years needed to be eliminated. Congress passed the Telecommunications Act of 1996 with an implementation date of March 1998. Cable operators, driven by the belief that only the largest companies would survive the combined impact of the new digital technology and deregulation (Parsons and Frieden, 1998), took advantage of the benefits of deregulation, expanded their services and engaged in aggressive acquisition activity. In light of these dramatic changes in the cable television industry during the 1990s and 2000s, the purpose of this study is to examine ownership concentration in the industry. The last study (Chan-Olmsted, 1996) to include all the popular measures of concentration started in 1977 and ended in 1995. A more recent study (Parsons, 2003) started in 1962 and ended in 2000 but included only two of the popular measures. The purpose of this study is therefore to update and expand both of these studies. The trend in market concentration from the early 1960s to the middle 1990s will be tracked first and then compared to the trend from 1996 to 2008. The type of competition in the industry at the end of 2008 will be investigated, and the possibility of further increases in concentration will conclude the study. Two measures are commonly used by the government, economists, industry analysts and scholars to determine the level of concentration in an industry: the concentration ratio and the Herfindahl-Hirschman Index (HHI). The concentration ratio is expressed as CRx, where x represents the number of leading firms in an industry whose market shares have been added together. 
Although any number of leading firms can be used in the calculation of the CRx, the CR4 and CR8 are the most frequently used concentration ratios. A CR4 of 75% implies more monopoly power under the control of the top four firms than a smaller CR4 and is therefore a useful measure of market concentration. However, the U.S. Department of Justice (U.S. Department of Justice, 1982), the Federal Trade Commission and state attorneys general prefer the Herfindahl-Hirschman Index, which has been in use since 1982. The HHI of an industry is calculated by squaring and summing the percentage market shares of the competing firms in the industry, and is used for purposes such as evaluating mergers under the antitrust laws.
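The two measures described above are simple to compute. The Python sketch below (the market shares are hypothetical, chosen only to illustrate the arithmetic) implements both: CRx sums the percentage shares of the top x firms, and the HHI squares and sums every firm's percentage share:

```python
def concentration_ratio(shares, x):
    """CRx: combined market share (in percent) of the top x firms."""
    return sum(sorted(shares, reverse=True)[:x])

def hhi(shares):
    """Herfindahl-Hirschman Index: sum of squared percentage market shares."""
    return sum(s ** 2 for s in shares)

# Hypothetical industry with five firms holding these percentage shares:
shares = [30, 25, 20, 15, 10]
print(concentration_ratio(shares, 4))  # CR4 = 30 + 25 + 20 + 15 = 90
print(hhi(shares))                     # 900 + 625 + 400 + 225 + 100 = 2250
```

Under the Department of Justice's 1982 merger guidelines, an HHI below 1,000 indicated an unconcentrated market, 1,000 to 1,800 a moderately concentrated market, and above 1,800 a highly concentrated market, so the hypothetical industry above would count as highly concentrated.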
Similarities and Differences in the Strategic Orientation, Innovation Patterns and Performance of SMEs and Large Companies
Dr. Kamalesh Kumar, University of Michigan-Dearborn, Dearborn, MI
This study examined the similarities and differences in the strategic orientation and innovation patterns of SMEs and large companies and investigated their implications for company performance. Data collected over a two-year period were analyzed to determine the strategic orientations of SMEs and large firms in terms of the Miles and Snow typology. Results showed that while large firms operated with a “prospector” orientation, the vast majority of SMEs possessed a “defender” or “reactor” orientation. Results also showed that, while in general SMEs had taken a defensive posture, introducing products that involve low novelty of innovation, a small number of SMEs were able to innovate successfully in all product categories. An ex post facto investigation revealed that these firms had, in effect, followed an "open innovation model" (Chesbrough, 2003) that involves the use of external actors and sources to help them achieve and sustain innovation. Implications of the findings for SMEs' competitive strategy and for future research are discussed. A survey of the strategy literature provides rather unequivocal evidence that a company’s strategic orientation plays a major role in its innovativeness and that innovation is a key driver of competitiveness and company performance. But there appears to be a relative lack of research examining the similarities and differences in the strategic orientation, innovation patterns and performance of small (companies with fewer than 250 employees) and medium (companies with fewer than 500 employees) enterprises (SMEs) and large companies within a single industry (Laforet, 2008; O’Regan and Ghobadian, 2005). Differences in the strategic orientation and innovativeness of SMEs and large companies become particularly relevant when these companies are operating in a dynamic market, because the capability to adapt to changes in the market can have a major effect on the profitability and even the survival of companies. 
The purpose of the present study is to examine the similarities and differences in the strategic orientation and innovation patterns of SMEs and large companies within the same industry and to investigate their implications for company performance. Given the problem of balancing the benefits and costs of adaptability, and the fact that SMEs are known to approach changes in the industry environment in a tactical rather than strategic way (Laforet, 2008), it is possible that at one extreme some SMEs may create a strategic orientation aimed at adapting to market changes, but at a significant cost, while others may focus internally on a narrowly defined product-market, with the accompanying risk of failure to adapt to market changes and the prospect of declining sales and profitability. The findings of this study will make contributions to both theory and practice. From a theoretical perspective, the findings should fill a gap that currently exists in the literature by clarifying the similarities and differences in the strategic orientation, innovation pattern and performance of SMEs and large companies in a dynamic industry environment. The results will also provide some insight to managers of new food product development concerned about the low rate of innovation and high rate of failure of new food products (Boesso, Davcik and Favotto, 2009). This study is based upon sales data collected every two weeks over a two-year period, relating to yogurt products introduced in the past five years in Italy. Recent product innovation in the Italian yogurt market is characterized by increased emphasis on the health benefits of the product and the introduction of new kinds of yogurts that offer health benefits reaching beyond basic nutrition. Nearly all the competing companies have come up with products that involve various levels of novelty. 
It is common in studies of innovation to examine the differences in innovations based upon the degree of novelty. A variety of empirical studies have also shown that the level of novelty of an innovation strongly influences company performance (Garcia and Calantone, 2002). Determining the level of novelty associated with new product introductions in yogurt is particularly important given this study’s focus on determining a company’s strategic orientation and the relationship between innovation patterns and company performance. Following the guidelines provided by ACNielsen and other industry experts, the yogurt products introduced during the past five years in the Italian market were classified into four distinguishable categories (ACNielsen, 2005). In the first group are the “Natural Wellness” products, which focus on reduced harmful ingredients (such as fat or sodium) and/or highlight healthful components (such as vitamins and minerals). Most of the products offered in this category largely involve redevelopment of old products to create new products, together with better labeling and highlighting of the health benefits. In the second category are products labeled as “Organic”. These products claim not to contain certain kinds of ingredients (such as growth hormones, antibiotics, genetically modified products, etc.), and thus meet organic product standards. Once again, the level of innovation novelty associated with such products is rather low. Products classified as “Natural Functional” (the third category) are enhanced or fortified with added supplements such as vitamins or acidophilus cultures/probiotics and claim to promote health benefits. These products are based on general research and knowledge and involve reformulation of existing ingredients and manufacturing processes. 
Finally, products in the “Clinical Functional” category make claims based upon active ingredients over which the company may have proprietary rights; the benefits of these ingredients have been tested either in the company’s own laboratories or by external research institutions in independent clinical studies.
Some Solutions for the Equity Premium and Volatility Puzzles
Dr. Jinlu Li, Shawnee State University, Portsmouth, Ohio
This is a short version of my original paper on the solutions of the equity premium and volatility puzzles. In this paper, I adopt an economic equilibrium model utilizing the framework introduced by Mehra and Prescott (1985) when they presented the equity premium puzzle. In the long run and with respect to stationary probabilities, this model produces results that match the sample values derived from the U.S. economy between 1889 and 1978, as illustrated by the studies performed by Grossman and Shiller (1981): the expected average, standard deviation, and first-order serial correlation of the growth rate of per capita real consumption, and the expected returns and standard deviations of equity, the risk-free security, and the risk premium for equity. The model therefore solves the equity premium and volatility puzzles. I also explore the reasons why the equity premium puzzle arose. In 1981, Grossman and Shiller studied the U.S. economy over the period 1889 through 1978, providing the average, standard deviation, and first-order serial correlation of the growth rate of per capita real consumption and the average returns and standard deviations of equity, risk-free securities, and the risk premium on equity for this sample. Mehra and Prescott (1985) published a paper entitled “The Equity Premium: A Puzzle”, in which they formulated an economic equilibrium model by employing a variation of Lucas’ pure exchange model under the assumption that the growth rate of the endowment follows a Markov process. In that paper, they selected a case using two states of growth rates with a special symmetric transition matrix for the Markov process. 
From this special model, after matching the average, standard deviation, and first-order serial correlation of the growth rate of per capita consumption from their model to the sample, they discovered that the average returns on equity, the risk-free security, and the risk premium from the model did not match the respective actual values from the sample. The differences, which were significantly large, formed the equity premium puzzle. It is apparently impossible for their model to match the standard deviations of equity, the risk-free security, and the risk premium to the respective values from the sample; therefore it is impossible to match the volatility, where the volatility of a financial instrument refers to the standard deviation of the returns of that instrument within a specific time horizon. I believe that the general model with n states for the growth rate introduced in Mehra and Prescott’s paper is efficient enough to match the sample data of an economy, including the U.S. economy over the period 1889 through 1978, if the states and their Markov transition probabilities are appropriately chosen. The reason the puzzle arose is that they considered a special case of this model with two states symmetric about the average gross growth rate, following a Markov process with a symmetric transition matrix. In this paper, I examine their model and techniques in order to gain a deeper understanding of the causes of the puzzle, and I build a modified model by employing more states and creating more efficient techniques, so that the modified model and techniques perfectly reconcile theory and observation and provide solutions for resolving the equity premium and volatility puzzle. As a result, the equity premium puzzle is, of course, also resolved. 
In this paper, I choose a general three-state model and a special four-state model (in the original paper), which are different from the model used by Rietz (1988). Throughout, I adopt the notation and terminology of Mehra and Prescott. This paper refers to the incompatibility of the standard deviations of equity, the risk-free security, and the risk premium on equity between the model and the sample as the equity premium and volatility puzzle. The volatility of a financial instrument refers to the standard deviation of the returns of that instrument within a specific time horizon. This equity premium and volatility puzzle must be distinguished from the well-known volatility puzzle, which relates to the volatility and average returns of some financial instruments in a given period of time (see Chabi-Yo, Merton). A solution of the equity premium and volatility puzzle is an economic model in which the first and second moments of the growth rate of per capita consumption and of the returns on equity, the risk-free security, and the risk premium match the respective actual values from the sample. Since the equity premium puzzle was presented in 1985, many papers have been published to explain or to resolve it (see references). To my knowledge, there is no published literature that attempts to solve the equity premium and volatility puzzle. In this paper, I apply the economic equilibrium model developed by Mehra and Prescott (1985) and simulation techniques to construct two types of modified economic models: a three-state type and a four-state type. For each type, I claim that there may be infinitely many different models matching the average, standard deviation, and first-order serial correlation of the growth rate of per capita consumption, and the expected returns and standard deviations on equity, the risk-free security, and the risk premium, to the respective values of the sample. 
These matches are exact rather than estimates. For each type, I provide one solution in full detail to show the perfect matches and to demonstrate that all conditions stated by Mehra and Prescott are satisfied. I also provide further solutions for each type without details. These solutions are exact mathematical solutions of the equity premium and volatility puzzle in the sense of data matching. In each solution, the parameters for the states and their transition probabilities may not satisfy some economists seeking to explain their economies, but this paper provides the techniques to solve the puzzle. I believe that with a supercomputer and more states, one can obtain solutions satisfying economists’ various requirements.
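The mechanics of the Mehra and Prescott framework discussed above can be sketched numerically. The following illustrative Python snippet is not the paper's own code; it prices equity and the risk-free security for a Markov consumption-growth process with CRRA utility, using the standard two-state symmetric calibration from Mehra and Prescott (1985), and shows why that special case produces only a small equity premium:

```python
import numpy as np

def mehra_prescott(lams, P, beta=0.99, alpha=2.0):
    """Expected equity return, risk-free return, and equity premium for a
    Markov consumption-growth process with CRRA(alpha) utility."""
    lams, P = np.asarray(lams, float), np.asarray(P, float)
    n = len(lams)
    # Stationary distribution: left eigenvector of P for eigenvalue 1
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    pi /= pi.sum()
    # Price-dividend ratios solve w_i = beta * sum_j P_ij lam_j^(1-alpha) (w_j + 1)
    A = beta * P * lams ** (1 - alpha)            # A_ij
    w = np.linalg.solve(np.eye(n) - A, A @ np.ones(n))
    # Conditional equity returns r_ij = lam_j (w_j + 1) / w_i - 1
    R = lams * (w + 1) / w[:, None] - 1
    re = pi @ (P * R).sum(axis=1)                 # expected equity return
    # Risk-free price p_i = beta * sum_j P_ij lam_j^(-alpha)
    pf = beta * (P * lams ** (-alpha)) @ np.ones(n)
    rf = pi @ (1 / pf - 1)                        # expected risk-free return
    return re, rf, re - rf

# Two-state symmetric calibration: mean growth 1.8%, std. dev. 3.6%, phi = 0.43
mu, delta, phi = 0.018, 0.036, 0.43
lams = [1 + mu + delta, 1 + mu - delta]
P = [[phi, 1 - phi], [1 - phi, phi]]
re, rf, premium = mehra_prescott(lams, P)
```

With a moderate risk-aversion coefficient such as alpha = 2, the model's premium comes out well under one percentage point, versus roughly six percent in the 1889-1978 sample; that gap is the puzzle. The paper's approach replaces the two symmetric states with three- and four-state chains whose parameters are chosen so that the second moments also match.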
Partial Privatization in Mixed Duopoly
Dr. Najiba Benabess, Norwich University, Northfield, VT
This paper investigates the optimal partial privatization of a Stackelberg leader in a mixed duopoly. It builds on Matsumura’s Cournot duopoly model (1998) by comparing the Cournot and Stackelberg models. In Cournot, partial government ownership is optimal in a duopoly. In Stackelberg, partial privatization can be optimal, but only when the government weighs consumer surplus more than profit in the welfare function; indeed, for large weights on consumer surplus, the optimal extent of privatization in Stackelberg actually increases. This paper examines the optimal partial privatization of a Stackelberg leader in a mixed oligopoly. Recently, the literature on “mixed markets” (markets involving both private and public enterprises) has grown enormously (1). This reflects the fact that in many countries public firms continue to compete with private firms and that policy makers are often interested in the consequences of privatizing those public firms. The typical modeling assumption is that the public firm maximizes social welfare while private firms maximize their own profits. The influence of privatization often depends on the timing assumptions of the model. DeFraja and Delbono (1989) showed that welfare might be higher when a public firm is a profit-maximizer rather than a welfare-maximizer in a Cournot competition model. While this captures the market influence of privatization, they consciously ignore the effect privatization might have on improving productivity or lowering costs. Thus their contribution is to establish that the privatization of a public firm can be beneficial even when it does not reduce production costs. DeFraja (1991) examined the productivity consequences of privatization, showing that the presence of even a relatively inefficient public firm in an oligopoly may improve the overall efficiency of the industry. 
In addition, the literature has examined the effect of privatization on social welfare in a wide variety of Stackelberg models, showing that privatization of a public Stackelberg leader typically decreases welfare (2). However, DeFraja and Delbono (1989) and others do not consider the possibility of partial privatization (1). Matsumura (1998) considered the possibility of partial privatization in a Cournot duopoly and showed that it is optimal for the government to sell part, but not all, of its shares in the public firm. He found that full nationalization, where the government holds all shares in the firm, is never optimal unless the public firm is a monopolist. He also suggested that full privatization, where the government sells all its shares in a public firm, is not optimal if the public firm has the same cost function as the private firm. Lee and Hwang (2003) show that the main result of Matsumura (1998) may not apply if managerial inefficiency of the public firm is introduced. They showed that in a monopoly model (and also in a mixed duopoly market) partial privatization can be welfare-improving if there exists a tradeoff between allocative efficiency and production efficiency (eliminating managerial waste). In addition, Sun and Hang (2005) examined the optimal state share in a partially state-owned enterprise (SOE) from the perspective of a social planner and of a transition-economy government that is under pressure to provide employment. They showed that when the SOE is cost-inefficient relative to the private firm, the effects of the employment burden on the optimal state share differ between the government and the social planner. The government’s optimal response to increasing employment pressure is to raise the state share, while the socially efficient solution implies a reduction in the state share. 
They also showed that as tariffs fall and foreign competition intensifies, the social planner always wants to reduce the state share, but the government does not want to do so if the SOE is sufficiently inefficient. This paper builds on the partial privatization literature by determining the optimal extent of privatization for a public Stackelberg leader. I assume that a mixed-ownership firm maximizes a weighted average of the payoff of the government and its own profits, with the weight reflecting the proportion of shares held by the government. This paper is motivated by the fact that in many actual cases the government holds only a proportion of shares in public firms. There are many firms with a mixture of private and public ownership. Table 1 shows the extent of government ownership in selected international airlines in 2000. Indeed, there are more than 40 airlines with partial government ownership (Doganis 2001). Many of the partially privatized airlines remain national market leaders, and the likelihood that a partially privatized firm remains a leader motivates the need to examine the Stackelberg model. Since firms with mixed ownership must respect the interests of private shareholders, they cannot be pure welfare-maximizers. At the same time, they must respect the interests of the government, so they cannot be pure profit-maximizers. By controlling the shares it holds, the government may be able to indirectly control the activities of the privatized firm. In such situations it is important to consider the optimal proportion of shares that the government should hold. Moreover, examples such as Volkswagen, with its partial ownership by the German state of Lower Saxony, show that such reasoning applies even to major manufacturing firms competing in a global market.
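The weighted-objective setup described above can be illustrated numerically. The sketch below is not the paper's model: it assumes linear demand p = a - Q and identical constant marginal costs c for both firms (parameter values a = 10, c = 2 are arbitrary), and grid-searches the mixed-ownership leader's output given the private follower's best response:

```python
import numpy as np

def follower_best_response(q0, a, c):
    # Private follower maximizes (a - q0 - q1 - c) * q1 given the leader's q0
    return max((a - c - q0) / 2.0, 0.0)

def realized_welfare(theta, a=10.0, c=2.0, n=2001):
    """Welfare at the Stackelberg equilibrium when the leader maximizes
    theta * welfare + (1 - theta) * own profit (theta = government share)."""
    best_obj, best_w = -np.inf, None
    for q0 in np.linspace(0.0, a - c, n):
        q1 = follower_best_response(q0, a, c)
        p = a - q0 - q1
        pi0, pi1 = (p - c) * q0, (p - c) * q1
        cs = (q0 + q1) ** 2 / 2.0          # consumer surplus under p = a - Q
        w = cs + pi0 + pi1                 # standard welfare: CS + all profits
        obj = theta * w + (1 - theta) * pi0
        if obj > best_obj:
            best_obj, best_w = obj, w
    return best_w

# Government chooses its share theta to maximize realized welfare
thetas = np.linspace(0.0, 1.0, 101)
welfares = [realized_welfare(t) for t in thetas]
theta_star = thetas[int(np.argmax(welfares))]
```

Under these particular assumptions the search returns theta_star = 1: with identical costs and a standard welfare function, the fully public leader already attains the first-best outcome p = c, so any case for partial privatization of a Stackelberg leader must rest on modified welfare weights or cost asymmetries, which are the situations the paper analyzes.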
An Empirical Investigation of the Relation Between Emotional Intelligence and Job Satisfaction in the Lebanese Service Industry
Raghid Al Hajj, Lebanese American University, Beirut, Lebanon
Dr. Grace K. Dagher, Lebanese American University, Beirut, Lebanon
The escalating popularity of the Emotional Intelligence concept, along with the increase in life and work outcomes that it is supposed to affect, has led researchers to scientifically investigate its relation to several of these outcomes. The purpose of this study is to investigate the role of emotional intelligence as a determining factor in several facets of job satisfaction among employees within the service sector. The results of this study provide insights to both practitioners and academicians on how employees’ attitudes can be influenced by non-cognitive factors. Future studies and limitations of the study are discussed. Recently the concept of emotional intelligence (EI) has received a great deal of interest and fuelled much controversy. If anything, the interest in EI seems to be on the rise, and the unusual speed at which research on this subject is growing may be attributed to the fact that scores on EI measures are associated with a number of real-life outcomes (Grewal and Salovey, 2005). EI has been linked to a variety of outcomes, from health to career success and life satisfaction (Cartwright and Pappas, 2008). In an organizational setting, the interest in EI is a result of the desire to interpret the differences in occupational success between people which cannot be explained by existing cognitive measures such as IQ alone (Zeidner et al., 2004). Several empirical studies have shown that IQ and other tests of cognitive ability account for no more than 25% of the variance in work performance outcomes (Cherniss et al., 2006; Cartwright and Pappas, 2008); other studies put the figure even lower, at around 10% (Cartwright and Pappas, 2008). The low predictive power of cognitive tests and other problems accompanying them (e.g. group differences) sparked interest in non-cognitive predictors for personnel selection and attitudes. 
Personality tests proved beneficial, but when used in isolation their validity was even lower than that of cognitive ability tests (Van Rooy et al., 2005). EI seemed to be a powerful tool, as a growing body of recent research based on different models of EI suggests an incremental validity of EI over both traditional cognitive intelligence tests and personality tests (Cartwright and Pappas, 2008). The use of EI measures for placement and selection is gaining momentum as companies realise the potential value of EI skills. The American Society for Training and Development estimates that four out of five companies are trying to improve productivity, customer service and manager performance by increasing their employees’ EI (Zeidner et al., 2004; Cartwright and Pappas, 2008). The purpose of this study is to investigate the role of emotional intelligence as a determining factor in several facets of job satisfaction among employees within the service sector in Lebanon. The objectives can be summarized as: providing a comprehensive study of emotional intelligence and job satisfaction as concepts, and examining the relationship between employee emotional intelligence and factors of employee job satisfaction. There have been many definitions of EI, each based on a different understanding of what the concept is. In 1990, Salovey and Mayer defined EI as “the subset of social intelligence that involves the ability to monitor one’s own and others’ feelings and emotions, to discriminate among them, and to use this information to guide one’s thinking and action” (Salovey and Mayer, 1990, p. 189). Later, Mayer and Salovey introduced a refined definition of EI as “the capacity to reason about emotions, and of emotions to enhance thinking. 
It includes the abilities to accurately perceive emotions, to access and generate emotions so as to assist thought, to understand emotions and emotional knowledge, and to reflectively regulate emotions so as to promote emotional and intellectual growth” (Mayer and Salovey, 1997, p. 10). This later definition is perhaps the most widely accepted scientific definition of EI and perhaps the most workable contemporary one (Zeidner et al., 2004). Another definition of EI was presented by Goleman in his bestselling book “Emotional Intelligence”, in which he defines EI as consisting of “abilities such as being able to motivate oneself and persist in the face of frustration, to control impulses and delay gratification, to regulate one’s moods and keep distress from swamping the ability to think, empathise and to hope” (Goleman, 1995, p. 34). As part of his definition, Goleman describes over 25 learned competencies, skills and abilities which constitute EI (Cartwright and Pappas, 2008). Further refinements to this definition broadened the concept to include a wide range of personality characteristics and behavioural competencies, suggesting that Goleman defines EI as any desirable feature of personal character not represented by cognitive intelligence (Zeidner et al., 2004).
Influence of Country Culture on Bankruptcy and Insolvency Legal Reform Management
Dr. Enoch K. Beraho, South Carolina State University
Different countries employ different bankruptcy and insolvency approaches when trying to solve their economic problems. The purpose of this paper is to study the influence that country culture and legal systems have on bankruptcy management in different countries. Bankruptcy data and information pertaining to the legal practices of different countries were obtained from those countries’ websites and various published documents. The data were examined and compared, noting differences in legal structures among the countries studied. It was found that, whereas the aim of bankruptcy laws was to remedy the countries’ economic problems, the approaches taken differed markedly. In many cases, it was difficult to access bankruptcy data, mainly because such data were published only on the Internet and no other reliable documents were available. To the author’s knowledge, very little specific work, if any, has been done in the areas relating country cultural values to bankruptcy practices and management. In that sense this study is warranted. Furthermore, effective bankruptcy management across nations may benefit from this exposure, leading to more realistic reforms across the globe. Over the last decade, the world has experienced economic difficulties, but some countries experienced more financial stress than others. In response to these economic hardships, most countries reformed their legal systems to cope with their domestic financial problems. Even countries like China, which had socialist practices, began to reform their legal systems to allow market forces to play out in order to gain the confidence of foreign investors (Eisebach, 2007; Dobbs, et al, 2004). The US, Canadian and British systems are much more detailed and expansive than those of Eastern Europe, China, S. Korea and Malaysia (InterNet BL, 2007; Chung, 2007; Zhou, 2006). Different countries emphasize different bankruptcy practices, consistent with their social and legal systems. 
The extent of those practices seems to be related to the stage of development of those systems. What is apparent is that no system is static. There are changes in virtually all countries, although some countries have initiated more changes than others. The differences in reform approaches are expected because of the historical differences among the regions of the world. To highlight the pace of reforms in different countries, a few major countries were selected to exemplify regional differences in general and national trends in particular. The countries chosen represent different economic and political structures. Martin (2005) studied the reasons why so many nations have hastened the pace of bankruptcy law reform in their respective countries. The author found that there is always some kind of economic crisis. Following that realization, countries ask themselves whether or not current legal systems can adequately handle the increase in corporate and bank failure. Failure does not have to occur before reform is called for; a credible indication that the national economy is in trouble or would soon be in trouble is enough to trigger movement towards reform. Although the Romans and the English are credited with the origination and evolution of bankruptcy legal systems, it seems that the notion of bankruptcy predates even the Roman bankruptcy procedures. If the Bible is viewed as a legitimate compilation of historical events, it becomes the earliest document on bankruptcy as a management tool to control the economy and allow citizens and businesses to avoid financial catastrophe and have a new beginning. In the book of Deuteronomy, Chapter 15, Verses 1-2, it is written that Moses brought home God's law, from the mountain of the burning bush, to the Israelites and counseled them to forgive debts every seven years. Moses' counsel was “At the end of every seven years, thou shalt make a release” (legalhelpers.com). 
Amazingly, Moses also describes a system of redemption after foreclosure that is very similar to those in effect today (bankruptcyrep.com). Bankruptcy contemplates the "forgiveness" of debt, and the Bible likewise contains debt forgiveness laws. Under U.S. law, a debtor may only receive a discharge of debts in a Chapter 7 bankruptcy once every eight (8) years. Under Biblical law, the release of debts came at the end of seven (7) years, as stated above (legalhelpers.com; Tozer and Lofstedt, 2007). Modern bankruptcy laws, like the Biblical provision above, allow debtors to keep certain property when they file bankruptcy. This gives debtors a fresh start and discourages them from going into debt-bondage again, after the bankruptcy is over, in order to survive (Tozer and Lofstedt). Bankruptcy has been around for over two thousand years. After Moses' bankruptcy command, the Romans provide the first written history on the subject. In ancient Rome and later in Italy, money lenders conducted their trade from benches set up in town squares. Ancient records show that any merchant who failed to pay another merchant had his bench broken, sometimes over his head. This practice was meant to put merchants out of the misery of owing debt by simply forcing them out of business. The custom of breaking the bench became prevalent, and insolvency became associated with a broken bench, banca rotta in Italian, which eventually became Bankrott in German, banqueroute in French and bankrupt in English. Later on, during the Middle Ages, the incidence of bankruptcies increased and prompted the need for an organized bankruptcy procedure.
John Andrews: After Success, Then What? [Case Study]
Prof. John Hulpke, Hong Kong University of Science and Technology, Hong Kong
Cubie Lau, University College Dublin, Ireland
John Andrews could have relaxed and continued as a very successful human resource manager of prestige hotels in Asia. He was now in early middle age. He was not particularly wealthy, but was financially secure. He did not feel that it was time to retire back in his native land, the U.K. But there was the nagging feeling that there might be more to life than “success.” After success, then what? The challenge turned into an opportunity: invent a socially responsible resort hotel in an area needing help. Again, ideas plus enthusiasm and energy created a beacon of light, making a difference. Mission accomplished, Andrews stepped aside, letting the local staff sustain the effort. But for Andrews, again the question: after success, then what? As Hong Kong-based director of Human Resources for a truly world-class group of hotels in Asia, John Andrews was in an important and fulfilling job. The road to this point had been long: for more than a dozen years he had worked really long hours, worked really hard, and the promotions came. But now Andrews asked himself: is this what life is meant to be? He reflected on the paths that led him to this position, in this industry, and in this place. What led him here? Andrews grew up in the hotel industry in the United Kingdom. He graduated with diplomas in Hotel and Catering studies from University College Birmingham in their program on hospitality and tourism management. After graduation, while still in his twenties, he trained at the famed Waldorf Hotel in London, working up from trainee to positions of responsibility in the Food and Beverage (F&B) area of this prestigious hotel. A chance meeting led to an opportunity to leave London for an exciting new destination: Bermuda. After several great years at the Elbow Beach Surf Club in Bermuda, opportunity again unexpectedly knocked. One acquaintance introduced another, and soon it was off to new lands. Andrews moved to Canada, initially working with Hyatt. 
Then, while still in his early thirties and still in Canada, he had the chance to help open a brand new Westin Hotel. At Westin, opportunities to advance came up, and he worked in Westin properties across Canada, in Calgary and Vancouver. His work with Westin led to yet another new land: this time, the enchanting Orient. Andrews moved to the Kowloon Shangri-La in Hong Kong. At the Shangri-La his “hands on, engage all employees” style of leadership paid off, helping the hotel achieve many successes. The next step was as general manager of a Westin property in Pusan, Korea. Although still a young man, in his early forties, he had already reached the top, and could look back at a fulfilling twelve-and-a-half-year career with Westin. But there was an itch, a desire to do more. For one thing, after having lived and worked in Hong Kong, Pusan did not seem to be where Andrews really wanted to be. He left Korea, and left Westin, to return to Hong Kong. Following his return, Andrews started his own hotel consulting practice, providing professional management consulting and services to hotels and service industries in the Asia Pacific region. One fascinating part of this consulting was checking the quality control of client hotels by staying in them as a “mystery guest,” similar to the “mystery shopper” idea used in upscale retail organizations. Many people would dream of such a job: traveling to exotic places from Bali to Bangladesh, staying in five-star hotels, and being paid to do so! But life on the road is still life on the road: after a few years, when an opportunity came to take a job back in Hong Kong, with less travel, Andrews decided to drop his own business and return to the corporate world. He accepted an appointment as Vice President Human Resources with the prestigious Marco Polo Hotels group in their corporate office in Hong Kong. While employed in the industry, Andrews also became active in industry trade groups. 
For example, he was a long-time member of associations such as the International Hotel & Restaurant Association. For a time, Andrews served as Director of Asia Pacific Affairs for this association. He was also a member of the Board of Directors of the Indian Ocean Tourism Organization. The career progression was impressive. From an outside observer’s perspective, British-born John Andrews had it made. Having worked his way up over the decades from desk clerk to catering supervisor to chef to general manager, Andrews had gone through all the chairs. He knew the industry, and he knew Asia. Not only had he been a hotel general manager, but he was now in charge of human resources for one of the finest hotel chains in Asia, working in a city known for great hotels. And things were going well. He had not a complaint in the world about his professional life. Even work-life balance seemed right. Sure, he worked long and hard, but he managed time for family and friends. Keeping fit, even keeping in great shape, came naturally, and Andrews enjoyed his time in the health club. Andrews was also a key player in one of Asia’s oldest and most prestigious Rotary Clubs, getting involved in the many charitable programs that Rotary is famous for. Andrews also made time to assist and advise a community organization offering counseling and other support to troubled or vulnerable teenagers. Andrews was living a full and meaningful life. The phrase “if it ain’t broke, don’t fix it” comes to mind.
Employer’s Perceptions of Online vs. Traditional Face-To-Face Learning
Marzie Astani, Ph.D., Professor of MIS, Winona State University, Winona, MN
Kathryn J. Ready, Ph.D., Professor of Management, Winona State University, Winona, MN
Online learning has increased dramatically during the last decade. Online universities have grown in popularity, while traditional universities and technical colleges, struggling with declining financial support, have embraced online courses and, in some cases, complete online programs. This exploratory study provides students, business faculty, and institutions of higher learning insight into employers’ perceptions of online learning. Overall, employers favor online learning due to the flexibility it presents. Employers stated that they will recommend online courses to their employees to further develop skills. Yet, despite their positive perceptions, employers were uncertain about whether an online degree was comparable to a traditional face-to-face degree, and were uncertain about hiring someone with an online degree if the position required a college degree. The growth of online education has been widespread during the past decade. Pethokoukis (2002) reported a 33% per year increase in online enrollments in the U.S. through 2002. In a 2005 report of online education in the U.S., the overall enrollment growth rate was reported as 18.8%, which exceeded the overall growth rate in the higher education student body (Allen & Saunders, 2005). The largest increase (72%) was for associate degree institutions. The same report showed that sixty-five percent of schools offering traditional graduate programs also offered online courses, and that sixty-three percent of traditional undergraduate programs offered online courses. Overall, fifty-six percent of higher education institutions identified online education as a critical long-term strategy. The rapid growth of online learning puts immense pressure on all educational institutions to seriously consider online courses and programs as part of their strategic planning. 
Advantages of online learning, such as learning anytime, anyplace, have been noted in several studies (Aggrawal & Bento, 2000; Maeroff, 2003; Pittinsky, 2003). However, the research community has raised concerns about a potential compromise in quality and learning experience for online versus traditional face-to-face learning (e.g., Robinson & Hullinger, 2008; Carr-Chellman, 2006). In response to these concerns, some researchers, for example Palloff & Pratt (2001), concluded that there is no significant difference in the learning outcomes of students in online and face-to-face settings. Others reported that online students demonstrated higher levels of engagement than traditional learners, and that students gained knowledge and acquired skills that facilitated their understanding of real-world and job-related problems (Robinson & Hullinger, 2008). In addition, some researchers concluded that the online environment facilitates lifelong quality learning (Aggrawal & Bento, 2000), which contributes to students’ daily lives (Brown & Ellison, 1995). El-Khawas et al. (2003) surmise that technology employed in online environments enriches learning and creates meaningful experiences that contribute to learners’ growth and development. Several studies focused on the positive experiences of the target learner, such as not spending time driving to class, the flexibility to work at his/her own pace, more course availability, and interacting socially with others with decreased inhibitions (Beard & Harper, 2002; Carrell & Menzel, 2001; Simmonson, 2005). However, Jackson & Helms (2008) found that the perception of the external community, such as employers, about online learning was an important issue to students. Students were reportedly concerned about the prestige of online learning in the external community and whether employers view it as a learning experience comparable to traditional learning. 
Hamzaee (2005) recommended that educational institutions should seek employers’ perspectives about online learning. This study is intended to explore the validity of students’ concerns regarding employers’ perception of online learning. A brief review of the literature is presented, followed by the methodology and results. The discussion and conclusion highlight the major findings. In spite of studies showing that many universities have implemented online learning, some researchers question the learning experience and try to find indicators to determine the quality of online learning (Gratton-Lavoie & Stanley, 2009). Concerns about the potential substandard learning experience and the external community’s low opinion of online learning have been expressed in several studies (e.g., Carr-Chellman, 2006; Jackson & Helms, 2008). Some studies have cited several disadvantages in online courses, including lack of interaction (student-to-instructor and/or student-to-student), privacy issues, technological difficulties, and a focus on technology rather than content (Beard & Harper, 2002; Plotrowski & Vodanivich, 2000). The lack of interaction is the most common concern mentioned by researchers. In an online environment, learners may feel isolated since most of the online learning environments are asynchronous, thereby allowing learners to participate from different locations. Therefore, it is crucial to provide meaningful instructional interactions among participants and between participants and objects in their environment (Wagner, 1994). To address these concerns, an examination of the literature comparing online learning with traditional face-to-face education is conducted.
A Survey on HIV/AIDS, Health Status and Economic Growth
Dr. Juan J. DelaCruz, Lehman College (CUNY), Bronx, NY
This essay reviews the literature on growth theory, analyzes the importance of human capital and portrays the role of health as a determinant of economic performance. This survey also introduces the impact of a particular disease, HIV/AIDS, on the economy and includes empirical work explaining the interrelationship of health and income per capita, which is measured using a multivariate framework controlling for other environmental variables. Good health improves economic performance, whereas ill health (using HIV prevalence as a proxy) deteriorates human capital, negatively affecting income per capita across countries. Ever since Ramsey linked the optimal saving rate for an economy to the introduction of human capital in the economic literature, our ability to explain economic phenomena has moved to a higher level. The evolution of growth theory can be tracked in diverse ways. The first tendency is associated with the Harrod-Domar model, with its explanation of economic growth in terms of the level of savings and productivity of capital, while the second and third have to do with further developments in the neoclassical growth representation and with responses to omissions and deficiencies in the prevailing neoclassical model, respectively (Solow, 1994). This paper reviews the current literature on growth theory and the specific role of health as one of its multiple determinants. It quantifies the interrelationship between economic growth and human capital in the form of health status (life expectancy as a proxy), serving also as an introductory analysis of the impact of the HIV/AIDS epidemic (the lack of health) on economic performance. Empirical evidence on the effect of health and burden of disease (HIV prevalence) at different levels of per capita income within a sample of 86 industrial and developing countries (see appendix at the end of the paper) has been found using an instrumental variable approach. 
The results reported in this paper are consistent with other findings that show strong evidence of a robust relationship between different levels of health status and real income per capita in a cross-section of countries. In order to overcome the problem of simultaneity between health and economic performance, an instrumental variable approach is needed to break feedback effects. Human capital has evolved into one of the most universally accepted concepts in economics as well as in other social sciences. This innovative theoretical construction has its roots in the neoclassical growth model that quantifies the sources of economic growth in market economies (Ehrlich, 2007). However, the traditional sources of economic growth (labor and physical capital) left large residuals unexplained; therefore the inclusion of different forms of human capital was necessary to provide a complete understanding of the problem. A debate relevant to the human capital field was held in a special edition of the Journal of Political Economy (1962), where Schultz, Becker, Denison, and Mushkin, among others, developed a theoretical framework for this concept. Also, seminal papers by Hicks (1979), Wheeler (1980), Knowles and Owen (1995), Barro (1996), and Bhargava (2002) provide important elements in building an analytical framework for empirical work on the leading role of health as a determinant of economic growth. The neoclassical growth model has been expanded by the notion of human capital accumulation (Augmented Solow Model), arguing that it is necessary to account for other forms of capital accumulation besides physical capital. As a relatively recent concept, human capital changes the theoretical view of the growth process and the empirics of the analysis of cross-country differences (Mankiw, 1992). Grossman (1972) made a major contribution to the literature on health and economic expansion, designing a growth model that included health as one of its determinants. 
In this set of circumstances, health can be regarded as a durable capital stock that produces an output of healthy time, depreciates with age, is increased by investments, and depends on different levels of income. In Grossman’s work (1972), health determines time spent producing earnings and commodities, whereas the stock of knowledge (education) affects market and non-market productivity. Taking health as a main variable, Weil (2007) and Weil and Shastry (2003) developed a basic model of economic growth and human capital suggesting that people with higher income can afford better medical treatment, so an improvement in health status increases potential output as well as economic growth, inducing an accumulation of greater health capital. A small number of health conditions are accountable for a high share of the wellbeing shortfall worldwide. As one marker, HIV impacts populations in their most productive years; hence the importance of both government and private-sector interventions, since effective institutional and market responses are needed to prevent and treat this condition, raising health capital and slowing the depreciation of labor. The World Development Report (1993) recognizes that while the HIV/AIDS epidemic is clearly a health problem, the scientific community has come to realize that it is also a development problem that threatens human welfare, socio-economic advances, productivity, social cohesion, and even national security. HIV/AIDS reaches into every corner of society, affecting parents, children and youth, skilled and unskilled workers, rich and poor, decimating the adult population in the prime of their working and parenting lives, decreasing the workforce, impoverishing families, and impacting communities. The loss of skilled workers undermines productivity and threatens the capacity of a nation to produce more goods and services, just as business costs are increasing. 
At the same time, tax revenue, market demand and investment are also damaged. In general, the existing literature on the economic impact of HIV/AIDS has elicited mixed results.
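The instrumental-variable strategy described above can be sketched with synthetic data. The sketch below is purely illustrative and uses a single made-up instrument and invented coefficients (none of the figures come from the paper's 86-country sample): a common shock creates simultaneity between health and income, and two-stage least squares recovers the causal coefficient that ordinary least squares overstates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical setup: health (e.g. life expectancy) and income per capita
# are simultaneously determined through a common shock u; an exogenous
# instrument z shifts health but affects income only through health.
z = rng.normal(size=n)                    # instrument
u = rng.normal(size=n)                    # common shock -> simultaneity
health = 0.8 * z + u + rng.normal(size=n)
income = 1.5 * health + u + rng.normal(size=n)  # true effect of health = 1.5

def ols(y, x):
    """OLS coefficients [intercept, slope] for a single regressor."""
    X = np.column_stack([np.ones(len(x)), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Stage 1: project the endogenous regressor (health) on the instrument.
health_hat = np.column_stack([np.ones(n), z]) @ ols(health, z)
# Stage 2: regress income on the fitted values from stage 1.
beta_iv = ols(income, health_hat)[1]
beta_ols = ols(income, health)[1]         # biased upward by the common shock

print(f"OLS estimate: {beta_ols:.2f}, IV estimate: {beta_iv:.2f} (true 1.5)")
```

The OLS slope absorbs the feedback running through the common shock, while the IV estimate lands near the true coefficient, which is the sense in which the instrument "breaks" the simultaneity.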
Contingent Liability and Stock Price: Evidence from Listed Companies on the Stock Exchange of Thailand
Sasisha Chiewcharnpipat, M. Accy., ExxonMobil Limited, Bangkok, Thailand
Dr. Uthai Tanlamai, Chulalongkorn University, Bangkok, Thailand
This research examines the relationship between contingent liabilities and stock price. Contingent liabilities in this study pertain to those that, once incurred, will become expenses of a company. The three types of contingent liabilities included in this research are guarantees to others, lawsuits, and discounted accounts receivable or discounted post-dated cheques. This study uses a triangulation research method for the analyses of data in order to ensure the reliability of the study results. The sampling frame comprises 210 listed companies of all sectors on the Stock Exchange of Thailand (SET), excluding the financial and restructuring sectors. This study uses quarterly data collected during the years 1998-2005, and the total units of analysis are 6,720 quarter-firms. All three methods of analysis give substantial evidence to support the relationship between contingent liabilities and stock price. Today’s freer capital markets have witnessed the emergence of financial statements as one of the most important tools for fundamental equity analysis. Each important element of the financial statement reflects a different aspect of a company: the Balance Sheet conveys the firm’s financial position, the Income Statement its operations, and the Statement of Cash Flows the change in the financial position over an accounting cycle. The notes to the financial statement, on the other hand, show supplemental information, including the company’s accounting method, risks and uncertainties, and hidden assets and liabilities. When deemed only as supplemental information, Contingent Liabilities are typically reported as a mere note to the financial statement. According to the Federation of Accounting Professions, a self-regulated organization in charge of issuing Thai Accounting Standards (TAS), the disclosure of contingent liabilities follows TAS#21 (1993), which was later replaced by TAS#53 (2003). 
However, management can decide whether or not such liabilities meet the disclosure requirement of conventional accounting standards. This kind of executive discretion can easily lend itself to the practice of creative accounting. More recently, Contingent Liabilities have received increasing attention following the notorious cases of Adelphia Communications, Enron and WorldCom, to name but a few. These companies used creative accounting to hide such liabilities from their balance sheets and gave equity investors a false image of the firms’ financial health. Having failed to scrutinize the fine print of these financial statements, investors were surprised by the firms’ markedly different financial positions when the contingent liabilities kicked in. Of importance to equity investors, the realization of Contingent Liabilities impacts not only the firm’s liabilities but also other items in the financial statement. Some may affect assets, such as unutilized credit facilities, while others may alter the firm’s expenses in the form of business-to-business guarantees and lawsuits. When expenses rise, Net Income is dampened. Investors generally pay particular attention to the Net Income figures in fundamental equity analyses. Net Income has been used to forecast future cash flows and, by discounting them, the stock’s rate of return (Keorath, 1996). Therefore, financial performance ratios and any subsequent predictions will inevitably be affected by the volatility of contingent liabilities and the viewpoint of equity investors. Despite the clear and worthwhile merits of understanding the relationship between Contingent Liabilities and equity performance, the existing literature reveals very few studies on the topic. These studies have found conflicting results. 
Lacy (2002) found in his survey research that most financial statement readers paid very little attention to contingent liabilities because these accounting items are highly uncertain, both in terms of the probability of occurrence and impact value. Sukchoksuwan et al. (2002), on the other hand, found that creditors paid considerable attention to Contingent Liabilities and would prefer more disclosure on this item, whereas investors were less critical and tended to request only standardized reports. Although it is conceivable that Contingent Liabilities can affect a stock’s performance, more empirical research should be done to determine the direction and the extent of the relationship between these two constructs.
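The valuation channel described above, where a realized contingent liability cuts net income and therefore the discounted value of future cash flows, can be illustrated with a minimal discounting sketch. All figures below are hypothetical and not drawn from the SET sample: three years of forecast cash flows stand in for net-income-based forecasts, and a lawsuit loss realized in year 2 reduces the firm's discounted value.

```python
def present_value(cash_flows, rate):
    """Discount a list of future cash flows at a constant rate,
    with the first cash flow arriving one period from now."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

forecast = [100.0, 105.0, 110.0]             # net-income-based forecasts
lawsuit_hit = [100.0, 105.0 - 40.0, 110.0]   # contingent liability realized in year 2

base = present_value(forecast, 0.10)
hit = present_value(lawsuit_hit, 0.10)
print(f"Value before: {base:.1f}, after the liability is realized: {hit:.1f}")
```

The drop in value equals the liability discounted back from the year it materializes, which is why equity analysts who ignore the notes to the financial statement can be surprised when a contingent item "kicks in."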
A New Engine for Growth? An Exploratory Study of the Multimedia Super Corridor (MSC) Creative Multimedia Cluster
Kamarulzaman Ab. Aziz, Multimedia University, Malaysia
The importance given by local authorities and governments to the creative sector can be seen from the number of creative city/region strategies such as Creative Baltimore, Creative Toronto, Creative Sheffield, Creative New York, Create Berlin and Creative London (Foord, 2008). Thus, it is not surprising to see this trend start to take effect in developing countries such as Malaysia. In order to accelerate the realization of Vision 2020 (to transform Malaysia into a knowledge-based society) via the Multimedia Super Corridor, a path was defined through six innovative Flagship Applications. However, recently a new focus area has been identified and given priority by the government: the creative sector has been identified as having significant potential to be the new engine driving the nation’s transformation. The MSC Malaysia Creative Multimedia Cluster initiative was designed to create an environment in which companies can leverage leading-edge technologies to design, produce and deliver various products and services; to help bridge the digital divide; to enhance the use of technology within the wider creative industry specifically and the nation in general; and to develop local expertise capable of producing “global” Malaysian content as well as driving creative IP creation in the country. This study aims to look at how the creative multimedia cluster initiative was implemented and to evaluate the cluster's performance using a holistic framework. Multimedia Super Corridor (MSC) Malaysia is a policy-driven, cluster-oriented initiative launched in 1996, aimed at helping the country in its transition from an industrial society to a post-industrial one. The plan was laid out in the mid-1990s through initiatives under the Third Outline Perspective Plan (OPP3) and the Eighth Malaysia Plan (8MP). ICT had been identified as the key enabler towards achieving this vision. The Multimedia Super Corridor (MSC) is one of the major efforts in this regard. 
It is positioned to provide the means for sustainable and rapid growth of the Information and Communication Technology (ICT) sector in Malaysia and thus to accelerate the realization of Vision 2020. A path was defined through six innovative Flagship Applications. These applications have three main objectives: 1) to increase the country's economic productivity; 2) to decrease Malaysia's digital divide, especially between urban and rural areas; and 3) to offer concrete ICT-oriented business opportunities for domestic and international companies. In order to achieve the above, six innovative flagships were identified, each focusing on a primary area of multimedia applications identified as a high-potential sector designed to generate industrial growth and development in the MSC (Ab. Aziz et al., 2002). The Flagship Applications fall into two major categories (Ab. Aziz et al., 2004): 1) Multimedia Development Flagships and 2) Multimedia Environment Flagships. The Multimedia Development Corporation (MDeC), based in Cyberjaya, is the organization mandated by the government to oversee the development of the MSC Malaysia project. Initially a government-owned corporation but now incorporated under the Companies Act, MDeC facilitates applications by multinational and local companies to relocate to MSC Malaysia, to gain MSC status and thus the set of incentives and benefits. MDeC markets MSC Malaysia globally, shapes MSC Malaysia-specific laws, policies and practices by advising the Malaysian Government, and standardizes MSC Malaysia’s information infrastructure as well as its urban development. This paper examines the development of the MSC Malaysia creative multimedia cluster. It will first review relevant selected literature providing the theoretical foundations for the study. This will then be followed by a discussion of the methodology used for developing the case study. 
The paper then ends with the case study on the MSC Malaysia creative multimedia cluster. The interest in the creative class, sector and industries is largely attributed to the work of Richard Florida (2002). He suggested that the industrial economy was declining and being replaced by the creative economy, noting that the creative class had by then already made up 30% of the workforce in the US and earned 50% of all salaries. In this economy, a city/region needs to be able to attract the creative class in order to be competitive. Florida defined the creative class as a broad set of qualified professionals ranging from mathematicians to high-level commercial positions to academicians, lawyers, public administrators, artists, and “bohemians,” as well as other cultural professions. Successful cities are characteristically those that hold an open attitude towards newcomers, offer a variety of good-quality lifestyle amenities (including culture, environmental beauty, historical heritage, etc.), view diversity as a strength, and have an ample number of outlets for the class’s creativity. Houston et al. (2008) warned against dismissing the role of talent-attraction programmes, as these could prove effective in bringing in the desired or needed talents. Wiesand and Sondermann (2005) divided the European creative sector into three main groupings: I) mainly commercial activities, comprising applied arts (architecture, games, design, etc.), culture and media industries, and related industries/crafts; II) mainly non-profit and informal activities, comprising informal activities (amateurs, etc.) and support and services (foundations, associations, etc.); III) mainly public funding, comprising public or subsidized arts, media and heritage bodies (museums, theatres, etc.), public administration and funding, and cultural education and training (art academies, music schools, etc.). The three groups are supported by a core group of arts workforce (independent and employed artists, media freelancers, etc.). 
The sector is further characterized by inter-linkages within as well as outside the sector, such as the links between music labels, instrument makers and music schools, or the network of design houses with the fashion and furniture making industries. The linkages are also transnational which can be seen especially in the music, film, and digital media businesses.
Gender Differences in E-Government Adoption in Dubai
Dr. Jawahitha Sarabdeen, University of Wollongong in Dubai, Dubai, UAE
Dr. Gwendolyn Rodrigues, University of Wollongong in Dubai, Dubai, UAE
Over the last decade e-government services have been transforming nations. The success of such initiatives depends on the willingness of citizens to adopt e-government services. Researchers have identified a number of factors that have an impact on this willingness; however, gender differences have been neglected by researchers. This paper uses an extended Technology Acceptance Model (TAM), with constructs including ‘perceived security and trust’, ‘perceived quality’, ‘perceived usefulness’ and ‘perceived ease of use’, to build a model of gender differences. The proposed model reveals that there are similarities as well as differences between male and female willingness to use e-government services. The study uses statistical tools such as factor analysis and regression analysis in order to identify and validate the relevant factors. The findings of the study have practical implications for designing e-government services. E-government services have grown rapidly during the last decade. There is no debate on the importance of e-government services in transforming the social, economic and technological life of nations. E-government services in Dubai aim at transforming the efficiency, effectiveness, transparency and accountability of informational and transactional exchanges within government, between governments and government agencies at the federal, municipal and local levels, and also between citizens and businesses; and at empowering citizens through access to and use of information. The success of such initiatives is dependent not only on government support, but also on citizens’ willingness to accept and adopt those e-government services (Carter & Belanger, 2005). A gap exists between potential usage and actual usage of electronic public services. Deursen et al. (2006) investigated the factors influencing this gap. 
The findings have shown that the population does not have sufficient motivation for using computers and the Internet, that insufficient digital skills produce serious problems, and, most strikingly, that government does not know what citizens want, how they use ICT, and what the consequences for citizens are. Most e-government initiatives have been designed without recognizing that women and men everywhere have different patterns of usage of computers and the Internet. Advanced as well as developing countries have paid insufficient attention to gender analysis, and therefore e-government services are introduced on the assumption that men and women have equal access and similar needs. This study focuses on gender differences because the UAE and the Gulf countries follow a patriarchal system; the researchers were therefore interested in identifying aspects of gender difference which could form stumbling blocks to a willingness to adopt e-government, thereby restricting the government’s progress towards a fully functional e-government. Research on the adoption of e-government services has mainly been based on theories such as the Technology Acceptance Model (TAM) (Davis, 1989), the Diffusion of Innovation (DOI) (Rogers, 1995), and the Unified Theory of Acceptance and Use of Technology (UTAUT) (Venkatesh et al., 2003). These studies have identified perceived usefulness, ease of use, perceived risk, trustworthiness, compatibility, external influence, Internet safety, interpersonal influences, relative advantage, image, and facilitating conditions as factors that affect the willingness to adopt e-government services. These factors have been used by a number of researchers (Huang et al., 2002; Carter & Belanger, 2005; Hung et al., 2006; Belanger & Carter, 2008; Colesca & Dobrica, 2008; Teo et al., 2008; Colesca, 2009). 
Among studies of personal characteristics and gender differences in adoption, Akman (2005) shows how gender and education result in differences in the level of adoption of services, whereas experience and skill are emphasised in research done by others (Chaudrie et al., 2005; Dossani et al., 2005; Piling & Boeltzig, 2007). Comparing e-government usage, Venkatesh et al. (2003) show that men tend to be more highly task-oriented than women. Nysveen et al. (2005) claim that women tend to have lower self-efficacy, lower computer aptitude and higher computer anxiety than men. In contrast, Igbaria (1993) claims that the evidence concerning the effect of gender on the adoption of technology is equivocal: some studies have found no gender differences, while other studies have reported such differences. According to Dubai e-Government, “technology presents immense opportunities for women to make productive use of their talent without breaking the conventions of society.” Dubai has initiated a number of programs on computer and Internet literacy in cooperation with UNESCO. The e-Government initiative was launched in 1999 by the Dubai Ruler to modernize the delivery of government services. The vision of the e-Government initiative is to ease the lives of people and businesses interacting with government and to establish Dubai as a leading business hub. High-quality, customer-centric e-services were the primary objective of the Dubai government initiatives, so that it would be able to achieve a fully functional virtual government. 
To achieve this goal, the Dubai e-Government initiative concentrated on providing focused, specialized and simplified business processes; emphasizing rules and regulations; integrating information from various departments; proactive marketing; high-quality services with minimal waiting periods; providing online help during the services; and providing multiple innovative channels such as web, mobile and telephone (www.oecd.org, 2006).
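The regression step in the TAM-based analysis above can be sketched with synthetic data. The numbers, variable names, and coefficients below are invented for illustration and do not come from the Dubai survey: willingness to use e-government services is regressed on two of the paper's constructs, perceived usefulness (PU) and perceived ease of use (PEOU).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300

# Hypothetical construct scores (e.g. averaged Likert items, standardized).
pu = rng.normal(size=n)                 # perceived usefulness
peou = rng.normal(size=n)               # perceived ease of use
# Invented "true" weights: usefulness matters more than ease of use here.
intention = 0.6 * pu + 0.3 * peou + rng.normal(scale=0.5, size=n)

# Ordinary least squares via the normal equations.
X = np.column_stack([np.ones(n), pu, peou])
coef, *_ = np.linalg.lstsq(X, intention, rcond=None)
print(f"intercept={coef[0]:.2f}  PU={coef[1]:.2f}  PEOU={coef[2]:.2f}")
```

In a gender-differences design like the paper's, the same regression would be fitted separately for male and female respondents (or with interaction terms) and the coefficients compared.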
Intercultural Communication and Relationship Marketing: A Conceptual Perspective
Dr. Phallapa Petison, Mahidol University, Thailand
Relationship marketing cannot be separated from communication, as it is a substantial exchange of value, understanding, information, and knowledge between partners in a relationship. While interest in the concept of relationship marketing has been increasing in recent years, the linkage of relationship marketing with other important aspects, such as communication and culture, is still not well developed. This conceptual paper critiques the intercultural communication fallacy in relationship marketing, and reviews and presents three cultural dimensions: power distance, collectivism vs. individualism, and uncertainty avoidance. These dimensions impact communication characteristics such as style, strategies, frequency, and flow, which in turn impact the quality of relationship marketing. The influence of communication on relationship marketing (RM) has received considerable attention in the international business literature (Freeman and Browne, 2004). Some claim that RM has evolved as the result of interpersonal communication (Duck, 1998; Olkkonen, Tikkanen, and Aajoutsijarvi, 2000). Because all partners in a relationship need to understand, structure, and evaluate the messages exchanged between them, they inherently maintain RM. Furthermore, the communication process causes changes in the contextual and structural characteristics of the relationship, for example, how a relationship starts, develops, and declines. Similarly, culture is an important theoretical aspect of RM (Varner, 2000). Understanding and bridging cultural gaps between partners has been recognized as bringing about cooperation and providing a competitive advantage (Petison, 2010a; Wagner et al., 2002). However, there are very few studies in the literature on the link between the three important issues of communication, culture, and relationship marketing. Meanwhile, the globalization of business has made intercultural communication and relationship marketing (RM) even more important. 
Several scholars agree that communication is a fundamental aspect of RM (Andersen, 2001; Sprinks and Wells, 1997). Interpersonal communication is essential to forming relationships and networking. Today, many businesses operate globally and communication between people of different cultures is inevitable. It is therefore useful to begin by distinguishing between “intercultural” and “international” communication: “Communication crossing national boundaries, international communication, is not necessarily different from any other communication activity. What is different is intercultural communication, communication activities among people of different cultures” (Wells and Sprinks, 1994, p. 302). The scholars’ point is that national boundaries are not the key; cultural backgrounds are pivotal. Intercultural communication is therefore regarded as more difficult than general or domestic communication (Kameda, 2005), because culture involves subjective dimensions (values, beliefs, and attitudes) and interactive dimensions (verbal and non-verbal communication) (LaBahn and Harich, 1997). Burns, Myers, and Kakabadse (1995) find that the perception of people from different cultures begins at the early stages of relationship development. While such perceptions may help in developing RM, it is dangerous to rely on cultural stereotypes in communication between partners in a relationship. However, Freeman and Browne (2004) and Hofstede (1980) indicate that people from similar cultures still share certain cultural characteristics; without some generalization, understanding the impact of intercultural communication on RM would become more difficult than it already is (Varner, 2000). “Communication is the human activity that links people together and creates relationships” (Duncan and Moriarty, 1998, p. 2). According to the most recognized communication model (Lasswell, 1984), the communication process begins with a sender encoding a message to a receiver, who decodes the message and responds/feeds back to the sender. 
During this communication process, noise interferes with message transmission. Zinkhan et al. (1996) commented that Lasswell’s communication process model is a metaphor for the relationship marketing process, beginning with manufacturer interactions with distributors and customers, where noise refers to any interference preventing effective communication, such as cultural differences and language ability. Although communication is noted as centrally integrative in RM, some fallacies have occurred. The first fallacy concerns a deviation from the meaning of intercultural communication. Extending Lasswell’s model, Waterschoot and Van den Bulte (1992) argued that communication in RM is more like persuasion. However, persuasion reflects one-way communication, whereas in RM communication is two-way. This conceptual paper, while acknowledging the contribution of Waterschoot and Van den Bulte (1992), argues that successfully managing relationship marketing today requires emphasizing intercultural communication as two-way communication, not merely as persuading buyers to purchase. This fallacy affects the perceived role of the partner as well as the communication style.
A Better Way to Increase Credibility in Quantitative Research
Dr. Hai-Ching Chang, National Cheng Kung University, Taiwan
Wei-Hao Chang, National Cheng Kung University, Taiwan
Most quantitative studies use statistical significance as the major index in null hypothesis significance testing; comparisons with additional information are frequently deficient. The null hypothesis significance test (NHST) is a tool commonly applied in quantitative research. Most researchers use p-values to interpret their findings. Because p-values are strongly affected by sample size, some scholars have suggested using an effect size index when interpreting results. The American Psychological Association emphasizes the importance of effect sizes, strongly suggesting that studies contain the data and other relevant indices (e.g., confidence intervals). This study discusses the impact of the effect size index on generating, reporting, and interpreting data. Finally, this work also discusses some methods of determining practical significance (e.g., confidence intervals and researchers’ subjective values). In recent years, researchers have recognized the importance of effect size when planning studies. Fern and Monroe (1996) identified three recent trends in marketing and consumer research that justify examining the issues surrounding the use of effect size: (1) dissatisfaction with the null hypothesis significance test (NHST); (2) acceptance of meta-analysis for synthesizing knowledge across studies; and (3) increasing use of statistical power analysis. The American Psychological Association Publication Manual (1994, fourth edition) (APA publication manual), which underwent a substantial revision, encourages (p. 18) authors to include, in addition to statistical significance, the value of effect size in their work. Vacha-Haase, Nilsson, Reetz, Lance, and Thompson (2000) examined two journals over 10 years, “Psychology and Aging” and “Journal of Counseling Psychology,” and found that only 10% of studies interpreted effect size rather than relying solely on statistical significance; roughly 45% reported effect size but did not provide an interpretation.
Such a low proportion indicates that the APA, through encouragement alone, has not inspired authors to interpret effect size. Currently, an increasing number of journal editors require contributors submitting quantitative research to provide effect size indices, as noted by the APA: “Always provide some effect size estimate when reporting a p-value” (Wilkinson and APA Task Force on Statistical Inference, 1999, p. 599). The fifth edition of the APA manual (2001) states: “For the reader to fully understand the importance of your findings, it is almost always necessary to include some indices of effect size or strength of relationship in your results section … The general principle to be followed, however, is to provide the reader not only with information about statistical significance but also with enough information to assess the magnitude of the observed effect or relationship.” (pp. 25–26). Thus, the fifth edition underlines the importance of including effect size in results sections. In addition, the APA states that the following four indices should be included in a report: (1) the p-value of statistical significance; (2) the confidence interval (CI); (3) the interpretation of figures and tables, including standard deviations, means, and sample sizes; and (4) the value of effect size, with effect size and CI reported together preferred. Based on these findings in the literature, the primary goal of this work is to clarify the importance of effect size in quantitative research and, second, to elucidate the interpretation, measurement, and use of effect size. Moreover, an example is utilized to illustrate the problem that arises when a researcher reports only statistical significance and ignores effect size. In inferential statistics, the statistical method most frequently used is the NHST, or so-called significance test.
Significance indicates the probability of truth: when a researcher claims that a result is highly significant, they mean it is very probably true and one can be very confident that some difference or association exists between factors; it does not (necessarily) indicate that the difference between variables is very large, that the association between two variables is very strong, or even that it is important. Consequently, the results merely express how sure the researcher is that a difference or relationship exists. The significance of a result is measured by its p-value: if the p-value is smaller than a specified threshold, most commonly α = 0.05 as the level of significance, the researcher declares statistical significance. Rosnow and Rosenthal (1996) and Thompson (2002) note that statistical significance differs from practical significance and importance. Because investigations in the social sciences can rarely obtain a full census, researchers primarily utilize sampling. Sampling is a process of selecting units (e.g., people, organizations) from a specific population such that, by examining the sample, the results can be generalized back to the population from which the study subjects were chosen. The problem of sampling error is therefore very difficult to eliminate.
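The dependence of the p-value on sample size, and the independence of the effect size from it, can be illustrated with a minimal numerical sketch (hypothetical data; the function names and values are purely illustrative, not from the study):

```python
# Sketch: why an effect size should accompany a p-value (hypothetical data).
# Cohen's d is unaffected by sample size, while the t statistic (and hence
# the p-value) grows with n even when the underlying difference is constant.
import math

def cohens_d(mean_a, mean_b, sd_pooled):
    """Standardized mean difference (Cohen's d)."""
    return (mean_a - mean_b) / sd_pooled

def t_statistic(mean_a, mean_b, sd_pooled, n_per_group):
    """Two-sample t statistic for equal group sizes and a pooled SD."""
    se = sd_pooled * math.sqrt(2.0 / n_per_group)
    return (mean_a - mean_b) / se

# Identical underlying effect, two different sample sizes
d = cohens_d(52.0, 50.0, sd_pooled=10.0)       # d = 0.2 (a "small" effect)
t_small = t_statistic(52.0, 50.0, 10.0, 25)    # n = 25 per group
t_large = t_statistic(52.0, 50.0, 10.0, 2500)  # n = 2500 per group

print(f"d = {d:.2f}, t(n=25) = {t_small:.2f}, t(n=2500) = {t_large:.2f}")
```

With 25 participants per group the difference is not significant, while with 2,500 per group the same d = 0.2 yields a highly significant t; this is exactly why the APA asks for an effect size alongside the p-value.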
The Moderating Effect of Locus of Control on Customer Orientation and Job Performance of Salespeople
Dr. Hsinkuang Chi, Nanhua University, Taiwan
Dr. Hueryren Yeh, Shih Chien University, Kaohsiung, Taiwan
Yuling Chen, Nanhua University, Taiwan
The service industry has been growing rapidly in Taiwan. Its total output value and number of employees now exceed those of the manufacturing industry, making it the largest industry in Taiwan. Therefore, in order to increase competitiveness, salespeople need not only to provide the best service to customers but also to pay more attention to customer satisfaction. However, because of different personality traits, salespeople behave differently. Thus, customers may perceive differences in service among salespeople, which will affect customer purchase decisions. This study uses customer orientation as the antecedent variable, job performance as the dependent variable, and locus of control as the moderator to examine whether there is a moderating effect of locus of control on the relationship between customer orientation and job performance of salespeople. The samples were collected from salespersons working in insurance sales, car sales, direct sales, retail sales, department store counters, and real estate brokerage. A total of 420 questionnaires were distributed and 339 were collected, a response rate of 74%. The study finds that customer orientation and internal locus of control positively and significantly affect job performance. Moreover, internal locus of control has no moderating effect on the relationship between customer orientation and job performance, but external locus of control does. In recent years, the service industry has become one of the symbols of development for industrialized countries. Taiwanese companies are no longer dependent on manufacturing and original equipment manufacturing (OEM) as the major way to make profits. According to Taiwan manpower statistics in 2008, the number of persons employed in the service industry has grown to more than 6 million, accounting for 58.02% of the total employed population. Its gross output and number of employed persons exceed those of the manufacturing industry, making it the largest industry in Taiwan.
However, both the service and manufacturing industries are encountering severe challenges from global competition. Therefore, how to overcome these challenges and increase salespersons’ performance becomes an important issue for many companies. In order to increase competitiveness, salespeople are required to provide the best customer service and pay more attention to customer satisfaction. Kelly (1992) indicates that the characteristics of service include intangibility, heterogeneity, and inseparability. These characteristics make the service process a critical factor in enhancing customer-perceived quality, and they directly affect customer purchase and repurchase behavior. In addition, customer orientation usually reflects a salesperson’s confidence in satisfying customers’ needs and his or her willingness and motivation to interact with and serve customers (Brown, Mowen, Donavan & Jane, 2002). Customer orientation is a key profitability factor for a business (Deshpande, Farley, & Webster, 1993; Dunlap, Dotson, & Chambers, 1988). A highly customer-oriented salesperson will make his or her best effort to satisfy a customer. Customer orientation is also a business goal in marketing management practice (Narver & Slater, 1990; Ganesan, 1994). Allport (1961) proposes that personality is a dynamic organization within an individual’s psychological system and a unique trait that determines a person’s thinking and behavior. Lee (2004) maintains that each salesperson behaves differently because of different personality traits. Thus, customers may perceive differences in service quality among salespeople, and this will affect customer interaction and customer purchase decisions (Huang, 2004; Chen, 2006; Cho, 2008). From the above, one can see that customer orientation and personality traits do not have a simple one-way relationship with performance; together they can help explain the variance in the job performance of salespersons.
Accordingly, this study aims to explore the influence of salespeople’s customer orientation and locus of control (a personality trait) on job performance, and the effect of salespeople’s customer orientation on job performance under different personality traits. In addition, this study tests whether personality traits moderate the relationship between customer orientation and job performance. Kotler (1980) suggests that customer orientation can be regarded as the application of marketing concepts in salespersons’ service and customer interaction. A salesperson needs to meet customers’ expectations and requirements. A professional salesperson should not ask, “What am I going to sell to this customer?” Instead, he or she should ask, “What is the best way to solve this customer’s problem?” Saxe and Weitz (1982) pinpoint that customer orientation is a salesperson’s practice of marketing concepts in the process of customer interaction. That is, when an exchange between salespeople and customers occurs, salespeople can utilize marketing concepts to help customers make satisfying decisions. Narver and Slater (1990) further illustrate that customer orientation is a value chain for understanding customers, and a marketing concept that puts customers’ benefits as the top priority. Customer orientation implies a salesperson’s tendency and motivation to pamper a customer, to read a customer’s needs, to develop an interactive relationship with a customer, and to deliver the required service to a customer (Donavan, Brown, & Mowen, 2004).
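A moderating effect of the kind this study tests is conventionally examined by adding a product (interaction) term to a regression. The sketch below simulates such a test on hypothetical data; the variable names and the true coefficients are assumptions for illustration, not the authors' measures or results:

```python
# Sketch of testing a moderating effect with an interaction term, as in
# moderated regression (simulated data; names and coefficients are
# illustrative assumptions, not the study's estimates).
import numpy as np

rng = np.random.default_rng(0)
n = 339  # sample size reported in the study
customer_orientation = rng.normal(0, 1, n)
external_locus = rng.normal(0, 1, n)
# Simulate a true moderating effect: the slope of customer orientation on
# job performance depends on external locus of control.
job_performance = (0.5 * customer_orientation
                   + 0.2 * external_locus
                   - 0.3 * customer_orientation * external_locus
                   + rng.normal(0, 1, n))

# Center predictors before forming the product term (reduces collinearity)
co = customer_orientation - customer_orientation.mean()
el = external_locus - external_locus.mean()
X = np.column_stack([np.ones(n), co, el, co * el])
beta, *_ = np.linalg.lstsq(X, job_performance, rcond=None)
print("interaction coefficient:", round(float(beta[3]), 2))  # near -0.3
```

A non-zero coefficient on the product term is the evidence of moderation; its absence for internal locus of control and presence for external locus of control is what the study reports.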
Wireless Sensor Networks (WSN) in Disaster Management: Coverage Optimization
Nor Azlina Ab. Aziz, Multimedia University, Malaysia
Kamarulzaman Ab. Aziz, Multimedia University, Malaysia
Global climate change is increasing the occurrence of extreme climate phenomena with increasing severity, both in terms of human casualties and economic losses. Authorities need to be better equipped to face these global realities. This paper proposes a technological solution for search and rescue operations using WSN. A major issue for WSN is optimizing coverage. In this paper the coverage problem, which is caused by limited sensing range and a limited number of sensors, is addressed. The proposed algorithm uses both Particle Swarm Optimization (PSO) and the Voronoi diagram. From the simulation results it can be concluded that the proposed algorithm works better when the number of sensors is high while the ROI is small, or when the ROI is large while the number of sensors is low. A wireless sensor network (WSN) is a group of low-cost, low-power, multifunctional, small wireless sensor nodes that cooperate to sense the environment, process the data, and communicate wirelessly over a short distance. The sensors are commonly used to monitor physical or environmental conditions, such as temperature, sound, vibration, pressure, motion, or pollutants, at areas of interest. Some of these sensor nodes are able to move on their own; this is achieved by mounting the sensors on mobile platforms such as the Robomote. The development of wireless sensor networks was originally motivated by military applications such as battlefield surveillance. However, wireless sensor networks are now used in many industrial and civilian application areas, including industrial process monitoring and control, machine health monitoring, environment and habitat monitoring, disaster management, healthcare applications, home automation, and traffic control [2 & 4]. Among the challenges faced by WSN is providing autonomous deployment.
This is desirable in search and rescue, where some of the disaster areas to be monitored are difficult or impossible for humans to access, for example steep terrain or a room contaminated with hazardous gases. Studies over recent years have gathered evidence indicating that the global climate is changing. The changes include the occurrence of extreme climate phenomena that may have disastrous consequences for humans. The Intergovernmental Panel on Climate Change (IPCC) has identified a number of extreme climate phenomena with a high likelihood of occurrence. The effects of global climate change are clearly felt by many around the globe. Thus there is a need for disaster management plans with more comprehensive regional and local risk reduction strategies. The extreme climates affect higher numbers of people with increasing levels of life-threatening damage. Thus, it is important for authorities to ensure their disaster management plans include effective search and rescue technologies. A number of studies have shown the applicability of sensor networks for functions suited to search and rescue operations [7, 8, 9, 10, 11]. This paper argues that wireless sensor networks can be a very effective technological solution for a search and rescue strategy. Consider a disaster site: an unknown building, a dense tropical jungle, an inaccessible mountainous area, a remote flooded region with destroyed roads and bridges, etc. The most pressing search and rescue objective would be to locate the victims accurately and in a timely manner. Once the disaster area has been identified, a wireless sensor network can be deployed to scan the area for any victims needing rescue. Each sensor is commonly equipped with some memory capacity, processing abilities, a number of sensing modes, and communication functions. For example, the Mica Mote sensor is able to monitor temperature, light, and movement.
The Robomote combination adds mobility to the range of capabilities of the wireless sensor network. The deployed WSN will scan the disaster area, locate the victims via the numerous sensing modes the WSN can be equipped with, and provide the search and rescue team with the identified locations of the victims needing rescue. The WSN can also provide the team with crucial information such as the terrain of the disaster site and hazards that they need to overcome or avoid. Thus, the search and rescue team will be able to plan its operation with a higher level of precision, timeliness, and safety for both the victims and the team members. However, these wireless sensors have several constraints, such as restricted sensing and communication range as well as limited battery capacity. These limitations raise issues such as coverage, connectivity, network lifetime, scheduling, and data aggregation. In order to prolong the WSN lifetime, energy conservation measures must be taken; scheduling and data aggregation are among the commonly used methods. Scheduling conserves energy by turning off sensors whenever possible, while data aggregation conserves energy by reducing the energy used in data transmission. Connectivity and coverage problems are caused by the limited communication and sensing ranges. For both problems, the solution lies in how the sensors are positioned with respect to each other. The coverage problem concerns how to ensure that every point in the region to be monitored is covered by the sensors. It involves a trade-off: to maximize coverage, the sensors must not be placed too close to each other, so that the sensing capability of the network is fully utilized, and at the same time they must not be located too far from each other, to avoid the formation of coverage holes (areas outside the sensing range of all sensors).
From the connectivity point of view, on the other hand, the sensors need to be placed close enough that they are within each other’s communication range, thus ensuring connectivity. This paper studies the work done in solving the coverage problem. Overall, the body of literature reviewed can be grouped into three different directions: force based [12, 13, 14], grid based [15, 16, 17, 18], and computational geometry based [19, 20, 21, 22]. Force-based methods use attraction and repulsion forces to determine the optimal positions of the sensors, while grid points are used for the same objective in grid-based methods. As for the computational geometry approach, the Voronoi diagram and Delaunay triangulation are commonly used in WSN coverage optimization algorithms. In this research, the coverage problem, which is caused by limited sensing range and a limited number of sensors, is tackled using Particle Swarm Optimization (PSO) and the Voronoi diagram. The rest of this paper is organized as follows: Section 3 briefly introduces Particle Swarm Optimization and the Voronoi diagram, the coverage problem is discussed in Section 4, and the proposed algorithm is introduced in Section 5. The simulation results and discussion are presented in Section 6. The paper is concluded and the future path of our work discussed in Section 7.
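The flavor of PSO-driven sensor placement can be conveyed with a minimal sketch. This is a simplified stand-in for the paper's PSO + Voronoi algorithm: coverage is approximated on a grid of sample points rather than evaluated via Voronoi vertices, and the sensor count, sensing range, and ROI size are assumed parameters:

```python
# Minimal sketch of PSO-based sensor placement for area coverage
# (simplified stand-in for the paper's algorithm; fitness is the fraction
# of grid points within sensing range of at least one sensor).
import numpy as np

rng = np.random.default_rng(1)
N_SENSORS, SENSE_R, ROI = 5, 2.5, 10.0  # assumed parameters

# Grid of sample points used to approximate the covered area
gx, gy = np.meshgrid(np.linspace(0, ROI, 25), np.linspace(0, ROI, 25))
grid = np.column_stack([gx.ravel(), gy.ravel()])

def coverage(positions):
    """Fraction of grid points within sensing range of any sensor."""
    pts = positions.reshape(-1, 2)
    d = np.linalg.norm(grid[:, None, :] - pts[None, :, :], axis=2)
    return (d.min(axis=1) <= SENSE_R).mean()

# Standard global-best PSO over the flattened sensor coordinates
n_particles, dim = 20, 2 * N_SENSORS
x = rng.uniform(0, ROI, (n_particles, dim))
v = np.zeros_like(x)
pbest = x.copy()
pbest_f = np.array([coverage(p) for p in x])
gbest = pbest[pbest_f.argmax()].copy()

for _ in range(60):
    r1, r2 = rng.random((2, n_particles, dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, 0, ROI)
    f = np.array([coverage(p) for p in x])
    improved = f > pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[pbest_f.argmax()].copy()

print(f"best coverage: {pbest_f.max():.0%}")
```

The fitness function is where a Voronoi-based variant would differ: instead of sampling a grid, it would evaluate coverage holes at Voronoi vertices, which are the points farthest from their nearest sensors.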
The Capital Structure Choice of Listed Firms on Two Stock Markets and One Country
Dr. Khaldoun Al-Qaisi, Amman Arab University for Graduate Studies, Amman, Jordan
The financing choice of listed firms has been the subject of intense research effort. At the forefront of this effort is the issue of capital structure. Indeed, the capital structure of firms affects their cost of capital and hence their investment decisions. This paper examines the capital structure of non-financial firms operating in the United Arab Emirates (UAE). In more specific terms, given that the UAE has two stock markets, the paper examines the determinants of leverage of firms listed on the Abu Dhabi Securities Exchange (ADSE) and the Dubai Financial Market (DFM). Based on the period 2004–2008, the reported results show that firms listed on the ADSE and DFM have low leverage ratios. In addition, while the panel data analysis indicates some differences in the signs and magnitudes of the coefficients of the determinants of leverage, as expected, these differences are due to firm-specific factors rather than country-specific factors. It is common knowledge that the behavior of corporations in the generation and allocation of scarce resources is of vital importance. Moreover, because the mix of funds (leverage ratio) affects the cost and availability of capital and thus firms’ investment decisions, the capital structure choice has long been an issue of great interest in the corporate finance literature. Indeed, the literature examines the determinants of capital structure and its impact on firm performance. The publication of Modigliani and Miller’s (1958) seminal paper, which illustrated, under a number of restrictive assumptions, that the value of a company is independent of its financial structure, encouraged financial economists to identify conditions under which an optimal capital structure would matter. This effort led to the formulation of a number of theories based on tax considerations, bankruptcy costs, agency costs, and asymmetric information issues.
However, none of these theories provides an exact formula for calculating an “optimal” financing policy. In actual fact, what we have are numerous papers that examine the determinants of the capital structure choice of companies. In more specific terms, the literature contains many papers that examine the determinants of the capital structure choice based on a number of factors, including asset structure (fixed assets to total assets), firm profitability, firm age, ownership structure, and others. Whilst it is probably impossible to review this literature in full, some of the main papers examining the capital structure of firms operating in the USA, Europe, Japan, China, and Korea include Titman and Wessels (1988), Harris and Raviv (1991), Rajan and Zingales (1995), Bevan and Danbolt (2000), Desai et al. (2003), Altshuler and Grubert (2003), Mintz and Weichenrieder (2004), Voulgaris et al. (2004), Daskalakis and Psillaki (2007), Huizinga et al. (2007), Mefteh and Oliver (2007), Antoniou et al. (2005), Cai et al. (2008), Serrasqueiro and Rogao (2009), and many others. Following the pioneering work of Singh (1995), a growing number of papers examine the capital structure choice in developing and transition economies. Indeed, this work is important given that capital markets in developing countries are less sophisticated than those prevailing in advanced countries. In addition, it is known that information asymmetry is more prevalent in developing countries. In other words, these observations (less developed capital markets and information asymmetry) might have an impact on the leverage ratios of firms operating in developing countries. On average, it is reported that firms operating in developing and transition countries have relatively lower leverage ratios. In addition, this literature reports the applicability of well-known determinants of capital structure to these markets. Some of these works include Booth et al.
(2001), Mutenheri and Green (2002), Huang and Song (2002), Shah and Hijazi (2004), Deesomsak et al. (2005), Guha-Khasnobis and Kar (2006), Shah and Khan (2007), Abor (2008), Eldomiaty (2007), Salawu and Agboola (2008), Qureshi (2009), and others. As far as Arab stock markets are concerned, based on the period 1996–2001, Omet and Mashharawe (2003) examined the capital structure choice of listed non-financial firms in Jordan, Kuwait, Oman, and Saudi Arabia. The results indicate that these companies have low leverage ratios and extremely low long-term debt in their respective capital structures (1). In addition, Omet (2006) reports that the ownership structure of listed Jordanian firms has no significant impact on their leverage. The primary objective of this paper is to examine the capital structure of firms listed on the ADSE and the DFM. This interest is based on the fact that firms listed on these two markets operate in the same country, the UAE. In other words, this paper examines whether national or firm-specific differences have an effect on the capital structure of listed non-financial firms in Abu Dhabi and Dubai.
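The kind of leverage regression such determinants studies estimate can be sketched as a pooled OLS on a small simulated panel. Everything below is illustrative: the panel dimensions, variable names, and coefficients are assumptions, not the paper's data or estimates:

```python
# Illustrative pooled regression of leverage on standard determinants
# (simulated panel; coefficients and names are assumptions, not the
# paper's results).
import numpy as np

rng = np.random.default_rng(2)
n_firms, n_years = 60, 5                   # a 2004-2008 style panel
n = n_firms * n_years
tangibility = rng.uniform(0.1, 0.8, n)     # fixed assets / total assets
profitability = rng.normal(0.08, 0.05, n)  # return on assets
size = rng.normal(12, 1.5, n)              # log of total assets
# Simulated "true" relations: leverage rises with tangibility and size,
# falls with profitability (the pecking-order prediction)
leverage = (0.1 + 0.3 * tangibility - 0.8 * profitability + 0.01 * size
            + rng.normal(0, 0.05, n))

X = np.column_stack([np.ones(n), tangibility, profitability, size])
beta, *_ = np.linalg.lstsq(X, leverage, rcond=None)
for name, b in zip(["const", "tangibility", "profitability", "size"], beta):
    print(f"{name:>13}: {b:+.3f}")
```

Comparing the signs and magnitudes of such coefficients across the ADSE and DFM subsamples is, in essence, the paper's test of firm-specific versus country-specific differences.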
Outcomes of Referral and Non-Referral Hiring
Imran A. Shahzad, Lecturer (Management Sciences), Foundation University, Islamabad, Pakistan
This paper analyzes the impact of referral and non-referral hiring on employee job satisfaction, employee productivity, and employee turnover intentions. In Pakistan, referral hiring has a dual nature: in the public sector it carries a negative undertone, while in the private sector, especially in transnational organizations, it is regarded as a healthy and widely used practice when an employee is urgently needed. In this study, the sample consisted of 100 employees selected from the OGDCL head office in Islamabad, which is known as the primary business in the field of oil and gas in the government (public) sector of Pakistan. It is concluded that referral hiring plays a significant role in employee job satisfaction and employee productivity compared with non-referral hiring. As referral hiring increases, employee turnover intentions decrease, and as non-referral hiring increases, employee productivity decreases. Recruitment is a process of searching for prospective employees and stimulating them to apply for jobs in the organization. The sources of recruitment are categorized into two broad categories: formal (non-referral) and informal (referral) sources. Formal hiring sources include advertisements, the internet, job fairs, and recruitment agencies, whereas personal contacts and professional contacts are informal sources of hiring. Several studies have found that inaccurate information at entry results in unmet expectations during the encounter phase of socialization. Applicants recruited through informal sources reported receiving more accurate information about the job from their reference source and had more realistic expectations than applicants recruited through formal sources (Blau, 1990; Breaugh and Mann, 1984; Quaglieri, 1982). Organizations recruit, select, and induct employees from different sources.
The two most common sources are termed referral and non-referral hiring. As no previous studies are available on hiring in Pakistan, specifically in the oil and gas industry, HR professionals are not yet clear about which source of hiring yields more job satisfaction and lower turnover intentions. The primary aim of the study was to identify the best source of hiring and its impact on employee job satisfaction, employee productivity, and employee turnover intentions. A secondary aim was to differentiate the pros and cons of referral and non-referral hiring for organizations. Thirdly, it aimed to guide industry on which method of hiring is more beneficial. Referral hiring is a process mainly based on employees’ social networks. Relying on referrals has the consequence of expanding the firm's recruiting horizon and tapping into pools of applicants who would not otherwise apply. Kirnan, Farley, and Geisinger (1989) suggest that the pool of referred applicants is more qualified and more readily hirable than non-referred applicants. Costs of new hires also vary among sources, because it is highly normative to pay for some sources (e.g., formal newspaper ads) but not necessarily for others (e.g., informal employee referrals). Rafaeli and Oliver (1998) emphasized that the average cost of hiring through non-referral sources, i.e., advertising, was significantly higher than hiring through employee referrals. This is one reason that referral hiring is the preferred source, especially in small organizations (Tanova, 2003). In informal recruitment, i.e., referral hiring, applicants obtain more and better information about the organizational culture, so individuals who do not fit are less likely to apply (Granovetter, 1981; Chatman, Bell & Staw, 1986; Wanous, 1992; Wanous & Collela, 1989).
The term “job satisfaction” refers to the attitudes and feelings people have about their work. Positive and favorable attitudes toward the job indicate job satisfaction; negative and unfavorable attitudes indicate job dissatisfaction. Morale has also been defined as “being equivalent to job satisfaction,” as highlighted by Michael (2006). John (2004) highlighted that for most employees, money is not the prime motivator in employment. Job satisfaction may comprise a number of factors depending on employees’ personal needs, wants, and urgencies, as Michael (2006) notes. The Workplace Employee Relations Survey (WERS, 2005) measured job satisfaction in terms of sense of achievement, scope for using initiative, influence over the job, training, higher pay, job security, the work itself, involvement in decision making, degree of social interaction, control over work pace, and promotions; the level of job satisfaction may depend more on individual or group needs, expectations from the organization, and quality of work life. People hired through contacts are more satisfied with their jobs, stay in the organization longer, and quit less frequently (Datcher, 1983).
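The comparison such a study implies, mean satisfaction of referral versus non-referral hires, can be sketched with a two-sample t statistic. The data below are simulated Likert-style scores, and the group sizes and means are hypothetical, not the study's:

```python
# Hypothetical illustration of the group comparison implied by the study:
# mean job-satisfaction scores for referral vs non-referral hires compared
# with Welch's two-sample t statistic (simulated 1-5 scale data).
import numpy as np

rng = np.random.default_rng(5)
referral = rng.normal(4.0, 0.6, 50)      # simulated scores, referred hires
non_referral = rng.normal(3.6, 0.6, 50)  # simulated scores, non-referred

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances allowed)."""
    va, vb = a.var(ddof=1), b.var(ddof=1)
    return (a.mean() - b.mean()) / np.sqrt(va / len(a) + vb / len(b))

t = welch_t(referral, non_referral)
print(f"mean diff = {referral.mean() - non_referral.mean():.2f}, t = {t:.2f}")
```

A significantly positive t in favor of the referral group would correspond to the paper's conclusion that referral hires report higher job satisfaction.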
The Effect of Pre 2007-08 Imported Crude Oil Price Increases on U.S. Economy
Dr. Hussein Zeaiter, Lebanese American University, Beirut, Lebanon
For decades, oil prices have been a key player in the US economy as well as other economies. Most oil price shocks were mainly caused by supply disturbances. This study examines the asymmetric effect of supply-driven foreign oil price increases on various U.S. economic activities. Using the imported oil price variable addresses the question of the impact on the U.S. economy of exogenous changes in non-domestic oil prices, which usually reflect events outside the United States. VAR models, unit root tests, Granger causality tests, variance decomposition, and impulse response functions are used to evaluate the impact of oil price shocks on the macroeconomy. The period chosen for the econometric analysis is 1948:II to 2000:IV, with quarterly data. This paper concludes that imported oil prices have a greater effect on the U.S. economy than domestic oil prices. Also, non-oil import prices play an important role in output growth and the other economic components studied. Since 1973, major oil price changes have led many economists and policy makers to study the relationship between oil prices and various economic components. Historically, most oil price shocks were mainly caused by supply disturbances, whereas heavy demand was the main cause of the price increase in 2007-08. This study discusses the influence of oil price increases caused by supply shocks, occurring prior to the 2007-08 oil price increase, on many aspects of the US economy. Supply shocks are disturbances to the economy whose first impact is a rise in the price level, which causes an increase in the unemployment rate and a reduction in real output and can lead the whole economy into a recession. Before the 1970s, oil price increases were not considered a main factor in economic recessions.
In the 1970s, this view changed remarkably when two major oil shocks increased the cost of production, which had a major impact on economists and policy makers and caused them to reevaluate the old concepts. The first shock, in 1973-1974, increased the real price of oil, leading the economy into a deep recession in 1973-1975. The second price increase, in 1979-1980, doubled the price of oil and sharply accelerated inflation. The high inflation led, in 1980-1982, to a tight monetary policy to fight the inflation, with the result that the economy went into a deeper recession than that of 1973-1975. After 1982, the relative price of oil fell throughout most of the 1980s, with a particularly sharp decline in 1985-1986. There was a brief oil price shock in the second half of 1990 as a result of the Iraqi invasion of Kuwait. That temporary shock helped worsen the recession of 1990-1991, though the recession is dated as having begun in July, before Kuwait was invaded. The two oil-price-shock-related recessions of the 1970s leave little doubt that supply shocks matter. The brutal effect of the oil embargo on the US economy in the 1970s led many economists to review and analyze previous recessions, especially in the post-war period, to see which of them might have been caused either by increasing oil prices or by a tightening of monetary policy. Thus, examining the effects of increasing oil prices and tightening monetary policy on aggregate output has become the goal of many economists. Economists have found that seven out of eight recessions prior to the 2007-08 price increase were accompanied by an increase in oil prices. This paper is organized as follows: Section II reviews studies on the impact of oil price shocks. Section III develops the empirical study, using VAR models, unit root tests, Granger causality tests, variance decomposition, and impulse response functions to evaluate the impact of oil price shocks on the macroeconomy.
Section IV concludes the paper. Sections V and VI define the data used and report the references, respectively. Using domestic oil prices, Gisser and Goodwin (1986), Lee, Ni, and Ratti (1995), Hamilton (1996, 2000, and 2001), Hooker (1996), and Bernanke, Gertler, and Watson, BGW (1997), all found an asymmetric effect on the U.S. economy. The same result was reached by Mork (1989), who used the refiners’ acquisition cost, which includes the price of oil imports. The general approach in these studies uses the VAR model developed by Sims (1980). Hamilton (1983) examined the correlation between oil price shocks and US recessions in the post-World War II period. He claimed that seven of eight post-war recessions were preceded by an increase in the price of crude oil, and concluded that there is little evidence to support the claim that the correlation between oil prices and output prior to 1973 represents a statistical coincidence.
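The Granger causality machinery used in the empirical section can be illustrated in miniature. The sketch below is not the paper's own code: it runs a bivariate Granger causality F-test on synthetic quarterly series in which a hypothetical "oil" series drives an "output" series with a one-period lag; the variable names, coefficients, and lag length are illustrative assumptions.

```python
import numpy as np

def granger_f(y, x, p=1):
    """F-statistic for the null 'x does not Granger-cause y' with p lags.

    Compares a restricted regression of y on its own lags with an
    unrestricted regression that adds p lags of x.
    """
    T = len(y)
    Y = y[p:]
    ylags = np.column_stack([y[p - k:T - k] for k in range(1, p + 1)])
    xlags = np.column_stack([x[p - k:T - k] for k in range(1, p + 1)])
    const = np.ones((T - p, 1))
    Xr = np.hstack([const, ylags])          # restricted model
    Xu = np.hstack([const, ylags, xlags])   # unrestricted model
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    rss_r, rss_u = rss(Xr), rss(Xu)
    dof = (T - p) - Xu.shape[1]             # residual degrees of freedom
    return ((rss_r - rss_u) / p) / (rss_u / dof)

# Synthetic data: oil shocks feed into output growth with a one-period lag.
rng = np.random.default_rng(42)
T = 500
oil = np.zeros(T)
gdp = np.zeros(T)
e = rng.standard_normal((T, 2))
for t in range(1, T):
    oil[t] = 0.6 * oil[t - 1] + e[t, 0]
    gdp[t] = 0.4 * gdp[t - 1] - 0.8 * oil[t - 1] + e[t, 1]

f_oil_to_gdp = granger_f(gdp, oil)  # should be large: oil Granger-causes gdp
f_gdp_to_oil = granger_f(oil, gdp)  # should stay near the null F range
print(f_oil_to_gdp, f_gdp_to_oil)
```

With one lag the statistic is compared against an F(1, T-4) critical value (roughly 3.86 at the 5% level); the same construction generalizes to the multi-lag, multi-variable VAR setting the paper uses.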
Tourism Destination Attractiveness: The Mediating Effect of Destination Support Services
Dr. Sebastian Vengesayi, University of Tasmania, Hobart, Australia
This paper examines the influence of tourist attractions, destination support services and people-related factors on the attractiveness of a tourism destination. The sample consists of 275 tourists visiting major tourism destinations. Through Structural Equation Modeling, the study investigates the association between destination attractions and destination attractiveness, and how this relationship is mediated by destination support services. Destination support services were found to have a direct effect on destination attractiveness, contrary to literature that positions them as merely complementary to destination attractions. Studies of destination attractiveness are limited (Formica, 2002) in that attempts to measure or assess the attractiveness of tourism destinations are ad hoc and therefore of little use to most stakeholders at these destinations. Existing destination studies seek to identify the most popular destination attributes, that is, the attractions and activities within a destination that are popular among tourists. Little attempt is made, however, to highlight the relationship between destination attributes and destination attractiveness. Destination attractions are said to be the primary determinants of destination attractiveness; hence destination attractions are directly related to destination attractiveness. The literature further suggests that destination support services play an indirect role, complementary to destination attractions, in the relationship with destination attractiveness. This study empirically investigates the relationship between destination attributes, destination support services and destination attractiveness, and examines the mediating role of destination support services in this relationship. The attractiveness of a tourism destination often refers to the opinions of visitors about the destination’s perceived ability to satisfy their needs.
Research has shown that attractiveness studies are necessary for understanding the elements that encourage people to travel (Formica, 2002). The more a destination is able to meet the needs of tourists, the more attractive it is perceived to be and the more likely it is to be chosen in preference to competing destinations. Thus, the major value of destination attractiveness is the pulling effect attractiveness has on tourists (Kim & Lee, 2002). Mayo and Jarvis (1981) define destination attractiveness as “the relative importance of individual benefits and the perceived ability of the destination to deliver these individual benefits” (p. 201). This ability is enhanced by the specific attributes that make up the destination. A tourism destination is therefore a combination of destination attributes, mostly including tourist facilities and services (Hu & Ritchie, 1993). In assessing the attractiveness of a destination, tourists evaluate the perceived ability of the destination attributes to meet their needs (Mayo & Jarvis, 1981). The attractiveness of a destination diminishes in the absence of these attributes. Moreover, in the absence of destination attractiveness tourism would not exist and there would be little or no need for tourist facilities and services (Kim & Lee, 2002). A number of studies identify the attributes that tourists consider important in evaluating the attractiveness of a destination (Gearing, Swart, & Var, 1974; H.-b. Kim, 1998; Meinung, 1995). For example, Middleton (1989) examines three attributes of destination attractiveness: facilities, prices of venues and transport networks. Gartner (1989) identifies several other attributes of destination attractiveness, including historic and cultural sites, nightlife, liquor, outdoor life, the natural environment and receptiveness, among others.
Meinung (1995) argues that scenery is one of the most important attributes in attracting tourists, while cultural attributes are growing in importance in the global demand for tourism. In a study of Korean destinations, Kim (1998) lists several other factors affecting the attractiveness of a destination: a clean and peaceful environment, quality of accommodation facilities, family-oriented amenities, safety, accessibility, reputation, and entertainment and recreational opportunities. A review of the literature suggests that destinations are multi-attribute, and thus the identification of the various categories of these attributes becomes important. Further, the identification of the core destination attributes should be a priority for destination researchers, given the need for destination managers and marketers to allocate scarce developmental resources. Many researchers have categorized destination attributes into groups (Ferrario, 1979; Leiper, 1990; Lew, 1987; Ritchie & Zins, 1978). The grouping of destination attributes – predictors of destination attractiveness – has its roots in the study by Ferrario (1979). According to Ferrario (1979), for a destination to be attractive there should be something very special within it (that is, an attraction). Thus attractions represent the first important group or category of destination attractiveness. This assertion is supported by Crouch and Ritchie (1999), who note that attractions are the primary factors that pull people to visit a destination and thus destination attractions are the main determinants of destination attractiveness. In order for tourism to flourish there should be attractions within a destination; other attributes are complementary. The second group of destination attributes that predict its attractiveness comprises destination support services and facilities. According to Dwyer et al.
(2003) and Ritchie and Crouch (2000), destination support services and facilities play a complementary role in predicting the success of a destination. However, without attractions within a destination, support services become irrelevant. The third group of destination attractiveness predictors includes people-related factors. This group also plays a complementary role to destination attractions. On their own, people-related factors are not useful; they require the existence of attractions and support facilities and services to which people can add value.
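The mediation logic described above (attractions → support services → attractiveness) can be sketched numerically. The toy example below is not drawn from the study's data: it estimates a simple indirect effect by the product-of-coefficients method on synthetic variables; full SEM as used in the paper would add latent measurement models on top of this regression skeleton.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2000

# Hypothetical variables: x = attractions, m = support services, y = attractiveness.
x = rng.standard_normal(n)
m = 0.6 * x + rng.standard_normal(n)             # path a: X -> M
y = 0.5 * m + 0.1 * x + rng.standard_normal(n)   # path b: M -> Y, plus direct effect

def ols(X, target):
    """Least-squares coefficients with an intercept prepended."""
    X = np.column_stack([np.ones(len(target)), X])
    return np.linalg.lstsq(X, target, rcond=None)[0]

a = ols(x, m)[1]                          # effect of X on M
b = ols(np.column_stack([m, x]), y)[1]    # effect of M on Y, controlling for X
indirect = a * b                          # mediated (indirect) effect of X on Y
print(a, b, indirect)                     # indirect should be near 0.6 * 0.5
```

A non-zero product a·b is the standard evidence that the mediator carries part of the attraction–attractiveness relationship.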
An Empirical Analysis of Stock Pricing, Systematic Risks and Returns on the Ghana Stock Exchange
Joseph Magnus Frimpong, Kwame Nkrumah University of Science and Technology, Kumasi, Ghana
The study uses the Capital Asset Pricing Model (CAPM) and regression analysis to examine relationships among beta coefficients, stock prices and returns of over 50% of the total stocks on the Ghana Stock Exchange (GSE). The Security Market Line for each year of the ten-year period was scrutinized to discern behavioural trends among aggressive stocks, defensive stocks and the GSE All-Share Index. The study showed that defensive stocks with betas close to one are usually fairly priced according to the CAPM criteria. Aggressive stocks with beta coefficients in excess of one and highly defensive stocks with beta coefficients close to zero are both likely to be mispriced, but the distinguishing feature is that, once mispriced, the former tend to be overpriced whilst the latter tend to be underpriced. The study also reveals quite a strong positive relationship between systematic risk and the returns on stocks, but concludes that highly priced stocks on the Ghana Stock Exchange have no influence on the size of returns paid to investors. The desire for a formidable and reliable source of business finance makes the stock exchange an essential financial institution in an economy. Despite the enormous contributions a stock exchange offers to business operations, its impact has been limited in many developing countries. Investors expect to be compensated for the risk of investing their funds in the stocks of listed companies. According to portfolio theory (Markowitz, 1952), investors require a higher return from the market portfolio than from risk-free investments. Quoting fair prices for securities so as to eliminate mispricing has become a challenge to the business world. Several works have explored the determinants of stock prices on the stock market.
What distinguishes this work is that it dichotomizes the risk-return relationship between aggressive stocks and defensive stocks and investigates whether investors in each category are adequately compensated for the risk they take on the Ghana Stock Exchange. The study also investigates the relationship between the price of a security and its return on the Ghana Stock Exchange. Using the Capital Asset Pricing Model (CAPM), this study classifies stocks according to their beta ranking and attempts to find out how fairly the stocks of companies listed on the Ghana Stock Exchange are priced relative to their beta coefficients. It also examines the relationship between the stock price and return of companies listed on the Ghana Stock Exchange. The majority of research on financial markets considers those in developed economies, which are relatively efficient and do not suffer from the inefficiency problems of less developed markets. The subject of financial markets in developing countries still needs lengthy analysis and more research attention. The importance of this study therefore stems from its being an empirical attempt to examine a financial market in a developing country, Ghana. The capital raised by a corporation through the issue of stock entitles holders to an ownership interest. A stock is a security that represents ownership in a publicly traded company; it can also be defined as an ownership right in a corporation which can be bought and sold. Stock prices are fundamentally determined by demand and supply. In stock market terminology, demand refers to bids and supply refers to offers. All other things being equal, the price of a stock will go up if bids for the stock exceed offers. Similarly, a stock's price is most likely to fall if offers exceed bids. Bids and offers normally change in response to the changing expectations of the investing public.
Other factors behind demand and supply that affect stock prices are interest rates, market expectations, the economy, the performance of the listed company, major news from the listed company and investor psychology. Shiller (2000) has argued that stock prices in the 1990s showed the classic features of a speculative bubble: high prices were sustained temporarily by investor enthusiasm rather than real fundamental factors. Investors, according to Shiller (2000), believe it is safe to buy stocks not because of their intrinsic value or expected future dividend payments, but because they can be sold to someone else at a higher price. Simply put, stock prices are driven by a self-fulfilling prophecy based on the similar beliefs of a large cross-section of investors. In a financial market, investments differ widely in their risk and return characteristics. A bank savings account offers immediate returns and carries little risk, while other investments such as shares may not offer immediate returns and carry substantial risk. In order to make sound investment decisions, it is important to be able to evaluate the return and risk of various investment alternatives (Mensah, 2008).
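The beta estimation and Security-Market-Line pricing check underlying the study can be sketched as follows. This is an illustrative reconstruction on simulated returns, not the author's code; the risk-free rate, sample size, and return-generating parameters are made-up assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 260  # e.g. weekly observations over five years (assumed)
rf = 0.0005                                 # assumed per-period risk-free rate
rm = 0.002 + 0.02 * rng.standard_normal(n)  # simulated market (index) returns

def simulate_stock(beta, alpha=0.0):
    """Returns following the market model with a given beta and alpha."""
    return rf + alpha + beta * (rm - rf) + 0.01 * rng.standard_normal(n)

r_aggressive = simulate_stock(beta=1.5)  # aggressive: beta well above one
r_defensive = simulate_stock(beta=0.3)   # defensive: beta close to zero

def capm_beta(ri):
    """Estimate beta as cov(ri, rm) / var(rm)."""
    return np.cov(ri, rm)[0, 1] / np.var(rm, ddof=1)

def jensen_alpha(ri):
    """Deviation of the mean return from the Security Market Line.

    A positive alpha suggests underpricing; a negative alpha, overpricing."""
    beta = capm_beta(ri)
    return ri.mean() - (rf + beta * (rm.mean() - rf))

b_agg, b_def = capm_beta(r_aggressive), capm_beta(r_defensive)
print(b_agg, b_def, jensen_alpha(r_aggressive), jensen_alpha(r_defensive))
```

Ranking estimated betas against one reproduces the aggressive/defensive classification, and the sign of the alpha is the CAPM-based fair-pricing verdict for each stock.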
Integrate Active and Passive Strategy in Portfolio Construction
Dr. William Yao Chun Tsao, Cheng Shiu University, Kaohsiung County, Taiwan
Dr. Wen Kuei Chen, I Shou University, Kaohsiung County, Taiwan
This study integrates Third Degree Stochastic Dominance (TSD) and Cumulative Prospect Theory (CPT) approaches to construct CPT-TSD portfolios. Institutional investors could use this method to construct portfolios with fewer holdings to supervise and lower trading costs. To the best of our knowledge, this paper is the first attempt to integrate the TSD method into a CPT foundation in portfolio practice. Ideally, the active manager exploits market inefficiencies by purchasing securities that are undervalued. Depending on the goals of the specific investment portfolio, hedge fund or mutual fund, active management may also serve to create less volatility (or risk) than the benchmark index. The reduction of risk may be instead of, or in addition to, the goal of creating an investment return greater than the benchmark. The effectiveness of an actively managed investment portfolio obviously depends on the skill of the manager and research staff. In reality, the majority of actively managed collective investment schemes rarely outperform their index counterparts over an extended period of time, assuming that they are benchmarked correctly. The primary attraction of active management is that it allows selection of a variety of investments instead of investing in the market as a whole. Its most obvious disadvantage is that the fund manager may make bad investment choices or follow an unsound theory in managing the portfolio. Passive management is a financial strategy in which a fund manager makes as few portfolio decisions as possible, in order to minimize transaction costs, including the incidence of capital gains tax. One popular method is to mimic the performance of an externally specified index. The concept of passive management is counterintuitive to many investors.
Actively managed mutual funds must strive to overcome this cost disadvantage by assiduously searching for and identifying investment opportunities that have the potential to generate above-average earnings and price appreciation. This highly competitive and daunting task is sufficiently demanding that the majority of equity funds have been unable to provide long-term performance superiority relative to the broad market. Moreover, funds that dominate market averages in a specific time frame are typically unable to sustain outsized performance in subsequent years. Hence the familiar disclaimer in a mutual fund prospectus: "Past performance is no guarantee of future results." To overcome the disadvantages of both active- and passive-style management, we try to integrate active operation into a passive portfolio so as to reduce the number of holdings. The traditional technical apparatus in portfolio treatment has been Markowitz's (1952, 1959) Mean-Variance (MV) framework. The reliability of performance comparisons using the MV criterion depends on the degree of non-normality of the returns and on a non-quadratic utility function (Fung and Hsieh, 1999). In many circumstances these assumptions appear questionable (see Markowitz, 1952, 1959; Sharpe, 1964). When the assumptions of MV do not hold, the Stochastic Dominance (SD) efficiency criteria offer the most immediate extension (Bawa, 1982; Levy, 1992). Given the debate between active and passive portfolio management, the purpose of the empirical exploration presented herein is to investigate the efficacy of SD theory in deducing an efficient subset of ETFs. The aim of this study is to provide a superior filter as opposed to the buy-and-hold strategy of traditional ETFs. This paper enables investors to make allocation choices that concern the macro issue beyond a narrow micro-level study.
This was one of the important topics that De Bondt, Shefrin and Staikouras (2008) suggested for future research in behavioral finance. An exchange-traded fund (ETF) is an investment vehicle traded on stock exchanges, much like stocks. It is a fund in name, but in substance it is an investment instrument tracking the performance of specific indexes, listed on the stock exchange for trading. An ETF replicates index stocks and is a form of passive management: it tracks its target index through replication or sampling. When the components or weights of the linked index change, the manager adjusts the portfolio of the ETF by adjusting its component stocks or their weights. Adjustment is usually made at regular intervals, in contrast with the frequent adjustment of actively managed funds. Therefore, an ETF has the advantage of a return similar to that of the index. An ETF does not require a fund manager and a research team to manage the portfolio, so the management fee is relatively low. An ETF can track the complete portfolio of the target index and is thus efficient in diversifying its investment. The investment portfolio of an ETF is identical to the portfolio of the index; investments are transparent and unlikely to be affected by human factors. ETFs may be attractive as investments because of their low costs, tax efficiency, and stock-like features. An ETF combines the valuation feature of a mutual fund or unit investment trust, which can be purchased or redeemed at the end of each trading day for its net asset value, with the tradability feature of a closed-end fund, which trades throughout the trading day at prices that may be substantially more or less than its net asset value. ETFs have been available in the US since 1993 and in Europe since 1999.
Existing ETFs have transparent portfolios, so institutional investors will know exactly what portfolio assets they must assemble if they wish to purchase a creation unit, and the exchange disseminates the updated net asset value of the shares throughout the trading day. Because ETFs can be economically acquired, held, and disposed of, some investors invest in ETF shares as a long-term investment for asset allocation purposes, while other investors trade ETF shares frequently to implement market timing investment strategies. Most ETFs are index funds that hold securities and attempt to replicate the performance of a stock market index. An index fund seeks to track the performance of an index by holding in its portfolio either the contents of the index or a representative sample of the securities in the index.
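The stochastic dominance screening applied to ETFs can be illustrated with a simplified empirical check. The successive-cumulative-sums routine below is a common textbook shortcut for equal-sized return samples, not the authors' exact TSD algorithm; degrees 1, 2, and 3 correspond to first-, second-, and third-degree dominance (FSD, SSD, TSD), and the ETF return series are invented.

```python
import numpy as np

def dominates(a, b, degree=3):
    """Simplified empirical stochastic-dominance check for equal-sized samples.

    Sort both return samples and take successive cumulative sums: degree 1
    compares sorted values (FSD), degree 2 their running sums (SSD), and
    degree 3 the running sums of those (TSD). Sample a dominates b if its
    curve is everywhere >= b's and strictly greater somewhere.
    """
    u, v = np.sort(a), np.sort(b)
    for _ in range(degree - 1):
        u, v = np.cumsum(u), np.cumsum(v)
    return bool(np.all(u >= v) and np.any(u > v))

rng = np.random.default_rng(3)
etf_b = 0.005 + 0.02 * rng.standard_normal(250)  # hypothetical daily ETF returns
etf_a = etf_b + 0.001                            # same shape, uniformly higher return

# etf_a shifts every outcome up, so it dominates at all three degrees.
print(dominates(etf_a, etf_b, degree=3), dominates(etf_b, etf_a, degree=3))
```

Screening each pair of funds this way and discarding any fund that is dominated yields the efficient subset the paper seeks; the CPT layer would then re-rank the survivors.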
Graphical Programming: A Business Student’s Tour
Dr. Jerry Chin, Missouri State University, MO
Mary H. Chin, Missouri State University, MO
Sam Student and his fellow students approach a problem after realizing there was little team knowledge about the original problem as presented. They realize that their first employers will expect them to learn the business, or new skills, as required by the problem-solution environment. Their problem boils down to a naïve string search, examining characters one by one from the beginning of the string to its end. This paper follows Sam Student through the solution process of the Transform Functional Form Algorithm problem, and demonstrates solving a logic problem via graphical programming. The logic problem is reduced to a puzzle more familiar to the student: string search and replacement. The problem is first defined and then reduced to a more manageable task with graphical computer programming. In summary, we see analysis, problem deconstruction, and an interesting solution. Sam Student has been an MBA student for a year and has registered for a new IT class that integrates business and IT. This week his team, composed of a computer science student, a liberal arts major, and an undergraduate double major in management policy and marketing, has begun to look at Marten, a graphical programming tool. The class is team-taught by several faculty members across the campus. Dr. P begins his lecture with a basic introduction to logic. Lilly, the liberal arts major, has taken courses in basic logic and philosophy and is comfortable with the material. Nico, the computer science major, has had a basic course in logic and mathematical proofs; the lecture stirs memories of a fast-paced summer course. Sam is neither a programmer nor familiar with logic; his strengths are analysis and project management. Dr. P's instructions are to present an analysis of the problem, and the group decides to meet to brainstorm about it.
The transform functional form algorithm (TFFA) problem requires the students to examine a string that represents a logic statement whose validity is to be determined. The problem, according to Barwise and Etchemendy (2003), can be characterized as follows: “Start at the beginning of the string and proceed to the right. If there is a quantifier or an atomic sentence (e.g. Py), begin to underline. If you start with a quantifier, underline it and its corresponding formula; this will either be enclosed in parentheses or be just an atomic sentence. Assign each underlined constituent a letter (A, B, C…). If a constituent is identical to a prior one, use that same letter. Otherwise, use a new letter.” Unfamiliar with logic, the group found this statement to be a confusing string of characters, and consulted a logic book, The Logic Book (Bergmann, Moor, and Nelson, 1998). Fortunately, the group then transforms the problem in such a way that a solution actually exists. With some reflection, the students can approach the TFFA problem as a string manipulation problem, without really understanding the problem in a logic context. They restate it as a more general problem: suppose string S is composed of substrings, where a marks the position of the first character and w marks the last character of each substring. Transform S by substituting each substring with a corresponding element β from a given set, and if two substrings Si and Sj are equal, use the same element for both. With this new view of the problem, the students feel confident enough to attempt a solution using a program that matches their combined programming experience. Marten (Marten, 2006) is a MacOS X software development environment that allows the student to draw the code, linking icons to direct the flow of data, processes and control.
The early prototype was Prograph CPX (Steinman and Carver, 1995), and Marten was released in spring 2007. This version permits the implementation of objects, which will be shown in this paper. The debug feature allows the execution of methods if the code is current. The students decide on a naïve string search; that is, each character will be examined. The students need two string functions to solve the problem. The “prefix” method divides a string of length n into two substrings, one of length K and the remaining n−K characters. In FIGURE 1, α is the current position of the current character.
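The students' restated problem — scan the string left to right, capture each parenthesized constituent or atomic sentence, and replace identical constituents with the same letter — translates directly into conventional code. The sketch below is a plain-Python analogue of the graphical Marten solution, not the students' actual program; the sample logic string is an invented example.

```python
def assign_letters(s):
    """Naively scan s character by character, replacing each parenthesized
    group or two-character atomic sentence (e.g. 'Py') with a letter.
    Identical constituents receive the same letter."""
    letters = {}   # constituent -> assigned letter
    out = []
    i = 0
    while i < len(s):
        if s[i] == '(':
            # Capture up to and including the matching closing parenthesis.
            depth, j = 1, i + 1
            while depth:
                depth += {'(': 1, ')': -1}.get(s[j], 0)
                j += 1
            token, i = s[i:j], j
        elif s[i].isupper() and i + 1 < len(s) and s[i + 1].islower():
            token, i = s[i:i + 2], i + 2   # atomic sentence like 'Py'
        else:
            out.append(s[i])               # connectives and spaces pass through
            i += 1
            continue
        if token not in letters:
            letters[token] = chr(ord('A') + len(letters))
        out.append(letters[token])
    return ''.join(out)

# A repeated constituent reuses its letter; a new constituent gets the next one.
print(assign_letters('(Px & Qy) v (Px & Qy) v Rz'))  # -> A v A v B
```

Like the naïve search the students chose, the scan inspects every character once, which is entirely adequate for statements of classroom size.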
Attitudes towards Career Women Roles in outlook of Family-Social Surroundings: Perspectives from the UAE
Dr. Fauzia Jabeen, Abu Dhabi University, Abu Dhabi, United Arab Emirates
The present study examines UAE career women’s attitudes towards their roles, held by a sample of 944 participants. To determine the effect of the family social environment, workplace culture and self-esteem on their attitudes, the respondents completed the Attitude towards Women Roles scale, the Family Environment Scale and the Self-esteem Inventory. The results reveal that the greater the non-traditional attitude expressed by high self-esteem females, the more their family environment was marked by social change and liberalism, whereas low self-esteem females were found to strive for personal advancement and also experienced less support from their families. This study provides some insights into the factors associated with attitudes towards career women in the UAE and suggests ways to enhance the self-esteem of UAE career women. The world today is witnessing rapid developments in different fields. It is undergoing major transformations because of the rapid technological advances in the fields of production, distribution, communication and information. These developments have given rise to the concept of the small global village. Arab countries no longer live in isolation from international events but form an integral part of this world. They have succeeded in handling some of these developments, but in other fields, especially the economic, social and institutional ones, they are still lagging behind. A common focus in this respect is on empowering Arab women so as to reinforce their role in political, social and economic development and provide them with opportunities to build up their capabilities without gender discrimination. The empowerment of women via education, training, healthcare and the provision of job opportunities will contribute towards boosting the standard of universal development.
The issue of measuring career women’s attitudes towards women’s roles was an important concern in the twentieth century. It became of particular importance in the Arab world when women began to enter the labor force in record numbers. For example, in 1960 women in the Arab world constituted only 12 percent of the labor force, while in 1995 they constituted 30 percent. In 1980 UAE women constituted 3.4 percent of the labor force; by 2006 this figure had only risen to 13.6 percent, despite the fact that the majority of university graduates are women (Source: UAE in Figures 2007; Ministry of Economy, UAE). Attitudes towards women involve expectations directed at women (Spence and Helmreich, 1972), and these ideas are often based on negative stereotypes and broad assumptions about women’s characteristics (Conway and Vartanian, 2000). Research indicates that gender roles commonly lead to the discouragement of women’s employment outside the home in non-traditional jobs (Heilman, 1997; Schreiber, 1998). Females enter adulthood by assuming the roles of worker, spouse and parent, whereas males plan their vocational career and life course relatively independently. In Middle Eastern countries, where females are more guided by environmental and psychological factors, they are forced to consider the choice between family and career or to opt for both. The responsibility for not adopting non-traditional roles or raising self-esteem, in various age groups, lies with both the attitudes of family members and the females themselves. Whittake and Christine (2001), in line with the above findings, found multi-dimensional family functioning to be a significant factor in women’s personal growth and avenues for development. Research on attitudes to women’s roles has shown, over the last two decades or so, a universal trend of increasing liberalism and acceptance of more egalitarian role definitions, especially among women (e.g.
Allan and Coltran, 1996). Twenge (1997), in a meta-analysis of literature on attitudes towards women (1970-1995), suggested that mean scores of attitudes towards women were strongly, positively correlated with the year of the study. Twenge argued that this statistic suggests a trend toward more liberal attitudes towards women over the course of this period. However, Arab societies seem to be reluctant to abandon their traditional viewpoint of women primarily committed to the house and children (El-Jardawi, 1986; Abdalla, 1996; El-Rahmony, 2002; Orabi, 1999). Most Arab men consider households and domestic activities suitable for women and most Arab families educate their sons rather than their daughters on the assumption that boys are a greater economic asset than girls (El-Ghannam, 2001, 2002). Some conservative Arab societies like Saudi Arabia are completely dominated by men: women cannot go out in public without being covered from head to toe in black. They cannot drive, nor can they run a business in their own name (Gulf News, 2004). As a result of these traditional viewpoints towards women in Arab societies, Connors (1987) found that the majority of women are employed in three occupations: elementary school teacher, secretary, and nurse. Arab countries scored high in Hofstede’s (1980) masculinity dimension. Although little work has been done to specifically examine the role of masculinity in relation to sexist attitudes, Western researchers (Archer and Rhodes, 1989; Spence, 1993) have confirmed the contribution of masculinity to sex-related attitudes, implying that these sex-related traits significantly influence biased sex-related attitudes. The strong emphasis in Arab culture on masculine role attributes (Dedoussis, 2004) is expected to contribute to the UAE society’s traditional attitudes towards women managers.
Porter’s Generic Strategies and Environmental Scanning Techniques: Evidence from Egypt
Dr. Mansour S. M. A. M. Lotayif, Beni Suef University, Egypt
The current study aims at identifying the causality relationships between scanning activities and the adopted strategy, between Porter’s generic strategies and organizational performance, between scanning activities and organizational performance, and between demographics and Porter’s strategies. The experiences of 243 Egyptian executives were utilized to achieve these objectives. The data were analyzed through multivariate analytical techniques (e.g. multiple regression) and bivariate analytical techniques (e.g. correlations), using SPSS and StatGraph. “Why do firms obtain different performance?” and “Does formulating and implementing a strategy make a difference to performance?” are the two queries with which the current paper starts, as did Farrell et al. (1992), Rumelt et al. (1994) and Claver (2003). It is claimed that the external environment (i.e. industry and customer characteristics, for instance) of any organization determines to a large extent its adopted strategy (Murray, 1988) for reaping competitive advantage. Porter (1980) defined strategy as choosing to deliver a particular kind of value, rather than just trying to deliver the same kind of value better. To reap a competitive advantage, Porter (1980) introduced three main strategies: cost leadership, differentiation, and market niche leadership (focus). These generic strategies remain the most commonly supported and identified in strategic management textbooks and literature (Allen et al., 2007; Dess et al., 2004; Wheelen and Hunger, 2004; Thompson and Strickland, 2003; David, 2002; and Miller and Dess, 1993). Nowadays, Porter’s generic strategies represent the benchmark for reaping competitive advantage, to the extent that a Japanese authority created a new prize called the “Porter Prize”.
Applying IFRS 8 Operating Segments in the Context of Segments Reporting in Jordan
Dr. Mohammad Yassin Rahahleh, Al-alBayt University, Mafraq-Jordan
The study aimed at identifying the level of implementation of IFRS 8 Operating Segments within Jordanian firms in the context of preparing their financial reports, the obstacles impeding its implementation, and the differences between IAS 14 and IFRS 8. The results of the study show that Jordanian firms disclose 73% of the information about their operating segments required by IFRS 8. This might be attributed to the constraints impeding the implementation of the standard, most importantly the consecutive and constant changes applied to the standard, the high costs of providing the required information, and the low level of awareness of the content of the standard itself. The study recommends that the international standard-setting organizations conduct field studies to explore the practical realities of standards from time to time prior to carrying out any amendments to the standards they issue. Also, the culture of disclosure and awareness of the international standards can be promoted by incorporating them in the curricula of Jordanian universities. In keeping with the international and regional economic environment represented by globalization, trade liberalization, investment flows between countries of the world, the development of international financial markets, and the rise in the number of multinational firms, it has become essential for firms to disclose their segment activities or affiliates in their reports. Indeed, when such firms, having geographical or commodity segments, fail to fully disclose the related information, it becomes difficult to detect deficiencies in these firms or their affiliates.
The Impact of a Diversified Strategy and Annual Report Information Disclosure on Market Performance: Evidence from Taiwan’s Financial Industry Firms
Dr. Ou-Yang Hou, Kun Shan University, Tainan, Taiwan
This study examines the association between a diversified strategy, annual report information disclosure, and the market performance of financial industry firms in Taiwan. Using a sample of 88 interlocking groups and observations of 366 financial firms listed on the Taiwan Stock Exchange (TSE) and Taiwan’s over-the-counter market from 2003 to 2007, the entropy method is adopted to measure the degree of group diversification. Following the regulations of the Security and Futures Institute evaluation system in Taiwan, content analysis is used to measure the degree of annual report information disclosure. The empirical results show that a group’s total diversification (DT) and related diversification (DR) have an insignificantly positive effect on a company’s Tobin’s q, while unrelated diversification (DU) has an insignificantly negative effect. Annual report disclosure likewise has an insignificantly positive impact on Tobin’s q. However, disclosure of financial and operational information and disclosure of the composition of the board of directors and the ownership structure have significantly negative and positive effects, respectively, on a company’s Tobin’s q.
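The entropy measures named above (DT, DR, DU) follow the standard Jacquemin and Berry construction: total diversification is the entropy of segment sales shares, and it decomposes into a within-group (related) and a between-group (unrelated) component, DT = DR + DU. A minimal sketch, with a purely illustrative grouping of segment sales (not the study's data):

```python
import math

def entropy_total(shares):
    """Entropy of a list of sales shares: sum of p * ln(1/p)."""
    return sum(p * math.log(1.0 / p) for p in shares if p > 0)

def entropy_decomposition(groups):
    """groups: dict mapping industry group -> list of segment sales.
    Returns (DT, DR, DU), where DT = DR + DU."""
    total = sum(sum(sales) for sales in groups.values())
    DU = 0.0  # unrelated (between-group) diversification
    DR = 0.0  # related (within-group) diversification, weighted by group share
    for sales in groups.values():
        group_sum = sum(sales)
        Pj = group_sum / total          # group's share of total sales
        if Pj > 0:
            DU += Pj * math.log(1.0 / Pj)
            within = [s / group_sum for s in sales]
            DR += Pj * entropy_total(within)
    return DU + DR, DR, DU

# Illustrative example: two equally sized, single-segment industry groups.
DT, DR, DU = entropy_decomposition({"banking": [50.0], "insurance": [50.0]})
```

A firm operating in one segment only gets DT = 0; a firm split evenly across two unrelated groups gets DT = DU = ln 2 and DR = 0, so higher values indicate greater diversification.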
The Interaction between Teamwork and Organizational Commitment Influenced by Organizations' Characteristics in Electronics Companies
Dr. Yin-Che Chen, National Hsinchu University of Education, Hsinchu, Taiwan
The purpose of this study was to determine whether any association existed between organizations’ characteristics and two organizational interventions, teamwork and organizational commitment, in electronics companies listed on Taiwan’s stock market. Its most significant aspect was to offer an alternative perspective on the interaction between teamwork and organizational commitment as modified by organizations’ characteristics. Data were analyzed using a Structural Equation Modeling (SEM) approach to establish conceptual models for electronics companies. The most representative finding indicated that teamwork and organizational commitment were highly associated. Finally, recommendations for HRD and HRM practice, methodology, and promising future research are included. Today, it is no exaggeration to say that the best-known and most remarkable feature of Taiwanese industry is its highly developed electronics and information-industry exports (J. Wong, 2003). Moreover, given their increasingly influential role in regional and global economies, companies in Taiwan particularly emphasize internal coordination among different units and external industrial collaboration. In accordance with these two orientations, teamwork and organizational commitment have been considered among the most promising interventions and have generated much discussion for their potential in organizational development and integration in Taiwan. The main problem this study investigated had two dimensions: the misapplication of teamwork and the emerging challenge to organizational commitment posed by regulatory change. Teamwork has been recognized by many companies as an important factor influencing organizational effectiveness and efficiency; nevertheless, organizations were often not sure what teamwork was or how to apply it satisfactorily in their own contexts.
For instance, in order to enhance organizational competitiveness, improve operating systems, or upgrade quality of service, organizations established many different types of teams to deal with various problems: problem-solving teams, cross-functional teams, self-directed teams, or managed-work teams. Unfortunately, the number or size of teams did not necessarily translate into the expected results. Instead, the key to successful teamwork depended on both the internal and external characteristics of an organization, not just the classifications of the established teams. In other words, teamwork was likely to be misconstrued theoretically and implemented inappropriately within organizations. High-tech companies in Taiwan have grown rapidly over the past decades thanks to governmental support and global demand. Profit-sharing programs, a strategy by which Taiwanese companies had long issued bonus shares to boost morale and reduce employee turnover, had been considered an influential success factor in high-tech companies. The practice was particularly popular among high-tech companies, where salaries tended to be lower than those at counterparts in the West and Japan. However, starting on January 1, 2008, companies were required to list employees’ bonus shares as expenses in their financial books, as the Taiwanese government aimed to conform more closely to international accounting standards and practices. As a result, organizational commitment became a more compelling challenge because of the potential rise in employee turnover rates. The study therefore added the ratio of employee profit sharing as an emerging independent variable in the relationship between the target population and organizational commitment in Taiwan.
The main purpose of this study was to determine whether an association existed between organizations’ characteristics in electronics companies on Taiwan’s stock market and two organizational interventions: teamwork and organizational commitment. In addition to contributing to the field of human resource development, the unique aspect of this study was that it offered an alternative perspective on the interaction between teamwork and organizational commitment.
Factor and Correlation Analyses of Tourism Attraction, Tourist Satisfaction and Willingness to Revisit – Evidence from Mainland Chinese Tourists to Taiwan
Pen-Fa Ko, Chinese Academy of Sciences, Beijing, China
Dr. Yung-Lun Liu, Toko University, Puzih City, Chiayi County, Taiwan
This study identifies the key factors of tourism attraction, particularly for tourists from Mainland China. It argues for the necessity of knowing what these tourists want to visit, and then analyzes the correlations among tourism attraction, tourist satisfaction, and willingness to revisit. The new wave of Mainland Chinese tourists is a catalyst for economic growth, and many countries are actively developing their tangible and intangible assets as a means of gaining a comparative advantage in an increasingly competitive tourism marketplace. Tourism research has demonstrated that attraction studies are necessary to understand the elements that encourage people to sightsee; to understand tourism attraction factors and their correlation with tourist satisfaction and willingness to revisit, it is important to understand these components and their mutual relationships. The "Measures for the Administration of the Overseas Tours of Chinese Citizens," enacted on July 1, 2002, allowed citizens of Mainland China to travel abroad more freely, and since then tourists from Mainland China have played an increasingly important role in the world tourism market. The number of outbound tourists from Mainland China is estimated to reach 52 million in 2010, up 19.0% over 2009. Moreover, with the continuing appreciation of the RMB (China’s currency) and increases in disposable income and leisure time, the number of outbound tourists from Mainland China is projected to reach 100 million in 2020. Outbound tourists from Mainland China spent US$21.8 billion in 2009; average consumption per person was RMB 8,800 (approximately US$1,300), of which 71% went to shopping, 13% to entertainment, 12% to sightseeing, and 1% to food.
An Assessment of Financial Literacy Communication among College Students
Dr. De’Arno De’Armond, West Texas A&M University
Advertising effectiveness research is nothing new; however, its application to financial literacy advocacy advertising messages is a relatively new concept. This study concentrates on financial literacy message effectiveness and on whether a message of debt or of savings is more effective as an advertising input tested against a control group. The study tests experimental groups against control measures along three consumer dimensions: cognitive, affective, and conative. Particular findings of this project show significant effects on the conative dimension when consumers are exposed to a financial literacy ad versus no exposure. The use of credit as an instrument of consumption by college undergraduates has increased significantly in recent years, and a central theme of credit card use and student loans leading to student financial distress and debt is prevalent in much of the academic research and literature currently published. Dissaving, a condition occurring when expenditures exceed income, results from low-income situations in which spending needs cannot be met, or from high income combined with a high propensity to consume or an unwillingness to curb spending (Katona, 1975). The college student is targeted, arguably at times in a predatory manner, by many financial services and credit card firms, and research has shown this college cohort to be low income while possessing a tolerant attitude toward debt (Davies and Lea, 1995).
A Comparative Analysis of Recent U.S. Recessions: Evidence from Secondary Data
Dr. Manzoor E. Chowdhury, Lincoln University, Missouri
Dr. Sonia H. Manzoor, Westminster College, Missouri
The current recession, which started in 2007 and lasted almost two years (unofficially), has been the longest recession since the Great Depression. Many papers in the popular press have described its severity, but the discussion has been general rather than analytical, focusing mainly on the policy debate and the timing of the recovery; many authors who participated in this discussion reached their conclusions on the basis of anecdotal evidence. Very recently, however, there have been some attempts to analyze the depth and severity of this recession using available data. This paper, as an addition to that literature, uses data from the U.S. Bureau of Labor Statistics (BLS) and other secondary data to compare the current recession with two previous recessions. Results show that the recession that officially began in December 2007 has been the worst since the Great Depression, especially in terms of job loss and its impact on household net worth. The current U.S. recession has been the longest and deepest since the Great Depression, and some commentators in the news media have started to use the term “the Great Recession” to describe the severity of this economic slowdown. Many factors contributed to the recession, ranging from the sub-prime mortgage crisis and the collapse of the banking sector (itself tied to the mortgage crisis) to a synchronized slowdown in global economic activity, particularly in Europe. Many articles have been written in the popular press, such as Internet websites and newspapers, but far fewer in economic journals.
Employee Turnover: Causes, Consequences and Retention Strategies in the Saudi Organizations
Dr. Adnan Iqbal, Prince Sultan University, Riyadh, Saudi Arabia
Employee turnover has always been one of the challenges facing human resource managers and employers in any fast-growing economy, including the Kingdom of Saudi Arabia. Most employers in the Kingdom are not aware of why employees choose to leave their organizations or why they stay. Employees who leave at the organization’s request, as well as those who leave on their own initiative, can cause disruptions in operations, work-team dynamics, and unit performance, and both types of turnover create costs for the organization. To retain their best employees, managers must make sure their organizations clearly communicate expectations about rewards, working environment, and productivity standards, and then deliver on the promise. Although employee turnover is such a serious problem in Middle Eastern organizations, there is limited research investigating it; studies on its causes and consequences are especially scarce. This paper examines the causes and effects of employee turnover and suggests strategies for reducing it within the Saudi business context.
Conservatism Versus Verifiability and Relevance
Manuel Dieguez, Barry University, Miami Shores, FL
Dr. Rosalie C. Hallbauer, Florida Memorial University, Miami Gardens, FL
The principle of conservatism has been around in accounting for many years. During the 1900s, various authors began to bring criticisms forward, but conservatism still exists. This paper briefly explores some of the historical background of conservatism, the pros and cons of conservatism and fair value accounting, and the future of conservatism. According to Webster’s Unabridged Dictionary (online version), conservatism is: “3a: the tendency to accept fact, order, situation, or phenomenon and to be cautious toward or suspicious of change: extreme wariness and caution in outlook.” Early definitions referred to conservatism in relation to the balance sheet, while more recent discussions, such as Basu’s, have related conservatism to its effect on the income statement. Obviously, there is an interrelationship between the statements, and what is done on one affects the other; the issue is more one of perception and how the decision maker uses the information. Some accounting historians have traced concepts similar to conservatism as far back as Zenon (256 BC), whose aim was to “protect property through control of people” and whose actions reflected stewardship and conservatism, but not materiality (Chatfield, 1977, p. 11). The doctrines of stewardship and conservatism were further developed in medieval agency accounting in England: the manorial steward, attempting to protect himself when audited, would underestimate manor profits (Chatfield, 1977, pp. 19, 29).
Malaysian Saving in the 1990s - Problems and Prospects
Sohail Bin Ahmed, University Technology Mara (UITM), Selangor, West Malaysia
Malaysia has saved an average of 33.6% of GNP over the last two years, a level among the highest in the world. For many economists, this would be perceived as an indicator of superior economic performance and of buoyant future growth prospects. After all, Lewis (1954) called raising savings rates the “central problem of economic development” (p. 47). Domestic savings were identified as the key to raising investment, and investment was taken as a sign of growth. Since then, empirical evidence linking domestic savings and domestic investment (Feldstein, 1989), and the external debt problems of LDCs seeking to break this link with foreign borrowing, have given fresh emphasis to the policy prescription of raising domestic savings. More recently, however, the idea that high aggregate savings levels drive growth has been questioned. Kaldor and others argued that income distribution would adjust to equate ex post savings and investment, so that efforts to raise savings would be futile without a rise in investment; in this model, investment drives growth, which drives saving. The causality in the savings/growth relationship has been analyzed econometrically; for example, Ortmeyer (1980) concludes that income drives saving rather than vice versa. This seems to fit the East Asian experience well and has been interpreted as evidence in favor of the permanent income hypothesis (Yusuf, 1984).
The Popular Competence of Hospitality Education in Taiwan: Constructing a Baking Curriculum Model
Dr. Jiung-Bin Chin and Mu-Chen Wu, Hungkuang University, Taichung, Taiwan
Ren Chuan Ko, Mackay Medicine, Nursing and Management College, Taipei, Taiwan
Baking competence has become a pivotal skill in hospitality education in Taiwan, yet very few studies related to it have been published. The main objective of this study is to construct a baking-competence curriculum model for college students and the related baking business. The study applies the Delphi method, collecting advice from 15 experts, including academic scholars, baking specialists, and baking proprietors. The resulting model comprises four dimensions (knowledge, skill, attitude, and creativity) and 47 detailed indicators. Since the Nationalist Government moved to Taiwan in 1949, both the ruling party and the opposition have placed growing importance on promoting flour and wheat foods, to reduce spending on rice and to promote national health. U.S. Wheat Associates and the Taiwan Flour and Wheat Food Promotion Committee collaborated in 1967 to launch training programs for baking professionals and began to cultivate baking practitioners. The Government even brought in Japanese baking masters to give lectures in workshops in 1979, which provided the baking industry with new technology and new horizons.
Key Factors Influencing the Reuse of Historical Buildings in Taiwan from the Viewpoint of Public-Private Partnership
Dr. Kang-Li Wu, Assistant Professor, Department of Urban Planning,
National Cheng-Kung University, Taiwan R.O.C.
Promoting the regeneration of historical buildings by introducing suitable economic activities, as a means to generate economic and social benefits, has become a global trend. However, given the limited financial resources and management technology of the public sector, how the reuse and management of historical buildings should be implemented through public-private partnership (PPP) so that the results benefit the historical buildings, the investors, and society remains a critical research issue. Using historical buildings in Taiwan that have employed PPP as empirical cases, this study explores that issue. By incorporating research methods involving interviews, field surveys, and fuzzy theory, this research identifies the key factors influencing PPP in the reuse of historical buildings in Taiwan. Based on the findings of the empirical study, the paper also provides suggestions for heritage reuse through PPP in order to create a win/win situation for the historical buildings and the investors involved. Historical buildings are valuable economic and cultural resources for promoting place marketing and tourism development.
A Study of the Role of Perceived Risk in Continuing Education Participants’ Learning Motivation
Ching-wen Cheng, Ph.D., National Pingtung University of Education, Taiwan
In the age of lifelong learning, the demands of adult learning are increasingly important, and many universities accordingly focus on their continuing education programs. A university that wants to attract adult students to its continuing education programs must first understand those students’ learning motivation. The purpose of this study is to describe the role of perceived risk in continuing education participants’ learning motivation. Based on the data analysis, adult students consider each type of perceived risk when deciding whether to participate in a continuing education program. The results also show a significant difference in the consideration of perceived risk among adult students of different age groups and different education levels. The Taiwanese scholar Yang (2005) pointed out that the traditional, formal education system cannot provide enough professional learning in a globally competitive environment; he also stated that adult continuing education has become more and more important because of the application of new technology and the disappearance of traditional jobs. To address these new social problems, the governments of many industrial countries have tried to establish a learning society and develop a strong system of adult higher education. In this trend of lifelong learning, the university has become the key institution for promoting community learning activities and helping people to improve their competitive abilities (Longworth & Davies, 1996).
A Study on the Failures of Sovereign Credit Ratings
Ana-Maria Minescu, Global Investment Strategy and Asset Allocation Manager
Private Banking, Unicredit Tiriac Bank, Bucharest, Romania
The current study aimed to offer insight into what rating failure means and what the implications of such failures are. After explaining the various meanings of ratings failure and synthesising its potential causes and implications, the study performs and describes several tests on the ratings of a sample of 180 countries for the period 1996-2007. The results of the tests allow us to compare the performance of the three rating agencies and show that the years with the highest number of ratings failures were years of financial crises. Following each financial crisis, there has been much discussion of the failure of the ratings issued by credit rating agencies to predict such crises. However, failure can be understood in multiple ways: failure of a rating to predict default, failure of a rating to be stable, failure of multiple agencies to issue similar levels of credit rating for the same country at the same time, or failure of an agency to issue a rating in the correct category (i.e., investment grade or junk). Ratings are important for many reasons, among them their ability to indicate the interest rate at which a government can borrow in the financial markets and the ceiling effect of a sovereign credit rating on local corporate ratings (as explained in Minescu (2010) with regard to sovereign credit ratings).
Honey Production Business Opportunities and Its Externality Effects
Georgina Arvane Vanyi, Dr. Zsolt Csapo, and Dr. Laszlo Karpati, University of Debrecen, Hungary
Bee-keeping and honey production have a long history in Hungary. Honey is an important and healthy food that can be consumed without any processing, and honey production itself plays an important role. Some researchers claim that if the honey bee were to become extinct, humanity would die out as well; plant pollination by honey bees is indeed vitally important, and researchers’ studies confirm that pollination by honey bees has significant positive external impacts on potential yields in orchards. Although the contribution of honey production to Hungary’s GDP is only a few per cent, other benefits play a more important role: the positive external effect mentioned above and the contribution to natural biodiversity. This paper relies on secondary research methods, gathering and evaluating data on the positive external impacts of plant pollination by honey bees, and seeks a solution to the problem that bee-keepers bear considerable costs in carrying honey bees to orchards while farmers “only” benefit from the positive externality of the pollination of their fields. To evaluate these economic effects, a numerical HEEM model was developed and applied to the Hungarian situation.
The Effectiveness of the REU Program among Novice Undergraduates
Sha Li, Yong Wang, and Elica Moss, Alabama A&M University, Normal, AL
America is facing the challenge of losing young researchers in science, and minority students are especially underrepresented in research in agriculture and natural resources. To boost minority students’ interest in agriculture and tap their potential for learning to conduct research, Alabama A&M University offers a Research Experience for Undergraduates (REU) program. The program gives diverse undergraduate students a good opportunity to learn to conduct research. Its uniqueness is that the majority of the participants are minority students, and two high school students are recruited to learn to do research alongside the college undergraduates. Beyond learning the content and methodologies of research, the students also generate interesting findings from their projects. The experience has made a positive impact on the students’ academic growth, and this REU program indicates that students of diverse backgrounds can be excellent researchers when given equal opportunity to learn.
Copyright 2000-2017. All rights reserved