The Business Review, Cambridge
Vol. 9 * Number 2 * Summer 2008
The Library of Congress, Washington, DC * ISSN 1553 - 5827
Online Computer Library Center * OCLC: 920449522
National Library of Australia * NLA: 55269788
Peer Reviewed Scholarly Journal
Most Trusted. Most Cited. Most Read.
All submissions are subject to a double blind review process
The primary goal of the journal is to provide business-related academicians and professionals from various fields around the world with a single venue in which to publish their work. The Business Review, Cambridge brings together academicians and professionals from all business-related fields to interact with members inside and outside their own disciplines. The journal provides opportunities for researchers to publish their papers as well as to view the work of others. All submissions are subject to a double-blind peer review process. The Business Review, Cambridge is a refereed academic journal that publishes scientific research findings in its field under ISSN 1553-5827, issued by the Library of Congress, Washington, DC. No manuscript will be accepted without the required format, and all manuscripts should be professionally proofread before submission (a service such as www.editavenue.com may be used for professional proofreading and editing). The journal will meet the quality and integrity requirements of applicable accreditation agencies (AACSB, regional) and journal evaluation organizations to ensure our publications provide our authors with publication venues that are recognized by their institutions for academic advancement and academically qualified status.
The Business Review, Cambridge is published two times a year, in Summer and December. E-mail: firstname.lastname@example.org; Website: BRC. Requests for subscriptions, back issues, and changes of address, as well as advertising inquiries, can be made via our e-mail address. Manuscripts and other materials of an editorial nature should be directed to the Journal's e-mail address above. Address advertising inquiries to the Advertising Manager.
Copyright: All rights reserved. No part of the material protected by this copyright notice may be reproduced or utilized in any form or by any means, including photocopying and recording, or by any information storage and retrieval system, without the written permission of JAABC journals. You are hereby notified that any disclosure, copying, distribution, or use of any information (text, pictures, tables, etc.) from this web site or any other linked web pages is strictly prohibited. Request permission / purchase this article: email@example.com
Copyright 2000-2018. All Rights Reserved
Deploying RFID in Logistics: Criteria, Best Practices, and Issues
Mwema Powanga, Regis University, Denver, Colorado
Dr. Luka Powanga, Regis University, Denver, Colorado
The Radio Frequency Identification (RFID) technology, utilized in a variety of proprietary applications such as EZ Pass tags in toll-road and toll-bridge payment systems, luggage check-in at airports, and remote car door access, has gained traction as a supply chain management tool. The traction was accelerated by Wal-Mart and the Department of Defense, who simultaneously started requiring vendors to equip consignments destined for their warehouses with RFID tags. Other firms such as Albertsons, Target, Kroger, CVS, Lowes, Gillette, 7-Eleven, Home Depot, Metro AG (headquartered in Germany), and Tesco (based in the United Kingdom) followed suit, either by issuing similar requirements or by experimenting with the technology. Despite this impetus, mass acceptance of the technology has proved elusive, attributed to the infancy of the technology as a supply chain tool. This white paper distills the best practices from the experience of four of the world's largest organizations, Wal-Mart, Tesco, Metro Group, and the United States Department of Defense (DOD), that can be used by firms wishing to implement an RFID infrastructure. The paper begins with an overview of the RFID technology and how it is used in a supply chain environment, to present a background against which the ensuing discussions are framed. A summary of RFID adoption by each of the organizations under discussion is presented, followed by the best practices and conclusions. The RFID technology encompasses any electronic system employing radio or electromagnetic waves to collect, store, and retrieve digital data that uniquely identifies an item, usually via a serial number called an electronic product code (EPC), similar to a Universal Product Code (UPC). The EPC, which matches the identifying information about the product stored in a single or networked database, is programmed into a tag equipped with a miniature chip that stores the data. The tag contains an antenna to receive and respond to radio-frequency queries from the reader.
The reader interrogates the tag for information by generating a magnetic field bubble. Once the tag enters this bubble, it is activated and transmits the EPC to the reader. Through a network interface, the EPC is passed on to the database, where information related to the tag is identified and retrieved. Two types of tags exist: active tags, which have their own power source (a battery) and are therefore "always on," and passive tags, which are activated by the energy generated by the reader at the time of interrogation, or "on when needed." The RFID technology has several distinct advantages over conventional tracking and authentication tools like the bar code (Universal Product Code, or UPC). First, more information can be stored on the tag. Second, the tags do not require a line of sight to be read, because the data communication occurs over a wireless medium. Third, multiple tags can be read simultaneously, in contrast to bar codes, which need to be scanned one at a time. Fourth and finally, there is no need for human intervention at the scanning and authentication stages. In a retail and supply chain environment, the RFID system functions as illustrated in Figure 1. When a foreign or local manufacturer issues an RFID tag, a unique identification number (EPC) is generated and stored in the tag database, where it is matched with the description of the pallet's contents; the information is shared with the retailer's tag database, either directly or via a third party who makes the information available for queries by the buyers, usually through a web-based network. The information captured by the reader at each node in the supply chain is passed on to the tag databases at the retail store and the manufacturer, enabling both parties to track and identify the pallet as it moves through the supply pipeline. At the store, individual items can be tracked to the shelves and the checkout points, allowing for better availability of the products on the shelves.
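The read-and-lookup flow just described can be sketched in a few lines of code. Everything here, including the class names, the EPC string format, and the database fields, is an illustrative assumption rather than a real EPC standard or any particular vendor's API:

```python
# Minimal sketch of the RFID read flow: a passive tag stores only an
# EPC; the reader activates the tag, captures the EPC, and resolves it
# against a networked tag database.

class PassiveTag:
    """Holds nothing but the electronic product code (EPC)."""
    def __init__(self, epc: str):
        self.epc = epc

    def respond(self) -> str:
        # A passive tag is "on when needed": it answers only once the
        # reader's field has energized it.
        return self.epc

class Reader:
    """Interrogates any tag inside its field and forwards the EPC."""
    def __init__(self, tag_database: dict):
        self.tag_database = tag_database

    def interrogate(self, tag: PassiveTag) -> dict:
        epc = tag.respond()                  # tag enters the field bubble
        record = self.tag_database.get(epc)  # networked database lookup
        if record is None:
            return {"epc": epc, "status": "unknown item"}
        return {"epc": epc, "status": "identified", **record}

# The retailer's tag database maps each EPC to the pallet description
# shared by the manufacturer (contents, origin, etc.).
database = {"EPC-0001": {"contents": "razor blades, 48 cases",
                         "origin": "regional distribution center"}}
reader = Reader(database)
result = reader.interrogate(PassiveTag("EPC-0001"))
print(result["status"])  # identified
```

In a real deployment the dictionary lookup would be a query against the retailer's or a third party's networked database, and readers at each supply chain node would append tracking events rather than merely resolve the EPC.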
Following is a brief discussion of the experience of Wal-Mart, Tesco, Metro Group, and the Department of Defense following the deployment of RFID technology in the supply chain and store operations. In June 2003, Wal-Mart, the largest retail firm in the world, announced its intention to adopt the RFID technology and integrate the RFID data with automated electronic data interchange technologies to improve supply chain efficiency. On April 30, 2004, shortly after the announcement, Wal-Mart started pilot testing the RFID technology using 21 products out of the more than 100,000 products found in a typical super center. The products from its top eight suppliers, namely Gillette, Procter and Gamble, Johnson and Johnson, Kimberly-Clark, Kraft Foods, Nestle Purina Pet Care, Hewlett-Packard, and Unilever, were fitted with RFID tags at pallet and case levels before being shipped to Wal-Mart's regional distribution center in Sanger, Texas, for onward transportation to eight stores in the Dallas-Fort Worth-Arlington metropolitan area. The data captured from RFID tags by readers strategically positioned throughout the supply chain would provide visibility into the flow of shipments and allow for timely business decisions such as finding alternative routes for shipments in case of delays, recovering cases misplaced in the backroom, or simply knowing when to restock the shelves, thus reducing costs. For instance, Wal-Mart relied on employees for restocking the shelves by (a) physically inspecting the shelves and restocking the empty spots, and (b) scanning the products in the backroom to determine if there was shelf space for products that might not yet be on the shelves. Both of these methods are labor intensive and inaccurate. In some cases, products would be missing from the shelves even though they were in the backroom.
The RFID technology would automatically determine the products that need to be replenished from the data generated in real time as the products are tracked from the distribution center to backrooms, store shelves, and consumer purchases, reducing stock-out situations and curtailing man-hours. In fact, Wal-Mart estimated that costs in the form of lost sales from such manual processes constituted 10% of total sales, and implementing the RFID technology could reduce those costs by as much as 6-7%, translating into potential savings of $2.2 billion if the 2006 revenues of $361 billion are used. Following the commencement of the pilot tests, Wal-Mart issued a mandate to its top 100 suppliers, out of its 68,000 suppliers, requiring that shipments of cases and pallets to its stores be fitted with RFID tags in place of bar codes by January 25, 2005, to begin a full-scale roll-out. The RFID tag-enabled cases and pallets were initially limited to three distribution centers serving over 140 stores in the Dallas/Fort Worth Metroplex area. Texas was chosen as the launch site because of the high concentration of Wal-Mart's operations in the area: Wal-Mart operates 92 discount stores, 196 Supercenters, 26 Neighborhood Markets, 69 Sam's Club locations, and 12 distribution centers there, employing more than 130,000 people.
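The savings figure above follows from straightforward arithmetic. The sketch below reproduces it using only the figures cited in the text (manual-process costs of 10% of sales, a 6-7% reduction of those costs, and 2006 revenues of $361 billion); the variable names are illustrative:

```python
# Reproduce the paper's back-of-the-envelope savings estimate.
revenues_2006 = 361e9               # Wal-Mart 2006 revenues, USD
lost_sales = 0.10 * revenues_2006   # manual-process costs: 10% of sales
savings_low = 0.06 * lost_sales     # 6% reduction of those costs
savings_high = 0.07 * lost_sales    # 7% reduction of those costs

print(f"${savings_low/1e9:.2f}B - ${savings_high/1e9:.2f}B")
# roughly $2.17B - $2.53B, i.e. the ~$2.2 billion cited above
```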
Online Non-Proctored Testing and Its Effect on Final Course Grades
Dr. Marian C. Schultz, The University of West Florida
Dr. James T. Schultz, Embry-Riddle Aeronautical University
Dr. Gene Round, Embry-Riddle Aeronautical University
The growth of and escalating interest in online programs have prompted academic inquiry regarding the implementation of online testing without proctors and its subsequent effect on course grades. This study examined four courses taught at Embry-Riddle Aeronautical University to ascertain whether there was a significant difference in overall course grades between proctored and non-proctored examinations. The study found that in all four courses there was no significant difference between the course grades achieved by students taking proctored and non-proctored examinations. The mean grade for three of the four courses with non-proctored examinations was lower than when the examinations were proctored. Evaluating student knowledge, or the amount of learning that has occurred in a particular course, is a critical element in assessing whether the learning objectives of a course have been attained. Online delivery methods represent a paradigm shift in learning that has been enabled by new information technologies. The learning environment has moved from a provider-based focus to one that caters to consumer demands for efficiency and convenience. In its infancy, virtual learning was hampered by technology, or in this case, a lack thereof. Feenberg described the equipment as "expensive and primitive": "The complexity of basic computer operations in those days was such that it took a full page of printed instructions just to connect" (Feenberg, 1999). Distance learning has been in existence for over 100 years, but until 25 years ago testing was consistently administered in the pencil-and-paper format. In 1982 the American College introduced Examinations on Demand, a program which allowed students to take a test for a course when desired.
While online examinations now allowed students to take a test at their office or home, rather than in the stressful environment of the classroom, the situation led to the problem of ensuring the identity of the person taking the test, since it was no longer proctored (Brewer, 2005). Embry-Riddle Aeronautical University is among the universities that provide the consumer with both traditional and online degree programs. The continuing expansion of Embry-Riddle Aeronautical University's Worldwide Online learning program brought about a change in 2004 affecting how students would be tested. The increasing challenge to strengthen student retention initiated the move from proctored to non-proctored examinations. The consumer's focus on convenience and multitasking in the pursuit of academic accomplishments has pushed degree-granting institutions toward consideration of online non-proctored exams. The purpose of this study was to determine if there is a significant difference in overall course grades between proctored and non-proctored online examinations. Four courses were selected for the study: Marketing (MGMT) 311, Financial Accounting (MGMT) 210, College Mathematics for Aviation (MATH) 111, and Aviation Legislation (ASCI) 254. The challenge for educational institutions is to ensure that online classes increase student access to educational programs, as well as increase the quality of those same opportunities. Academic debates over these issues have been responsible for obstructing the growth of online courses. With its legitimacy in question, and with the emergence of so many fraudulent programs offering degrees, there may be a justifiable reluctance to trust online learning. The spotlight on online learning has shifted from the educators' plight of "can we," regarding development, to the administrators' decision of "should we," in terms of resources, marketability, retention, and revenue.
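A difference-in-means question like the one posed in this study is commonly examined with an independent-samples t-test on the two groups' course grades. The sketch below is a minimal illustration of that calculation using a pooled-variance t statistic and entirely hypothetical grade data; it does not reproduce the study's actual data or analysis:

```python
import statistics as st

def pooled_t(sample_a, sample_b):
    """Independent-samples t statistic with pooled variance."""
    na, nb = len(sample_a), len(sample_b)
    ma, mb = st.mean(sample_a), st.mean(sample_b)
    # Pooled variance weights each group's sample variance by its
    # degrees of freedom (n - 1).
    sp2 = ((na - 1) * st.variance(sample_a) +
           (nb - 1) * st.variance(sample_b)) / (na + nb - 2)
    return (ma - mb) / (sp2 * (1 / na + 1 / nb)) ** 0.5

# Hypothetical course grades (percent) -- not the study's data.
proctored     = [82, 75, 90, 68, 88, 79, 85, 73]
non_proctored = [80, 77, 86, 70, 84, 78, 83, 74]

t = pooled_t(proctored, non_proctored)
print(f"t = {t:.3f}")  # compare against the critical value for na+nb-2 df
```

With these invented numbers the statistic comes out small (t well below typical critical values), which is the same qualitative outcome the study reports: no significant difference between the proctored and non-proctored groups.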
"One of the earliest perceptions about online learning was that it was of lower quality than face-to-face instruction" (Sloan Consortium, 2004). In making that evaluation, the first logical and reasonable criterion would be the quality of the instructor. Who is it that is teaching these courses? Primary core faculty members account for 65% of online course instructors, as compared to the 62% who teach traditional face-to-face classes. Taken further, 74% of public colleges report that core faculty members actually teach their online courses, while only 61% teach face-to-face classes (Sloan Consortium, 2005). Planning, preparation, and communication are all critical components of a successful and meaningful online learning experience. In 1995, only a third of higher learning institutions offered some form of virtual learning, with roughly 754,000 students enrolled (National Center, n.d.). Today, virtual learning in higher education has become almost as common as traditional face-to-face classes. Faculty have questioned whether online programs can continue to sprout up amid questions about the competitive quality of online degree programs. A core competence of the university includes providing a knowledge base that will push students to the forefront of their competitive industry, enforcing academic rigor regardless of online or traditional testing methods. It is imperative that the determination of continued development of online programs be predicated upon quality of learning, student satisfaction, and public perception. Students should be able to expect that the quality of a program prepares them to compete effectively in their chosen career. In the rush to fill the academic menu with numerous online programs available to the consumer, there may be a lack of control over the quality of these degrees. Are students contracting with the university to provide a competitive warranty to coincide with the degree?
The financial objectives behind the online venture have been the backdrop for the accusation that the institution's reputation, by way of its graduates' successes, may be sacrificed as more online programs make their way to the consumer. The concern for the inherent quality of the online environment has brought out one of the decisive issues. A core debate regarding the effectiveness of online courses concerns the integrity of examination grades earned by online students in a non-proctored setting. Are students put in a position to fall back on cheating as a survival tool, or is the university responsible for allowing insecure test sites? The issue of cheating, plagiarism, and the risk that students will integrate outside sources to provide correct answers on exams has been around for a very long time. Faculty are required to add lengthy discussions to official course syllabi, warning students of the consequences of cheating in any form. Subsequently, faculty have been trained in the use of various software tools, such as Turnitin, that are designed to work with an extensive database in order to evaluate documents and forecast the probability that a section of a document may have been "borrowed" from an outside source without giving full credit to the author. Undoubtedly, online programs confront ever-increasing methods to circumvent the system of testing. The loss of credibility is a major side effect when students can consistently be associated with the cheating trend of the high-tech era. There are no guarantees that the student completing an exam in an online environment is the person who enrolled in the course initially. In spite of great personal risk, students pursue the deception to its fullest and often incur minimal guilt. Another casualty of the online deception trend is the rise and fall of the credibility of the university associated with a finding of cheating.
That credibility is instrumental in recruitment, fundraising, and grants, all of which are linked to money.
A Quantitative Assessment of Factors Impacting the Price of a For-Profit Education Stock
Dr. Robert L. Johnson, University of Phoenix, Phoenix, AZ
For-profit education has been around for over thirty years and has become big business. The goal, or question to be answered, was whether a relationship could be discovered between certain financial and economic metrics in one quarter and the price of a for-profit education stock in the following quarter, in order to forecast the price of the stock. Today we live in a post-9/11 era, with a sub-prime mortgage meltdown and a prolonged war on terror, and theories and models developed twenty to fifty years ago may or may not still be as reliable. It would be remiss not to take a fresh look, even if only to revalidate some older theories or perhaps suggest some new ways of looking at them. For-profit education has been around for over thirty years. It is increasingly becoming big business as institutions such as the University of Phoenix, DeVry Incorporated, Strayer Education, Corinthian College, ITT Education Services, and others have made solid headway into the postsecondary arena with campuses in multiple states, online, and even in other countries. This has come about due to a variety of factors. "Globalization and the revolution in technological communications are major forces of change in higher education. This environment, when coupled with the needs of adult learners and the rising costs of tuition at traditional colleges and universities, has stimulated the emergence of for-profit, degree-granting higher education in the United States" (Morey, 2004, p. 131). Competition in a free market environment has motivated educational institutions to develop efficiencies, curricula students want, improved customer service, and modalities such as evening classes or distance and/or online classrooms. The latter significantly enlarges one's market. Today's students want their schools to be ". . . convenient, accessible, high quality for low cost, open during the evenings and on weekends, and have helpful staff, available parking, and no waiting in long lines" (Morey, 2004, p. 135).
The for-profits have delivered on this. Most of all, students want access when they live in remote locations or must work during the day. The for-profits have benefited from the working student in that employers often have tuition reimbursement benefits, which helps maximize educational services' revenues. Many, such as Apollo Group (APOL), DeVry (DV), Strayer (STRA), Corinthian Colleges (COCO), Career Education Corporation (CECO), ITT Education Services (ESI), and others, have gone public, selling stock on Wall Street and creating a niche market with its own dynamics and influences. This paper will attempt to discover if it is possible to develop a model of the form selling price (SP) = b0 + b1X1 + b2X2 + . . . + bnXn for forecasting the price of a for-profit education stock, using readily available financial ratios and economic metrics from one quarter and the price of the stock the following quarter. Many would retort, citing the efficient market hypothesis, and assert that the market sets the price, and that research and forecasting models allowing excess profits to be earned are not reality. However, as noted by Penman (2004), some, like "prominent financial analyst Warren Buffett, believe they can earn consistent relatively higher returns than other investors" (Waldron & Seng, 2005, p. 101). Buffett has delivered mightily on that belief. Moreover, "Researchers have discovered some general anomalies to the idea of EMH theory" (Waldron & Seng, 2005, p. 101). These abnormalities include ". . . the price-earnings (P/E) ratio . . . size effect, book-to-market effect, post-earnings announcement drift, Value Line effect, Briloff effect, January effect and Monday effect" (Waldron & Seng, 2005, p. 101). The January effect has been further documented by Huang and Hirschey (2006). It is possible the markets are not so efficient after all, and this remains the subject of much academic research.
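As a minimal sketch of the modeling approach described above, the one-predictor case of the model (SP = b0 + b1X1) can be fitted by ordinary least squares. The data below, and the choice of a P/E ratio as the single predictor, are invented for illustration; the paper's model extends this to multiple financial and economic predictors:

```python
# One-predictor instance of the forecasting model: regress next
# quarter's stock price on this quarter's metric (here a hypothetical
# P/E ratio). The numbers are made up purely to show the mechanics.

def ols(x, y):
    """Ordinary least squares fit of y = b0 + b1 * x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # Slope: covariance of x and y over variance of x.
    b1 = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
          / sum((xi - mx) ** 2 for xi in x))
    b0 = my - b1 * mx
    return b0, b1

# Quarter-t metric vs. quarter t+1 closing price (hypothetical).
pe_ratio_t = [18.0, 22.5, 20.1, 25.3, 19.4, 23.8]
price_t1   = [41.2, 48.9, 44.0, 54.1, 42.8, 51.0]

b0, b1 = ols(pe_ratio_t, price_t1)
forecast = b0 + b1 * 21.0   # forecast next quarter's price from today's P/E
print(f"SP = {b0:.2f} + {b1:.2f} * P/E; forecast at P/E = 21: {forecast:.2f}")
```

Adding further predictors (unemployment rate, net profit margin, earnings per share, and so on) turns this into the multiple-regression form SP = b0 + b1X1 + b2X2 + . . . + bnXn given in the text.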
Earlier proponents of the EMH such as Fama (1995) concluded in later research that "through careful study of these fundamental factors the analyst should, in principle, be able to determine whether the actual price of a security is above or below its intrinsic value. If actual prices tend to move toward intrinsic values, then attempting to determine the intrinsic value of a security is equivalent to making a prediction of its future price . . . " (p. 75). There appears to be some value in conducting research, and that is the assumption of this effort. The U.S. and global economies are not static. They change and evolve over time. "Over the past century, the American economy has been transformed in many fundamental ways. These changes have affected the financial sector just as deeply as any other part of the economy" (Campbell & Shiller, 1998, p. 12). Today we live in a post-9/11 era, with a sub-prime mortgage meltdown, new technologies improving productivity, and a prolonged war on terror, and financial theories and models developed twenty to fifty years ago may or may not still be valid, as the economy has gone increasingly global, the political environment changes, the euro has risen in value, China and others hold billions of U.S. dollars, and many businesses have come and gone over the years. It would be remiss not to take a fresh look, even if only to revalidate some older theories, suggest some new ways of looking at them, or even develop new ones. As brought out by Johnson (2003, pp. 19-20), it does appear that there is a relationship between the price of a stock and general economic factors such as interest rates and unemployment rates, and certain financial statement parameters such as net profit margins; earnings per share; selling, general, and administrative expenses as a percent of sales; and other various indicators (Baker, Powell, & Daniel, 1999; Lander, Orphanides, & Douvogiannis, 1997; Leung, Daouk, & Chen, 2000; Lo & MacKinlay, 1988; Peterson & Peterson, 1995; Trahan & Bolster, 1995). Why the focus on education stocks? "For-profit education is thinking outside the box in its academic, operational, and marketing strategies and outside traditional geographic boundaries" (Johnson, 2003, p. 16). The recent rise of distance and online education has eliminated geographic barriers and allowed access to a global market for today's for-profit educational organizations. There is always room for research in the quest for a reasonably reliable forecasting model for decision making in purchasing stocks, using traditional and outside-the-box thinking while taking into account environmental changes in the global economy and financial markets. This researcher believes higher returns can be earned than by those who just randomly pick stocks in the belief that the market has decided and there is no benefit to conducting research and analysis. What about the efficient market hypothesis (EMH) and the capital asset pricing model (CAPM)? Both of these theories are hotly debated, with strong feelings on both sides, each pointing to research supporting their view. The assumption of this effort follows Fama and French's (2004) article, which suggests that "the empirical record of the model is poor-poor enough to invalidate the way it is used in applications. The CAPM's empirical problems may reflect theoretical failings, the result of many simplifying assumptions. But they may also be caused by difficulties in implementing valid tests of the model" (p. 25).
Perhaps with better testing the debate may be settled, and other, more accurate theories and models developed.
A Comparison of U.S. Corporate Governance and European Corporate Governance
Abigail Barnett, Sam Houston State University, Huntsville, TX
Dr. Balasundram Maniam, Sam Houston State University, Huntsville, TX
This paper describes the corporate governance models of both the United States and Europe. The shareholder models of the U.S. and the United Kingdom will be compared in terms of recent changes, as instituted by the Sarbanes-Oxley Act, the Combined Code on Corporate Governance, and various securities exchanges' listing rules. The stakeholder model of Germany and how it differs from the shareholder model will also be discussed. Many of the recent changes in corporate governance standards are the result of regulation changes in the area of director independence. There is a call to increase the independence of the board of directors, and specifically the audit committee, to enhance directors' ability to perform their duties and protect shareholders' investments. These changes have stemmed from recent corporate scandals, and it will take time to determine how effective they are. Corporate governance has been defined in many ways; however, it refers to the oversight of corporations and the methods employed to assure that the corporation's actions meet the interests of concerned parties, or stakeholders. Corporate governance typically focuses on how to mitigate the agency problem that arises when ownership and management of the firm are separated. This may be accomplished through several means, such as the oversight of management by the board of directors, compensation and incentive arrangements, internal controls, external audits, and regulatory oversight. Recent corporate scandals, in both the United States (U.S.) and Europe, have brought about changes in the standards of corporate governance that are primarily intended to prevent fraud and better protect investors. A majority of these changes have focused on board structure, reporting requirements, and best practices.
The focus on board structure after major corporate scandals arose because corporate boards have been seen as inadequate at protecting both shareholders and stakeholders from these scandals, and change in this area is critical (Murphy and Topyan, 2005). Both the United States and portions of Europe have a shareholder corporate governance system, which is based on the theory that the firm's objective should be to maximize shareholder wealth. As such, the corporate governance standards of the shareholder system are aimed primarily at protecting the rights of shareholders. Even though the U.S. and the UK both have shareholder-based systems, the rules adopted by each differ and are quite different from those of the stakeholder system. Outside of the United Kingdom (UK), there is another form of corporate governance in Europe, the stakeholder system. This system aims to protect the rights of all stakeholders, such as employees and lenders, and is usually exemplified by the German system. It is important for investors to be aware of these differences in corporate governance policies as they relate to the risks investors are subject to and how investors' rights are protected. An understanding of these systems becomes more important as the global economy expands and more foreign investments are made. The purpose of this report is to analyze the recent changes in the corporate governance systems of the U.S., the UK, and Germany that are impacting large publicly traded firms. This analysis will include a description of the three systems and highlight both similarities and differences between them. The report will lend more focus to the recent changes and trends, as well as the various regulatory agencies, such as the Securities and Exchange Commission, that are responsible for corporate governance changes. Corporations within the United States are governed according to the shareholder model, which is based on the theory that the firm's objective should be to maximize shareholder wealth.
Under this model the shareholders of large corporations elect a board of directors to hire and oversee the management of the firm. The directors are representatives of the shareholders, and their actions can directly affect the actions of the firm and impact the shareholders' investment in the firm. The board of directors is responsible for a number of functions, which include selecting and overseeing the performance of the CEO and the executive management team, nominating other board members, ensuring the firm's compliance with regulations, and overseeing the firm's auditor selection and the audit process (Mintz, 2006). These various functions are carried out by a number of committees, primarily the audit committee, the remuneration or compensation committee, and the nomination or hiring committee. Additionally, in the United States the board of directors is typically directly linked to the management of the firm through the appointment of the CEO as the chairman of the board of directors. "In fact, this is the case for approximately 80% of the firms" (Aguilera et al., 2006, p. 148). While directors are accountable primarily to shareholders, they are also subject to many corporate governance standards. Publicly traded firms are subject to the regulations of the Securities and Exchange Commission (SEC), as well as the listing requirements of any exchange they may be listed on. These regulations are aimed primarily at protecting the rights of shareholders. As a result of the recent corporate scandals that have defrauded millions of investors, many changes and rules have been implemented in an attempt to better protect the interests of shareholders. In these recent scandals, executive management teams and boards of directors failed to protect shareholders from fraudulent activities and violated the "duty of care" required of those positions (Mintz, 2006, p. 24).
Many of these governance changes are aimed at improving the way in which the board of directors oversees a firm's management on behalf of shareholders. In 2002 the Sarbanes-Oxley Act was passed, which has created significant changes in corporate governance rules for companies registered with the SEC. Many of the changes created by the Sarbanes-Oxley Act focus on the audit committee and director independence, and the Act increases "the oversight responsibilities of boards of directors of public companies acting through their audit committees" (Grossman, 2007, p. 422). In addition to the new Sarbanes-Oxley Act, two of the most prominent exchanges in the U.S., the New York Stock Exchange (NYSE) and the National Association of Securities Dealers Automated Quotations (NASDAQ), have also adopted new rules regarding board structure.
E-Commerce: On-Line Retail Distribution Strategies and Global Challenges
Dr. Kamlesh T. Mehta, Peace College, Raleigh, NC
In the 21st Century, e-commerce has become the frontier of global business and is growing at an exponential rate. This paper addresses the role of on-line marketing as it relates to distribution strategies, the strategic on-line challenges faced by multinational companies, positioning strategies for engagement on the Internet, existing distribution channels and the Internet, and challenges for global retailers and manufacturers. Several examples of companies using Internet distribution strategies are discussed. The development and implementation of Internet distribution strategies among global companies are difficult and complex. They can be disruptive, so global companies need to realize that building a global market presence does not automatically translate into global competitive advantage. Advancements in e-commerce applications will reinvent important aspects of business organizations, redefining the ways global companies approach on-line marketing in the future. With the increasing globalization of business activities, the use of the Internet has been on the rise. The Internet is no longer simply a better way to publish and distribute information. It has become the conduit for the billions of information exchanges that make up daily life, thus exerting a significant impact on corporations as well as people. Shoppers today are faced with more choices of where to shop, how to shop, and what to buy than any mere mortal could intelligently comprehend and use (Lampert, 2007). As the Internet has expanded, exciting predictions have been made about its possible role as a global business and marketing tool. According to the results of Industrial Distribution magazine's 61st annual survey of distributor operations, distributors will grow their business by using technology (Keough, 2007).
According to the survey results, 66% of respondents say customers use the Internet to make purchases, 52% expect Web-based sales to grow this year, and 70% believe their corporate website will be important to their future growth (Avery, 2007). Since the Internet is a radically new distribution channel, many established global companies fear that it has the potential to hurt them: to be competence-destroying instead of competence-enhancing; to compromise their distribution network assets rather than leverage them; and to disrupt their industry leadership positions rather than reinforce their dominance. Thus, global corporations are faced with challenges created by the use of the Internet as a new distribution channel. To assess the disruptive capacity of the Internet as it relates to distribution strategy, global companies should ask the following questions: 1. To what extent does Internet distribution complement or displace existing industry distribution channels? 2. To what extent does Internet distribution enhance or destroy the company's core competences and distribution network assets? 3. How will an Internet distribution strategy interact with the company's existing conventional distribution strategy? Unfortunately, this potential is difficult to quantify because the Internet is so intangible. It is hard to enumerate fully the size, scope, and characteristics of the market within which a business seeks to operate. Even in the United States, the most developed market in which the Internet is widely used, the market size is open to argument: Nielsen Consulting Inc. estimates 51 million users, while IDC estimates 31 million. Global penetration of Internet usage also varies greatly. For example, Internet connections in Europe increased by around 60 percent over the single year to the end of October 1997, and the number is forecast to reach some 11 million hosts by 2003.
Recent growth has been faster than in the rest of the world, and Europe now represents more than a quarter of global connections. At present, Internet penetration within Europe is patchy, with strong growth coming from smaller Western European nations such as Norway, Finland and Ireland. By contrast, the growth rate in two of the world’s most populous countries, China and India, is projected to be only five percent (Anonymous, CMA Magazine, 2000; Deveaux, 1999). In Latin America, use of the Web is likely to continue to grow explosively, from 21 million in 2000 to 40 million in 2004, and by 2005 this number could exceed 77 million. Similarly, brisk growth in online consumer spending is estimated, rising from $4.6 billion in 2002 to $8 billion in 2003 (Elkin, 2001). The purpose of this paper is to present issues concerning the disruptive capacity of the Internet as it relates to distribution strategy. The paper addresses the strategic Internet challenges faced by multinational companies, positioning strategies for engagement on the Internet, existing distribution channels and the Internet, and challenges for global retailers and manufacturers. Several examples of companies in various industries using Internet distribution strategies are reviewed. One of the most significant business trends of the 1990s has been the sharp increase in global business activity and, despite recent economic turmoil, there is no sign that this growth will abate. Of equal or even greater significance has been the explosive growth of the Internet and the World Wide Web -- technologies that are inherently global in character. The online shopping experience has grown and matured (Barson, 2007), and the defenders of online commerce often point to the improved speed and ease of doing business on the Web in comparison with more traditional distribution systems (Trembly, 2007). The opportunities offered by advancements in Internet technologies are not yet fully exploited.
As with any technology, there are a number of issues that need to be addressed before creating a successful presence on the Internet. Integration with existing systems, reliability, legal issues associated with trans-border data flow, and adherence to open standards are among the many issues identified as central to successful Internet strategies (Garden, 2000). Compared to traditional supermarket consumers, online consumers are less price sensitive, prefer larger sizes to smaller sizes (or at least have weaker preferences for small sizes), have stronger size loyalty, do more screening on the basis of brand names but less screening on the basis of sizes, and have stronger choice set effects (Andrews & Currim, 2004). Consumer demand for Internet shopping increases with decreased lead times and service cycles, and there is large demand for Internet shopping if consumers are less sensitive to delay in receiving ordered goods than to the access time to retail stores (Hsu and Li, 2006). In addition, businesses face the following strategic challenges: Scope: The spread of the Internet as a successful medium of communication and exchange has broadened the scope of doing business to the global marketplace. The Internet is a global phenomenon in which fortune will favor truly global players. Substantial market shares within one set of territorial or market boundaries have started to become meaningless in a global context. On the other hand, niches unsustainable within purely domestic markets become viable in an electronic networked environment. Practicality: The strength of nearly all brands ultimately lies in their physical presence. It is unclear whether conventional market modeling and ideas, notably brand names, can be transferred wholesale to an electronic channel. Consumers test the values of those brands whenever they enter a shop to purchase an item.
Even though a brand is a combination of intangibles - images, reputation, and word of mouth - these are, in effect, made tangible through the shopping experience and the service encounter. So, to what extent can an established brand be safely transferred to an electronic channel? What signs do customers use to judge a brand's quality and reputation? Do shoppers continue to rely upon their memory of their last shop visit? In other words, how do businesses seek to recreate added value in an electronic environment? For example, such attributes as quality, value, and convenience may mean quite different things in an electronic channel, and may differ entirely across cultures at the global level. Also, Web-based stores need to pay more attention to post-purchase services in their strategy to retain customers (Otim & Grover, 2006).
Reevaluating Visuals in Direct-to-Consumer Print Advertising for Prescription Drugs: An Argument for Active vs. Passive Depictions of Product Benefits
Dr. Amy Handlin, Monmouth University, West Long Branch, NJ
This paper argues that direct-to-consumer advertisers of prescription drugs could strengthen some of their messages by supporting them with active visuals: specifically, by replacing inanimate images or static photos of people with depictions of product users enjoying the use of their time. The author draws from research on time perception, involvement and message strategy to describe this opportunity. Moreover, the opportunity is linked to the increasingly important role of women as decision-makers and information-seekers for their own health care and that of their families. It is rare to find a direct-to-consumer prescription drug advertisement in print format that is not dominated by an eye-catching visual. According to recent research (Handlin 2005), the visuals fall into three primary types: 1. 20% feature landscapes or striking graphic designs. 2. 40% are static images of product users. 3. 40% are depictions of users engaging in work or leisure activity. Obviously, the specific benefit claims supported by these visuals differ by product. Across drug types, however, DTC brand benefits most often fall into the following five categories: 1. relief from pain and discomfort. 2. freedom from anxiety. 3. appearing healthier. 4. ability to participate in desired activities. 5. enhanced enjoyment of social and family interactions. Key to the argument in this paper is the fact that, based on the author’s review of 45 recent DTC print ads, inanimate visuals or static images of people are almost always used to support benefits in categories 1, 2 and 3. Depictions of activity are limited to ads that focus on benefit categories 4 and 5. Several examples illustrate the point: Celebrex ads feature vignettes which show and describe people enjoying life despite the symptoms of arthritis.
These include leisure travelers (“1/2 mile to your gate won’t keep you from leaving on that jet plane”) and hikers (“4 rolling hills won’t keep you from taking the road less traveled”). Lantus ads depict go-kart buffs and other outdoors enthusiasts who are able to pursue their passions because the brand enables them to manage their diabetes. Zocor, a cholesterol-lowering medication, depicts a heart disease patient dancing at her daughter’s wedding; the message is “Where will you be when your wedding dress walks down the aisle a second time?” Lyrica, a medication for nerve pain, features static photos of feet atop a cactus, or on barbed wire. Coreg, a beta blocker for those who have suffered a heart attack, uses a representation of an EKG machine to help convey that patients taking this drug can feel less anxious about a second attack. Astelin ads promote the benefits of this antihistamine spray (relieves congestion, itchy/runny nose, sneezing) with pictures of environmental irritants and seasonal allergens. Humira, for control of rheumatoid arthritis, features a raised hand that is obviously crippled by the effects of the disease. This pattern raises an interesting question. Is there an opportunity for DTC advertisers to better motivate health care consumers by switching to a higher proportion of active visuals – in other words, by more frequently depicting people deriving pleasure from the use of their time? More specifically, is sufficient attention being paid to quality discretionary time as a product benefit? This paper will draw from research on consumer perceptions of time to suggest that its characterization as a discrete benefit offers a good – and underutilized – fit with DTC marketing.
“Discussing the effect of time perception on consumer research…is like the proverbial ‘man from Mars,’ who sees an individual spend money freely and leave large tips, and then destroy a soda machine because of the loss of a dime,” wrote Graham in a 1981 review of literature on time perception. Because there is no real consensus on – not even a common definition of – the meaning of time as a consumer resource, it is discussed in consumer research primarily as a constraint or opportunity cost. Time has been considered in terms of duration of travel, consumption and purchasing; temporal constraints on shopping activities; foregone income; duration of attention paid to advertisements; extensiveness of information-seeking or decision-making behavior; and as an impetus to brand loyalty. But few would dispute the assertion that even if time has no standard definition in the language of consumer behavior, it is a powerful motivator. Most writers assume that the market for convenience goods and time-saving services can be fueled indefinitely by consumers’ explicit desire to conserve time – despite the fact that those goods and services are sometimes of suboptimal usefulness or quality (Kotler 1982; Jacoby, Szybillo and Berning 1976). Several researchers have called for new models of consumer behavior that explicitly “recognize the importance of time resources in determining needs or motives of the consumer as well as the effects of such needs…on the processing of information by the consumer.” (Voss and Blackwell 1975; Engel, Blackwell, Kollat 1978). Some research has suggested that an individual’s attitude toward a particular allotment of time in his life wholly determines his later valuation of that time – whether he believes that it has been put toward a worthy purpose, or simply robbed of potential for other uses (Feldman and Hornik 1981; Schary 1971). 
Psychologist Paul Fraisse (1963), in an early study of the psychology of time, stated this directly: “The perception of duration is a function of our attitude.” Clearly, time during which one feels healthy, and is able to engage in desired activities and interactions, will be highly valued by consumers who would otherwise be restricted by infirmities or ailments. Some consumers may perceive time as a resource not unlike money, to be accrued and exchanged for specific utilities. This view arises from the Becker household production function model of buying behavior (1975), which suggests that time can be bought and sold; that households combine time with goods and services to produce more valuable end-products; and that individuals will reduce their uses of time as its costs rise relative to goods. The Becker model thus provides theoretical underpinnings for the argument that time can be offered for sale like any tangible product benefit. Importantly, however, discretionary time in a person’s life is clearly unique, not interchangeable with other benefits. This uniqueness in itself has value, as suggested by the exchange theories of Foa and Foa (1974) and Brinberg and Wood (1983). These researchers argued that consumers make choices about what to exchange, and how much, along two dimensions:
Significance of Leadership Style and Gender Upon Adeptness for Engaging in Organizational Innovative Initiatives
Dr. James L. Morrison, University of Delaware, Delaware
G. Titi Oladunjoye, Albany State University
Dale Rose, Albany State University
Based on the findings of this study, it may be concluded that the combination of gender and leadership style has a significant impact upon the process of engaging others to innovate. However, in terms of individual initiatives to innovate, gender and leadership style were less significant factors. In addition, the research findings indicate that male and female perceptions of gender adeptness in applying leadership skills towards promoting innovative efforts are somewhat different. Do male and female leaders in organizations in the private sector differ in the styles they adopt for planning and implementing innovative practices as part of their daily operational responsibilities? This question has been the focus of considerable research over the past several decades (Cooper & Kleinshmidt, 1998; Amabile, 1983). In this regard, research has resulted in some contradictory explanations that delineate gender similarities and differences in leadership styles. Some scholars have concluded that the leadership styles of males and females do differ significantly (Gillian, 1982; Hare, 1996) while others argue there are no differences (Dobbins & Platz, 1986; Klenke, 1993). This research focuses on how the gender of those in senior management positions impacts upon enhancing innovation in the workplace. Since innovation and success generally go together, this research is an attempt to compare gender perspectives of those in senior leadership positions as to who is more adept in leveraging resources for generating innovative advances in the products and services offered by organizations in the private sector. For this research, senior management is perceived as a team of individuals at the highest level in an organization, such as the president, vice-president, chief financial officer, and others. These are the individuals who have the day-to-day responsibility of leading a corporation.
As a basis for study, it is argued that these leaders have great impact upon how innovation is promoted as a workplace expectation. To become innovative typically involves risk-taking on the part of those engaged in creating something new and different. A key challenge in innovation is maintaining a balance between risk-taking and process and/or product innovation. In this regard, process innovation tends to involve an operational model which results in improving efficiency in production while product innovation typically results in adding value to a commodity or service. Therefore, in this study, innovation is defined as the process of creating something new to enhance the process of work and/or to add value to current products and services offered to the public. Therefore, creativity and innovation in this study are closely related. Based upon scholarly research conducted in the public sector, Challenging Women: Gender, Culture and Organizations (Maddock, 1999) offers an intriguing assessment of organizational forces that assist in enhancing change. In this instance, Maddock delineates a linkage between gender, innovation, and organizational transformation. However, Eagly and Johnson (1990), after reviewing studies completed between 1961 and 1987, reported mixed results relating to gender impact upon leadership styles adopted. They found a considerable number of studies that concluded that female and male leaders did not differ in leadership styles adopted to enhance worker productivity. However, they also reported a series of studies that suggested that leadership styles of females tend to be more democratic and less autocratic than their male counterparts. In terms of gender impact, results from previous research suggest males are more rational, assertive, and direct while females are more sensitive, warm and tactful (Deaux & Lewis, 1984; Williams & Best, 1982). 
Along a similar line of thought, males have been identified as being more autocratic and task-oriented while females are more nurturing and democratic (Bakan, 1966; Tannen, 1990). In this regard, some researchers align transformational leadership (typically associated with consensus building, team building, networking) with a feminine leadership style (Helgeson, 1990; Loden, 1985) while others such as Hackman, Furniss, Hills and Patterson (1992) characterize transformational leadership as gender-balanced. Moreover, studies of creativity have traditionally focused primarily on the individual and tend to assume that creative people perform in isolation. However, the reality is that creativity is often the result of teamwork (Luecke & Katz, 2003). Bruce Scott (1994) argues that leaders can affect who will be creative and in what types of projects creativity is promoted. In this regard, Scott suggests that both leader role expectations and leader-member relationships influence innovation. He also suggests that innovation and creativity are contingent upon teamwork, whereby gender is a significant factor in the frequency with which innovative efforts are initiated and carried out. Kanter (1977) argues that females who are relatively few in number in an organization are likely to be perceived differently simply because of their lack of visibility. As a result, they often change their leadership style to fit in better with their male counterparts. However, the literature appears to support the position that gender differences in leadership styles have been significantly reduced within the last 20 years (Van Engen & Willemsen, 2000; Johnasson, 2004). This research investigates whether males and females in leadership positions are equally adept in utilizing their skills for enhancing innovation in their organizations. In this regard, the two hypotheses tested are: 1.
The perceptions as to individual initiatives undertaken to enhance innovation will not vary by gender of the leader and leadership style. 2. The perceptions as to adeptness in enhancing innovation will not vary by gender of the leader and leadership style. Therefore, the issue addressed here relates to how leaders promote a working environment that encourages employees to undertake initiatives to innovate, not only to enhance worker productivity but also to develop new products and/or services. Specifically, do male and female leaders in senior management perceive existing practices for promoting innovation in the workplace similarly? Does the leadership style adopted have a significant impact on the effectiveness of the leader in accomplishing such a goal? The answers to these two questions should provide insight into how adept leaders in organizations in the private sector are actually perceived by each other to be in achieving such objectives. Identifying similarities and differences in the skill levels of leaders in senior management can inform a better-targeted training program to enhance innovation in the working environment.
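The two hypotheses above concern a gender-by-leadership-style design. As a rough illustration of the kind of comparison involved (the scores below are hypothetical, not the study's data, and this is a cell-means sketch rather than the full significance test), the following computes a gender main effect and the gender-by-style interaction:

```python
from statistics import mean

# Hypothetical adeptness-in-promoting-innovation scores on a 2x2
# (gender x leadership style) design; illustrative numbers only.
scores = {
    ("male", "transformational"):   [4.1, 3.9, 4.3, 4.0],
    ("male", "transactional"):      [3.2, 3.5, 3.1, 3.4],
    ("female", "transformational"): [4.4, 4.6, 4.2, 4.5],
    ("female", "transactional"):    [3.3, 3.1, 3.4, 3.2],
}

styles = ("transformational", "transactional")
cell = {k: mean(v) for k, v in scores.items()}  # mean score per cell

# Main effect of gender: difference of gender means averaged over styles.
gender_effect = (mean(cell[("male", s)] for s in styles)
                 - mean(cell[("female", s)] for s in styles))

# Interaction: does the style gap differ between the genders?
interaction = ((cell[("male", "transformational")] - cell[("male", "transactional")])
               - (cell[("female", "transformational")] - cell[("female", "transactional")]))

print(round(gender_effect, 3), round(interaction, 3))
```

A nonzero interaction term is what would correspond to the paper's finding that the *combination* of gender and leadership style matters for engaging others to innovate; in a full analysis each effect would of course be tested for statistical significance.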
The Short-Run Performance of Initial Public Offerings: An Empirical Study for Thailand
Dr. Chaiporn Vithessonthi, Mahasarakham University, Thailand
In this paper I investigate the stock returns of initial public offerings (IPOs) in Thailand between 2001 and 2005. Based on a sample of 43 IPO firms listed on the Market for Alternative Investment (mai) between 2001 and 2005, I find mixed results: although the average market-adjusted initial return on the first trading day is 13.72 percent and significant, the average market-adjusted initial return varies across the year of issuance. While the average adjusted initial return for IPOs in Thailand is substantial, it is smaller than the average excess returns for IPOs in other countries. Finance theory seeks to understand the valuation of initial public offerings (IPOs). To this end, research has explored the extent to which the firm’s valuation of its common stock is in line with that of investors. To date, the empirical literature examining the valuation of initial public offerings has produced similar results (Drobetz, Kammermann, and Wälchli, 2005; Kunz and Aggarwal, 1994; Loughran and Ritter, 1995; Ritter, 1991; Sapusek, 2000). The overall conclusion of the literature is that IPOs are underpriced; i.e., the offer price of IPOs is on average lower than the corresponding first-day market closing price (Drobetz, Kammermann, and Wälchli, 2005; Sapusek, 2000). Almost all previous studies examined initial public offerings in developed countries. The prevalence of these studies raises the question of whether similar results would be obtained in developing countries, especially smaller ones in Asia. Initial public offerings occur even more frequently in good economic times because of firms’ desire to finance their expansion. With the enhanced globalization of financial markets, investment in non-US equity markets has substantially increased. In particular, the rapid growth of emerging market economies (China, India, etc.) has attracted not only domestic investors but also foreign investors into these equity markets.
The present study attempts to add to this literature by examining the stock price performance of initial public offerings, using data on Thai IPO firms. Thailand represents an interesting example of an emerging market economy, and the use of Thai data may be an important contribution to the understanding of initial public offerings in Thailand in particular and in emerging market economies in general. A member of the Association of South-East Asian Nations (ASEAN) since 1967, Thailand achieved considerable progress during the 1980s in terms of economic growth, a fact reflected in its being commonly known as the fifth Asian tiger prior to the Asian financial crisis in 1997. Prior empirical work on IPOs in Thailand, based on a sample of 150 IPOs listed on the Stock Exchange of Thailand between 1985 and 1992, reports that the initial return for Thailand is 63.49 percent (Allen, Morkel-Kingsbury, and Piboonthanakiat, 1999). The purpose of this study is to examine the valuation of newly issued shares; I am especially interested in the outcomes of initial public offerings in Thailand, where information asymmetry is likely. In particular, I examine the underpricing of initial public offerings listed on the Market for Alternative Investment from 1999 to 2005. The Market for Alternative Investment (mai), which is the second organized stock exchange in Thailand, has been operational since 1999. In several respects, the Market for Alternative Investment is analogous to the Nasdaq in the US market. Drawing on a sample of 43 Thai IPOs listed on the Market for Alternative Investment from 2001 to 2005, I find considerable evidence of the underpricing cost of IPOs. The results demonstrate that the average initial return of IPOs is 13.88 percent and statistically significant. My results support the notion that initial public offerings in emerging market economies are also underpriced.
These findings are consistent with prior studies examining IPOs in other countries such as the US (Welch and Ritter, 2002), Switzerland (Drobetz, Kammermann, and Wälchli, 2005; Welch and Ritter, 2002), or Germany (Sapusek, 2000). The next section reviews the literature on initial public offerings. Where possible, the findings are related to initial public offerings in Thailand. Because there has been limited work in the area of initial public offerings in developing countries, this section summarizes some of the major findings regarding initial public offerings in developing countries. The subsequent section presents the sample, data, and methodology used in this study. The next section discusses the empirical results for the short-run stock price performance of IPOs after going public. The final section suggests some avenues for future research and concludes the paper. Narrowly defined, an initial public offering occurs when a firm lists its common stock on an organized exchange and sells its shares to the public for the first time. Conceptually, an initial public offering is a selection among alternative financing modes by which the firm and the public market can transact. Thus, an initial public offering increases a firm’s financial capital and consequently affects firm performance. Prior work on initial public offerings (Ritter, 1991; Schultz, 2003; Welch and Ritter, 2002) reports evidence that initial public offerings in several countries are underpriced. For example, Welch and Ritter (2002) report that, based on a sample of IPOs in the U.S. between 1980 and 2001, the average initial return of IPOs is 18.6 percent. A study of IPOs in the UK by Levis (1993) shows that, based on a sample of 712 IPOs in 1980-1988, the average adjusted initial return is 14.3 percent and statistically significantly different from zero.
Kunz and Aggarwal (1994), studying a sample of 42 Swiss IPOs from 1983 to 1989, report that IPOs in Switzerland are underpriced. Consistent with prior results, Drobetz et al. (2005) report that the average market-adjusted initial return for a sample of Swiss initial public offerings from 1983 to 2000 is 34.97 percent. For IPOs in Israel, Amihud, Hauser, and Kirsh (2003) report that the six-day initial excess return for IPOs between 1989 and 1993 is 11.99 percent. Research on initial public offerings suggests that there are costs for firms that choose to go public, one of which is the underpricing cost. One possible explanation for the underpricing cost, proposed by Rock (1986), is that the extent to which initial public offerings are underpriced depends on the ex ante uncertainty about the true value of an IPO firm.
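The market-adjusted initial returns cited throughout this literature are computed by subtracting the contemporaneous market (index) return from the raw offer-to-close return. The sketch below illustrates the standard calculation with hypothetical prices (not drawn from the Thai sample):

```python
def raw_initial_return(offer_price, first_day_close):
    """Raw initial return: percentage change from offer price to first-day close."""
    return (first_day_close - offer_price) / offer_price

def market_adjusted_initial_return(offer_price, first_day_close,
                                   index_at_offer, index_at_close):
    """Initial return net of the market index return over the same interval."""
    market_return = (index_at_close - index_at_offer) / index_at_offer
    return raw_initial_return(offer_price, first_day_close) - market_return

# Hypothetical IPO: offered at 10.00, closing at 11.50 on day one (a 15% raw
# return) while the market index rises 2% -> adjusted initial return of 13%.
adj = market_adjusted_initial_return(10.0, 11.5, 700.0, 714.0)
print(round(adj * 100, 2))
```

Underpricing corresponds to this quantity being positive on average across the IPO sample, as in the 13.72 percent figure reported above for Thailand.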
Exploring the Value Profiles of Business Students in South Africa
Professor Miemie Struwig, Ph.D., Nelson Mandela Metropolitan University, South Africa
This study explores the personal values of business students in South Africa. Personal values are general standards by which individuals formulate the attitudes and beliefs according to which they behave. The values of these students are important to study as they represent the future leaders of businesses and other organizations in society. The Country Corruption Assessment Report South Africa (2003) indicates that there is no doubt that South Africans perceive there to be a great deal of corruption and that it is one of the most important problems to be addressed. The business sector in particular (62%) believes that corruption has become a serious issue in business. The purpose of this study is to explore South African business students’ personal values to determine what interventions may be necessary before they enter the world of work. The Personal Value Statement (PVS) instrument was used to explore the personal values of business students at the Nelson Mandela Metropolitan University in South Africa. The instrument developed by Allport, Vernon and Lindzey (1960) was adapted, and only five value types, namely political, aesthetic, social, theoretical and economic, were included in the South African study. This study further investigates differences that exist between female and male business students’ personal values. Researchers in America have reported different levels of agreement between men and women in their value priorities (McCarthy, 1990; Rokeach & Ball-Rokeach, 1989). In another study, however, Simmons and Penn (1994) reported a high level of agreement in the value priorities of male and female students. The findings of this explorative study indicate that sex-based differences in the value profiles of business students in South Africa are in line with those of other global studies. Male and female students of the 1960s differed in terms of five of the six evaluative attitudes.
Among the students of the 1990s, male and female differences were limited to only three evaluative attitudes. The findings of this study are consistent with the 1990s’ global findings and with those of researchers such as McCarthy (1990) and Simmons & Penn (1994), who have noted "a high level of agreement" in the value priorities of male and female students. The findings of this study provide some insights useful to both business educators and business managers as they grapple with changes in personal values and ethical standards in the workplace. As personal values are such a powerful influence on human behavior (Freeman, Gilbert & Hartman, 1988), information about specific personal value systems could help business managers understand the motives behind different ethical behaviors of employees. In addition, knowledge of the differences between the value profiles of male and female business students could be useful to both academicians and practitioners in understanding gender-based differences regarding ethical issues. The results of the study indicated that students in this sample ranked social or humanitarian values highest, theoretical values second highest, economic values third, political values fourth and aesthetic values lowest. These findings are generally consistent with those of researchers who have examined values in the general student population in other countries over the past decade. To promote ethical behavior in business, many universities, and business schools in particular, increasingly require their students to take a course in business ethics. This paper provides educators with some ideas of what may be required. The focus is on business students, as it is argued that the majority of them will be directly exposed to ethical situations in their future employment. Research on value profiles and trends among students and business students around the world is well documented.
Studies such as those by Giacomino and Akers (1998), Allport, Vernon and Lindzey (1960), Karassavidou and Glaveli (2006), Kumar and Nonis (1997) and Chan and Leung (2006) are among many that investigate the value profiles of students. In South Africa, the lack of research on the value profiles of business students becomes critical in view of the fact that declining moral values continue to disturb the South African business community, and public perceptions of the country’s business remain a concern for the business community and higher education. An understanding of the value profiles of business students would therefore give business educators and managers some insight into the current state of the values of business students. The information would also be useful in devising instructional strategies on ethical matters. This study provides empirical evidence to help understand the personal values of university business majors in South Africa. Research (Sweeny & Fisher, 1998) supports the premise that one’s personal values influence behavior, including managerial and corporate strategy decisions. In the following paragraphs, values are first defined, after which previous research on the value profiles of students is outlined. A discussion of the research method, results and conclusions follows the theoretical outline. 
The concept of personal values reflects the interest of several disciplines. This study focuses on the view of psychology, which examines values from the standpoint of attitudes and personal motives. The subject of personal values has been a topic of research and discourse in the social sciences for many years. The studies of Allport, Vernon and Lindzey (1960) and Rokeach (1973) have been the most influential studies of values. Both Allport et al. (1960) and Rokeach (1973) used Spranger’s theory that describes the basic value systems (Spranger, 1929). Rokeach (1973: 5) defined a value as “an enduring belief that a specific mode of conduct or end-state of existence is personally or socially preferable to an opposite or converse mode of conduct or end-state of existence.” Values can be distinguished from similar concepts such as attitudes because the former are relatively enduring beliefs that transcend specific objects or situations, whereas the latter are focused on a specified object or situation. Because they occupy a central position within one's cognitive makeup, values may be viewed as the determinants of specific attitudes and behavior (Rokeach, 1973). Schwartz (1992: 2) defines values as “desirable goals varying in importance that serve as guiding principles in peoples’ lives”. Eaton and Giacomino (2001) concluded that regardless of the meaning one chooses for values, the common theme among those who have conducted research on values is that values influence behavior. Fritzsche (1995) also linked values to behavior. Callaghan (1996) indicates that values underpin a person’s priorities, decisions and behavior.
Mediating Effects of Job Characteristics on Job Satisfaction and Organizational Commitment of Taiwanese Expatriates Working in Mainland China
Dr. Sheng Wen Liu, Transworld Institute of Technology, Taiwan, R.O.C.
Dr. Ralph Norcio, Lynn University, FL
With a population of 1.2 billion, mainland China has become a major target country
for many foreign companies looking to expand their businesses because of its
inexpensive labor and its large market. Since 1987, many manufacturers in
Taiwan have moved to mainland China to reduce labor costs. In 2006, there
were 70,256 companies from Taiwan operating in mainland China with fiscal
expenditures exceeding US $42.81 billion (Ministry of Commerce of
the People’s Republic of China, 2006). If Taiwan’s foreign direct
investment (FDI) and offshore investment expenditures were included, Taiwan
would have had the second largest FDI in mainland China (Department of
Investment Services Ministry of Economic Affairs, 2006). The purpose of
this study is to investigate the mediating effects of job characteristics on
job satisfaction and organizational commitment of Taiwanese expatriates
working in mainland China. Through a snowball sampling plan, the entire
accessible population of 6,156 Taiwanese expatriates was invited to
participate by e-mail – resulting in a valid sample of 389 responses. The
methods of data analysis used in this study consisted of exploratory factor
analysis (EFA), internal consistency reliability, and moderated multiple
regression (MMR). Findings indicated that (a) job characteristics mediated
the positive impact of intrinsic job satisfaction on affective commitment;
and (b) job characteristics mediated the negative impact of extrinsic job
satisfaction on affective commitment and normative commitment. A further
study to replicate the research in different countries in order to explore
the relationships among job characteristics, job satisfaction, and
organizational commitment of expatriates was recommended. Since mainland
China instituted its “Open Door” policy, there has been a flow of foreign
direct investment into the Chinese mainland that has resulted in a
substantially increased number of foreign business executives working there
(Selmer, 1998). Moreover, for many western organizations, mainland China
has become a very important country in which they are willing to expand
their Asian production and marketing operations. It is
the largest recipient of international foreign direct investment in the
world (Tung & Worm, 2001; Bureau of Foreign Trade of Taiwan, 2006).
According to the Ministry of Commerce of the People’s Republic of China
(2006), the value of its 590,105 foreign investments is in excess of US
$678.24 billion. Therefore, organizations have increasingly realized that
it is necessary not only to have expatriate employees who are willing to
live and work in mainland China, but also to have a staff that identifies
closely with and supports the organization. In addition, more Taiwanese
manufacturers are expanding to mainland China in search of less expensive labor.
Before 1991, there were only 3,884 Taiwanese companies with fiscal
expenditures exceeding US $0.86 billion. However, in 2006, there were
70,256 companies from Taiwan operating in mainland China with fiscal
expenditures exceeding US $42.81 billion (Ministry of Commerce of the
People’s Republic of China, 2006). Moreover, Taiwan is the seventh
largest source of foreign direct investment (FDI) in mainland China. Hong
Kong ranks first, followed by the United Kingdom, Japan, Korea, Germany, and
the United States. If Taiwan’s FDI and offshore investment expenditures were
included, Taiwan would have the second largest FDI in mainland China
(Department of Investment Services Ministry of Economic Affairs, 2006).
Therefore, it is important for Taiwanese human resource management scholars
to undertake a study of this issue, and for Taiwanese managers to
understand Taiwanese expatriate employees’ experiences. The construct
most often studied to explain employee attachment or loyalty to an
organization is organizational commitment (Sommer, Bae, & Luthans, 1996).
Two general forms of organizational commitment have been defined by
theorists: moral and calculative. Moral orientation is the attitude in the
form of an attachment between an individual and an organization. This is
attitude-based commitment that includes identification, involvement, and
loyalty. It tends to make employees desire to maintain membership in the
organization and reduces their desire to leave, as they identify strongly
with the organization’s goals and values (Mowday, Porter, &
Steers, 1982; Park, Gowan, & Hwang, 2002). The calculative perspective is
based upon exchange theory that explains organizational commitment as an
investment that people make when they join an organization. After
membership, all actions taken by the person are considered to justify the
act of joining (Barge & Schlueter, 1988; Sager & Johnstone, 1989). In
addition, organizational commitment could be linked to employees’ attitude
and behavior, such as intention to leave, absenteeism, actual turnover, and
customer service quality (Hartmann & Bambacas, 2000; Khan, 2005; Malhotra &
Mukherjee, 2004). Employees who have a higher level of commitment to their
organization will exert higher levels of effort toward the organization, and
identify with the organization’s goals (Scholl, 1981). Meyer and Allen
(1984) conducted two studies, sampling 64 introductory
psychology students and 130 employees from several administrative
departments of a large university, to test the side-bet theory. From
their research findings, they proposed a model of organizational commitment
(Meyer & Allen, 1991) in which organizational commitment is conceptualized
in three ways: affective commitment, normative commitment, and continuance
commitment. Affective commitment is a sense of attachment and a feeling of
belonging to the organization. Normative commitment is a feeling of
obligation on the part of employees to maintain employment. Continuance
commitment is an awareness of costs associated with leaving the organization
or awareness of lack of alternatives (Hartmann & Bambacas, 2000; Tan &
Akhtar, 1998). Job satisfaction was defined by Locke (1976) as an emotional
state resulting from job experiences, leading a worker to feel
positively or negatively about his or her job. Robbins and Coulter (1996)
stated that job satisfaction is about the general attitude of employees
toward their jobs. Employees’ attitudes are likely to reflect their job
satisfaction. Furthermore, job satisfaction has been conceptualized in many
ways. It is not a unidimensional concept, and is of wide interest to people
employed in organizations, and also to those who study organizations (Zangaro
& Soeken, 2005). One popular conceptualization, proposed by
Locke (1976), is the intrinsic, extrinsic and general job satisfaction model,
which is based on Herzberg’s motivation-hygiene theory (Locke, 1976; Naumann,
1993b). Intrinsic satisfaction is obtained from performing the work and
experiencing feelings of accomplishment, self-actualization, and identity
with the job. Extrinsic satisfaction is obtained from the reward bestowed
on an individual by superiors, colleagues or the organization, and can take
the form of compensation, recognition, or advancement. General satisfaction
is an aggregate of satisfaction with various job activities or combination
of several measures of overall satisfaction. No more prominent
conceptualization of job satisfaction has been found in the literature,
and the intrinsic, extrinsic and general distinction
seems to be the most appropriate concept for international research (Naumann,
1993a; Naumann, 1993b; Zangaro & Soeken, 2005).
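The mediation findings summarized above can be illustrated with a simplified, regression-based sketch on synthetic data. Everything below is an assumption for illustration only: the variable names, the synthetic scores, and the Baron-and-Kenny-style check itself (the study used EFA, reliability analysis and moderated multiple regression, not this procedure). The idea shown is that the direct effect of satisfaction on commitment shrinks once the mediator, job characteristics, is controlled for.

```python
def simple_ols(x, y):
    """Slope of y regressed on x (with intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return sxy / sxx

def two_predictor_ols(x, m, y):
    """Slopes of y on x and m jointly, via normal equations on centered data."""
    n = len(x)
    mx, mm, my = sum(x) / n, sum(m) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    smm = sum((a - mm) ** 2 for a in m)
    sxm = sum((a - mx) * (b - mm) for a, b in zip(x, m))
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    smy = sum((a - mm) * (b - my) for a, b in zip(m, y))
    det = sxx * smm - sxm ** 2
    b_x = (sxy * smm - sxm * smy) / det
    b_m = (sxx * smy - sxm * sxy) / det
    return b_x, b_m

# Synthetic scores: satisfaction (X) drives job characteristics (M),
# which in turn drive commitment (Y); the "noise" is a fixed pattern.
satisfaction = list(range(1, 21))
noise1 = [1 if i % 2 == 0 else -1 for i in range(20)]
noise2 = [1 if i % 4 < 2 else -1 for i in range(20)]
job_char = [0.8 * x + e for x, e in zip(satisfaction, noise1)]
commitment = [0.9 * m + e for m, e in zip(job_char, noise2)]

c_total = simple_ols(satisfaction, commitment)    # total effect of X on Y
b_x_direct, b_m = two_predictor_ols(satisfaction, job_char, commitment)
# b_x_direct is near zero while c_total is not: M carries the effect of X
```

With these constructed data, the direct slope of satisfaction nearly vanishes once job characteristics enter the regression, which is the signature of mediation the abstract describes.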
Strategic Analysis: Blockbuster Case Study
Yan Xie, Nova Southeastern University, Fort Lauderdale, FL
I-Hsiang Lin, Nova Southeastern University, Fort Lauderdale, FL
This research intends to analyze the strategic situation of Blockbuster – a global provider of in-home movie and game entertainment – and to provide strategy suggestions for Blockbuster. Case analysis is used in this paper. First, this paper reviews Blockbuster’s background information, including mission, vision, and current strategies. Then it analyzes Blockbuster’s internal environment (through the resource-based view of the firm) and external environment (through Porter’s five forces). Next, it uses a SWOT analysis to identify the gaps between the current situation and the vision and mission. Finally, we propose an action plan to close the gaps. Blockbuster, Inc. (NYSE: BBI) is a global provider of in-home movie and game entertainment, with over 8,000 stores throughout the Americas, Europe, Asia, and Australia. It is headquartered in Dallas, Texas. Currently, Blockbuster operates in the competitive home video and home video game industries, which include in-home movies, such as theatrical movies, direct-to-video products, etc., and game entertainment offered by traditional retail outlets, online retailers, and cable and satellite TV providers (Form 10-K, 2006). Blockbuster identifies its vision as “to be a complete source for movies and games” and summarizes its mission as: “to grow our core rental business while continuing to use our brand, our massive database, our stores and our studio relationships to deliver an even broader array of home entertainment to both existing and new audience” (Form 10-K, 2006). Blockbuster tries to satisfy customers’ needs to rent movies and games, as well as to buy and trade them (Antioco, 2004). Blockbuster’s current strategies focus on the revenue sharing agreement, the subscription rental program, the development of Blockbuster Online™, the elimination of late fees, Game Pass and Game Rush, and reducing employees in order to reduce operating costs. 
(1) The revenue sharing agreement states that Blockbuster keeps 60% of rental revenues and pays the rest to the studio owners, who allow Blockbuster to obtain videos at only $6 per video instead of the original purchasing cost of $65 per video. (2) Blockbuster Online™ (aka the Total Access program) allows online subscribers unlimited rentals for a $19.99 monthly subscription fee. Customers can rent three movies at a time with up to five in-store exchanges for free movies or discounted games (Blockbuster.com). (3) Blockbuster eliminated late fees in 2004. Customers are permitted a one-week grace period after the due date to return the products. However, if customers exceed the grace period, they have to purchase the product or pay a restocking fee of $1.25 or more (money.cnn.com). (4) Game Pass allows customers to play a predetermined number of games in stores for a monthly charge. Game Rush encourages customers to rent or trade game hardware and software at stores. (5) In order to cut costs, in recent years Blockbuster deferred annual merit pay, reduced employees’ hours, controlled inventory purchases, and closed some unprofitable stores (Sweeting, 2005). In August 2007, Blockbuster acquired Movielink, LLC, which offers customers the ability to legally download entertainment content for rental and purchase. The acquisition gives Blockbuster access to one of the largest libraries of downloadable movies (Anonymous, 2007) and enables Blockbuster to provide customers the ability to download movies through computers, portable devices, networks, and approved set-top boxes. As of 2006, Blockbuster’s financial situation is as follows: profit margin is -1.69%, return on assets is -0.73%, return on equity is -13.29%, and the debt-to-equity ratio is 1.25 (Yahoo Finance, 2007). These ratios reflect that Blockbuster has low management effectiveness. Now the firm is facing the challenges of how to regain profits and reduce costs effectively. 
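The ratios cited in the case follow directly from standard definitions. As a minimal sketch, the snippet below computes them from hypothetical placeholder figures (the input numbers are invented for illustration, not Blockbuster's actual 10-K data; only the ratio formulas matter here).

```python
# Standard profitability and leverage ratio definitions.
def financial_ratios(net_income, revenue, total_assets, equity, total_debt):
    return {
        "profit_margin": net_income / revenue,          # net income / sales
        "return_on_assets": net_income / total_assets,  # net income / assets
        "return_on_equity": net_income / equity,        # net income / equity
        "debt_to_equity": total_debt / equity,          # leverage
    }

# A loss-making firm (negative net income), figures in $ millions, invented.
r = financial_ratios(net_income=-95.0, revenue=5500.0,
                     total_assets=3137.2, equity=700.0, total_debt=875.0)
```

A negative net income makes profit margin, ROA and ROE all negative at once, which is the pattern the case reports for 2006.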
After reviewing Blockbuster’s current situation, internal and external analyses were performed to address critical issues which affect Blockbuster’s ability to achieve its goals, and to identify the gaps between current strategies and the firm’s vision and mission. Finally, based on the SWOT analysis, a list of suggested actions is defined for Blockbuster to pursue. Internal analysis seeks to analyze the organizational resources of Blockbuster. The resource-based view of the firm (RBV) proposes that organizational resources can be categorized into tangible assets, intangible assets, and organizational capabilities. Tangible assets include production facilities, raw materials, financial resources, and so on. Intangible assets are brand names, organizational morale, technical knowledge, and experience. Organizational capabilities, the most critical category, refer to people and processes, and to the ability and ways of combining assets (Pearce & Robinson, 2005). The firm can achieve sustainable advantage if its resources are valuable, rare, imperfectly imitable and non-substitutable (Barney, 1991). In 2006, Blockbuster owned inventory worth $801,000,000, property, plant & equipment worth $580,100,000, intangible assets of $27,500,000, and total assets of $3,137,200,000. Compared to Netflix, Blockbuster’s major competitor, whose total assets are $608,779,000, Blockbuster has a relative advantage in tangible assets (Yahoo Finance). Blockbuster purchased inventory from studios on a title-by-title basis through purchase orders and revenue-sharing agreements. Approximately 81.8% of inventory purchased was under revenue sharing agreements during 2006. With the emergence of DVD, studios can sell DVDs directly at a very low price, and Blockbuster can purchase large amounts of DVDs with or without revenue sharing. Therefore, the significance of revenue-sharing agreements has declined (Form 10-K, 2006). 
Blockbuster’s distribution headquarters is located in McKinney, Texas. The products are shipped to delivery agents that in turn deliver them to stores. The delivery agents and stores are strategically located throughout the US. In addition to the McKinney distribution center, Blockbuster has 35 distribution centers throughout the US to support online subscription delivery. Compared to some competitors who rely purely on third-party distributors, the distribution centers pose a great advantage for Blockbuster because they help process and distribute large amounts of products at a low cost. Currently, Blockbuster is considering closing some unprofitable centers to reduce operating costs (Form 10-K, 2006). However, net income was negative in 2004 and 2005, indicating that, contrary to management’s expectation, the elimination of late fees did not stimulate enough subscriptions to offset the lost fee revenue (Arnold & Gruenwedel, 2005).
How to Support Entrepreneurial Learning Through an Online Pedagogical R&D Project? - Case: Continuator Entrepreneurship
Dr. Irja Leppisaari, Central Ostrobothnia University of Applied Sciences, Finland
Dr. Marja-Liisa Tenhunen, Central Ostrobothnia University of Applied Sciences, Finland
Riina Kleimola, Central Ostrobothnia University of Applied Sciences, Finland
Ownership transfers in enterprises will dramatically increase in Finland over the next several years. This creates new challenges for educational programmes directed at entrepreneurs continuing a business and realised collaboratively between higher education and the workplace. Sharing of tacit knowledge and establishing a skilled network are central challenges in continuator entrepreneurship and demand new practices in education and professional development. This study employs a design-based research model to investigate continuator entrepreneurship, and also makes use of authentic learning and online mentoring planned and implemented collaboratively between working life and higher education representatives as an online pedagogical R&D project. The collaborative development aims to promote entrepreneurial learning, support the ownership transfer and find potential continuators. During the next 5-10 years, a considerable proportion of the population of Finland will reach retirement age. Approximately every fifth SME in Finland, about 40 000 enterprises, expects the ownership of their company to change over the next five years. Finding a continuator is, however, a challenge for entrepreneurs intending to hand over leadership of their company; up to 46% of SMEs planning ownership transfer consider this issue problematic. (Pk-yritysbarometri, 1/2007; Peltoniemi, 2007.) In ownership transfer, the ownership of an enterprise can be retained within the family, passing on to the next generation. Alternatively when an entrepreneur retires or leaves the business for other reasons, ownership of the company can be sold to a third party, an individual or company outside the family (see Peltoniemi, 2007). Entrepreneurs who gain ownership and leadership of a company after a change and continue the company’s operations are known as continuator entrepreneurs. 
Successful ownership transfer is an important objective in business life, in the realisation of which continuator entrepreneurial training plays a central role. Educational needs arising from retirement and ownership change should be identified well in advance, and continuator skills development models be included as a flexible and meaningful part of the ownership transfer. According to the EU Green Paper (2003) alternative learning forms, such as distance education and mentoring, in which entrepreneurs learn from each other, deserve greater attention. Changes in working life also set challenges for Finnish institutions of higher education, which, in line with national directives (see e.g. Ministry of Education 2007), aim to increase the work-based orientation and local impact of their courses and respond to acute working life needs through flexible educational solutions. This requires universities to redefine their conceptions of learning and to develop new kinds of pedagogical practices. Combining learning and work and integrating theory and practice are at the core of these developments. Universities will need to develop partnerships with employers as they negotiate the structure and content of programmes. Reforming universities in terms of workplace learning will involve negotiation between academics and practitioners to benefit both practice and theory. (Boulton-Lewis, Pillay, and Wilss, 2006.) A focus of particular interest in this research paper is how these two needs, that is, 1) the need of working life to create new professional development models and 2) the need of higher education to develop a more working life based educational programme, can meet and enrich each other through a collaborative learning partnership. An interesting context is provided by a continuator entrepreneurship and related education as an online R&D project. 
In what ways do authentic work practices and online mentoring models and design-based research methods support educational programme design and implementation? Can they be deployed to support potential continuator entrepreneurs and ease the ownership transfer process? The National Knowledge Society Strategy (2007-2015) in Finland emphasises the development of staff training through collaboration between higher education and business. Measures which are specifically mentioned in the strategy include developing SME staff skills and encouraging work communities to adopt new learning methods. Furthering the professional growth of working life practitioners should also be retained as a central focus of higher education societal impact. Working life orientation is especially emphasised in universities of applied sciences, whose function is to offer courses based on the demands of developing working life, support the development of expertise, and engage in applied research and development work that serves both working life and regional development. The quality of teaching at university of applied sciences is raised by increasing the institutions’ working life orientation and making working life based R&D a more integral part of teaching. (Ministry of Education, 2007.) Institutions of higher education are investing more heavily in raising entrepreneurial motivation. Strengthening entrepreneurial learning requires, however, new openings and models in the collaboration between higher education and working life, which deploy modern information and communication technology in meaningful ways. The significance of flexible web-based educational solutions is especially emphasised in the SME sector, where the pace of work in many enterprises is so fast that it is impossible to find time for separate, formal training outside the workplace. Students, teachers and working life representatives can be brought together through web-based instruction independent of time and place. 
This enriches a student’s learning opportunities and strengthens interaction between theory and practice (Helenius and Leppisaari, 2004). Online pedagogical solutions tailored for SME needs should in fact be actively developed, so that through them entrepreneurs can develop their skills more effectively and flexibly. Through the development of R&D ventures that meet working life needs and promote the sharing and collecting of tacit knowledge and interaction among entrepreneurs, corporate growth factors and competitiveness can be supported and the requisites for growth entrepreneurship met.
International Diversification and Firm Performance: An International Analysis
Dr. Alfredo M. Bobillo, University of Valladolid, Spain
Dr. Felix Lopez Iturriaga, University of Valladolid, Spain
Dr. Fernando Tejerina Gaite, University of Valladolid, Spain
The internal and external competitive advantages of firms across different phases of internationalization depend on the resources used by industries for their financial development and growth. These advantages, as well as the influence of internal owners, facilitate the access of firms to foreign markets. This study analyzes the relationship between the degree of international diversification and firm performance in Germany, France, the U.K., Spain and Denmark. Our results support a curvilinear relationship between the degree of internationalization (hereinafter DOI) and firm performance, articulated in three stages, in which firms contend with industry reputation, technological and distribution barriers, and high transaction costs. These findings point to a cyclic process in a firm’s international expansion, where overcoming such barriers and developing governance and coordination mechanisms to minimize transaction costs become the main challenges the firm must overcome in order to compete at the worldwide level. The globalization of economic activity has allowed firms to rapidly shift their activities in the search for new markets. In the international business arena, the international diversification-firm performance relationship is generally assumed. Vernon (1971), Kogut (1985) and Dunning (1993) suggest a positive relationship between the extent of multinationality and the firm’s economic return on sales when a firm exploits its ownership advantages and its specific assets in foreign markets. Contextual factors have led to the eradication of tariff barriers and have encouraged the most competitive firms to exploit market imperfections and to use their competitive capabilities to start up new ventures in foreign markets, thus improving their performance and outcomes. In the same way, the growing increase of inward activities provides the firm with opportunities to develop new relationships with other foreign firms. 
It also facilitates the knowledge of new techniques of international trade and the use of different operating modes that lead the firm to reach a better position from which to perform its foreign operations (Karlsen et al., 2003). Diversification represents a growth strategy and has been shown to have a great impact on firm performance (Chandler, 1962; Ansoff, 1965). In later research, however, various studies looking for a link between performance and international diversification show divergent results. In some cases, results evidence a positive linear relationship, whereas in others they show a negative linear, U-shaped or even inverted U-shaped relationship. How can these apparently conflicting results be explained? Some authors like Contractor et al. (2003) suggest that the absence of a quadratic term in the equation would explain why initially only a linear function was found. Another reason might be the fact that the data used captured only part of the sigmoid (S-shaped) function. These authors propose a model in three stages that explains the relationship between performance and international diversification for service firms. Similarly, Capar and Kotabe (2003) build a linear model to justify the positive linear effect that international diversification has on ROS, and then they present a curvilinear model with a significantly higher explanatory power, since it introduces a squared term of the degree of internationalization (DOI). Likewise, Lu and Beamish (2004) present a theoretical framework based on three stages (S-shaped curve) for the study of multinationality and performance applied to internationally-operating Japanese firms. Thus, our main goal is to analyze whether the recently contrasted S-shaped relationship between performance and degree of internationalization for service firms might be applied similarly to industrial firms. 
In addition, our paper has incorporated the characteristics of firms’ ownership and governance structure which, until now, had scarcely been considered in this field. The remainder of the paper is organized as follows. Section 2 describes the hypotheses. Section 3 details the data, methodology and variables that have been used. Section 4 reports the main empirical findings. Section 5 summarizes the conclusions. It would be appropriate to test whether the relationship between international diversification, or the degree of internationalization (DOI), and firm performance fits the same pattern. In this sense, there are two different kinds of costs: some are a result of entering foreign markets and tend to decrease during the international expansion, while the governance and coordination costs run in parallel to foreign operations and entries into new markets. The balance between the benefits and costs of this international expansion might explain the hypothetical stages in the relationship between DOI and firm performance. At first, a firm can find it difficult to enter foreign markets and consequently may have to bear the costs associated with exploration and learning. As Benito and Tomassen (2003) and Buckley and Casson (1998) point out, the performance of foreign activities is directly related to the revenues and costs from the ownership, location and internalization of those operations. In our view, a business’s performance in this initial stage will depend on the human behavior that will be shaped by the social, economic and institutional context of the country in which the international expansion takes place. As far as transaction costs are concerned, firms find it difficult to overcome them in the initial stage of foreign operations. Therefore, these costs will at first offset the benefits of international expansion and will delay positive firm performance. 
Consequently, in this stage, the firm will seek to minimize the governance costs related to inter- or intra-organizational operations, looking for the most efficient solution in the long run. Throughout this period, adaptation problems, time and resources spent on supervision, better understanding of legal rules and definition of goals, as well as problems that arise from communication errors, generate higher transaction costs that offset positive firm performance in the initial operations abroad. Thus, a negative relationship between ex post costs and the profitability of such operations should be expected. With international expansion, the acquisition of foreign market knowledge reduces the costs associated with these operations. In this new phase, the firm fits the environment better and provides itself with better information in order to reduce the possible supervision costs. At the same time, international diversification allows it to exploit advantages across different markets, thus favoring the development of competitive capabilities in foreign markets. All this will help the firm to minimize costs in contractual relationships and to obtain, along with an increment of foreign sales, a positive performance in this mid-stage of its international expansion.
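The three-stage (horizontal-S) DOI-performance relationship discussed above can be sketched as a cubic whose slope changes sign twice: performance declines at low DOI (entry and learning costs), rises at intermediate DOI (exploiting advantages across markets), and declines again at high DOI (governance and coordination costs). The coefficients below are invented for illustration and do not come from any of the cited studies.

```python
# Hypothetical cubic DOI-performance curve, DOI scaled to [0, 1].
def performance(doi, b1=-1.5, b2=4.5, b3=-3.0):
    """perf = b1*DOI + b2*DOI**2 + b3*DOI**3 (illustrative coefficients)."""
    return b1 * doi + b2 * doi ** 2 + b3 * doi ** 3

def slope(doi, h=1e-5):
    """Central-difference derivative: its sign indicates the current stage."""
    return (performance(doi + h) - performance(doi - h)) / (2 * h)
```

This is why, as the abstract notes, a regression that omits the squared and cubed DOI terms can only recover a single (possibly misleading) linear trend from such data.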
Analysis of Regional Competition Efficiency of the Hospitals in Taiwan: A Case Study
Ching-Kuo Wei, Oriental Institute of Technology, Taiwan
Mao-Lung Liao, Cardinal Tien Hospital, Taiwan
This research analyzes the regional competition efficiency of the target hospitals in the Taipei area of Taiwan using Data Envelopment Analysis (DEA) and finds that, in recent years, the performance of the target hospitals in the medical environment of the Taipei area has been unsatisfactory. From 2002 to 2005, the hospitals were efficient only in 2002. After further analysis, we find that the main reason for the inefficiency between input and output of the target hospitals is improper scale. Moreover, even after merging, the performance of the hospitals remained unsatisfactory. Thus, the hospitals must reduce input and increase output to upgrade their operational performance. In addition, they should find the right development direction to maintain competitive advantages in the Taipei area and fulfill the objective of sustainable operation. In recent years, the medical environment of Taiwan has changed gradually. National health insurance was implemented in 1995. Before the implementation, there were over 800 hospitals in Taiwan; two years later, there were only just over 500 hospitals (a reduction of about 35%). In 2002, the Global Budget System was introduced, through which the government steered medical services by budget and controlled the growth of medical expenditure. In 2004, the Bureau of National Health Insurance promoted the Prominence Project, which resulted in unsteadiness of hospital revenues and medical revenues. Hospitals thus faced difficulties and some even went out of business. This impact on the medical environment significantly influenced hospital operation in Taiwan. Among the medical regions, the Taipei area has the richest medical resources; thus, hospitals in the Taipei area face the most severe competition. This research aims to analyze the operational efficiency of the hospitals in the Taipei area. The research target is a public hospital formed by the merger of Banciao Hospital and Sanchong Hospital in 2004, which provided medical and health services for the residents of Taipei County.
However, since it operates under severe competition, only by understanding both itself and its rivals can it maintain the competitive advantages of operational efficiency and a sustainable business. DEA is a non-parametric linear programming model for frontier analysis of multiple inputs and outputs of decision-making units (DMUs, e.g., hospitals), developed by Charnes et al. (CCR model) (Charnes et al., 1978) and extended by Banker et al. (BCC model) (Banker et al., 1984). A detailed introduction to DEA theory is provided by Cooper et al. (2000). The CCR model assumes constant returns to scale (CRS), while the BCC model allows for variable returns to scale (VRS). The input-oriented linear program of the CRS model is:

\[
\min\ \theta - \varepsilon\Big(\sum_{i=1}^{m} s_i^- + \sum_{r=1}^{s} s_r^+\Big)
\quad \text{s.t.} \quad
\sum_{j=1}^{n} \lambda_j x_{ij} + s_i^- = \theta x_{i0},\ i=1,\dots,m; \qquad
\sum_{j=1}^{n} \lambda_j y_{rj} - s_r^+ = y_{r0},\ r=1,\dots,s; \qquad
\lambda_j,\ s_i^-,\ s_r^+ \ge 0.
\]

Through the CRS model, a DMU’s technical efficiency \(\theta\) can be calculated, where \(\lambda_j\) are the intensity weights, \(s_i^-\) and \(s_r^+\) are the input slack and output surplus, respectively, \(\varepsilon\) is a non-Archimedean constant, \(x\) is the input (there are m inputs) and \(y\) is the output (there are s outputs). Banker et al. (1984) proposed the VRS model, which adds the convexity constraint \(\sum_{j=1}^{n} \lambda_j = 1\) to the program above, to calculate pure technical efficiency and thus separate technical efficiency into pure technical efficiency and scale efficiency. Through the VRS model, a DMU’s pure technical efficiency can be calculated; the ratio of the CRS efficiency to the VRS efficiency is the scale efficiency. Banker (1984) proposed the most productive scale size (MPSS) to examine the production scale of inefficient units. Banker and Thrall (1992) proved with a theorem that, when the sum of weights \(\sum_j \lambda_j\) of a certain DMU’s reference set equals 1, the input of one unit of production factor produces one unit of output and returns to scale remain constant. When \(\sum_j \lambda_j < 1\), the DMU is in a situation of increasing returns to scale, meaning the input of one extra unit of production factor produces more than one unit of output.
Therefore, in order to promote the organization’s operational efficiency, the facility scale should be expanded, adding input so as to gain proportionally more output. Conversely, if \(\sum_j \lambda_j > 1\), the DMU is in a situation of decreasing returns to scale, meaning the input of one extra unit of production factor produces less than one unit of output; input should then be cut down and the facility scale adjusted toward the most productive scale size. According to Cooper et al. (2000), the improvement targets for an inefficient DMU can be obtained by:

\[
\hat{x}_{i0} = \theta^* x_{i0} - s_i^{-*}, \qquad \hat{y}_{r0} = y_{r0} + s_r^{+*},
\]

where \(\hat{x}_{i0}\) and \(\hat{y}_{r0}\) are the improvement targets for input and output, \(\theta^*\) is the optimized efficiency, \(x_{i0}\) and \(y_{r0}\) are the input and output observations, and \(s_i^{-*}\) and \(s_r^{+*}\) are the optimal slacks of the inefficient DMU. According to Färe, Grosskopf and Lovell (1994), the input-oriented Malmquist productivity change index can be written as:

\[
M_I\big(x^{t+1}, y^{t+1}, x^{t}, y^{t}\big) = \left[ \frac{D_I^{t}\big(x^{t+1}, y^{t+1}\big)}{D_I^{t}\big(x^{t}, y^{t}\big)} \cdot \frac{D_I^{t+1}\big(x^{t+1}, y^{t+1}\big)}{D_I^{t+1}\big(x^{t}, y^{t}\big)} \right]^{1/2},
\]

where \(D_I^{t}\) denotes the input distance function measured relative to the period-\(t\) technology.
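As a minimal computational sketch of the envelopment models above (dropping the non-Archimedean \(\varepsilon\) term and the second-stage slack maximization, and assuming SciPy is available), the CRS and VRS efficiencies can be solved with an off-the-shelf LP solver; the single-input, single-output hospital data here are hypothetical:

```python
# Input-oriented DEA envelopment models via linear programming.
# Decision vector is [theta, lam_1..lam_n]; the data below are hypothetical.
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, j0, vrs=False):
    """Return theta* for DMU j0. X: (m inputs x n DMUs), Y: (s outputs x n DMUs).

    CRS (CCR): min theta  s.t.  X @ lam <= theta * X[:, j0],  Y @ lam >= Y[:, j0].
    VRS (BCC): adds the convexity constraint sum(lam) == 1.
    """
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(n + 1)
    c[0] = 1.0                           # minimize theta
    A_ub = np.zeros((m + s, n + 1))
    b_ub = np.zeros(m + s)
    A_ub[:m, 0] = -X[:, j0]              # X @ lam - theta * x0 <= 0
    A_ub[:m, 1:] = X
    A_ub[m:, 1:] = -Y                    # -Y @ lam <= -y0
    b_ub[m:] = -Y[:, j0]
    if vrs:                              # convexity constraint of the BCC model
        A_eq = np.zeros((1, n + 1))
        A_eq[0, 1:] = 1.0
        b_eq = [1.0]
    else:
        A_eq = b_eq = None
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n + 1))
    return res.fun

X = np.array([[2.0, 4.0, 6.0, 8.0]])     # one input per DMU (e.g., staff)
Y = np.array([[2.0, 3.0, 4.0, 4.0]])     # one output per DMU (e.g., patient visits)
ccr = dea_efficiency(X, Y, 1)            # technical efficiency (CRS)
bcc = dea_efficiency(X, Y, 1, vrs=True)  # pure technical efficiency (VRS)
scale = ccr / bcc                        # scale efficiency
```

With this toy data, DMU 1 is technically inefficient under CRS but lies on the VRS frontier, so its inefficiency is attributable entirely to scale, illustrating the CRS/VRS decomposition described above.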
Current Shifts in Business Training: Evidence from Romania
Dr. Cosmin Joldes, Academy of Economic Studies, Bucharest, Romania
Dr. Alexandra Horobet, Academy of Economic Studies, Bucharest, Romania
Managers are becoming more aware of the value that investment in intangible assets such as human resources, as opposed to mere expenses, can produce at the company level, and of the fact that human resource activities can focus on key business concerns and in turn drive greater growth and eventually higher market value. In this framework, the core-competency perspective focuses attention on the importance of knowledge creation and learning processes for building and maintaining competitive advantage in a world defined by globalization, demographic change, and the rise of the knowledge worker. Our paper explores the major shifts that are occurring in one critical activity related to human resources, i.e. business training, and discusses recent evidence from the Romanian market, one of the most active and fast-developing countries in Central and Eastern Europe. The primary activities in any company, such as production, operations, sales, and service, are seen as directly connected to the value-creation process linked to the company’s products or services offered to customers. Other activities like human resources, IT, and administration have traditionally been considered support activities, contributing only marginally to the effectiveness or efficiency of the primary activities. Consequently, these other activities were seen as only indirectly adding value to the company’s products or services. Even today, these support activities, including human resources, are considered “cost centers” rather than investments, owing to the manner in which most executives perceive their benefits and integrate the activities into the organizational structure of the company.
An acknowledgement and understanding of the importance of identifying the sustainable sources of competitive advantage needed for the creation of value for stakeholders in a highly globalized and knowledge-oriented economy indeed generated a new approach to firm valuation, one pioneered by Rappaport (1986, 1987, 1992). This new approach offers a framework for linking management decisions and strategies to value creation and focuses executives’ attention on how to plan and manage firm activities to increase value for shareholders and at the same time benefit other stakeholders. Under these circumstances, as managers begin to be more aware of the value that investment in intangible assets, such as human resources, as opposed to expenses can produce at the company level, human resource activities can focus on key business concerns, which in turn will drive growth and eventually market value. Our paper explores the shifts that are occurring in one critical activity related to human resources, i.e. business training, and discusses recent evidence from the Romanian market, one of the most active and fast-developing countries in Central and Eastern Europe. First, we analyze the changing role of human resource management in a knowledge-intensive economy; secondly, we present and comment on the results of recent surveys that offer insight into the market for business training in Romania. The last section concludes and presents ideas for future research. Today, most managers recognize the strategic implications of the knowledge-based economy and understand that skilled and motivated people are critical for the success of any firm that wishes to remain competitive in the new economy currently emerging. In the late 1980s, the search for more dynamic and sustainable advantage led many managers to supplement their analysis of external competition with an internal competency assessment.
Pfeffer (1994) describes how changing market conditions reduced the importance of traditional sources of competitive advantage, such as patents, economies of scale, access to capital, and market regulations. Although this change does not mean that such assets are no longer valuable, it is now evident that they cannot offer a company the needed differentiation in a global economy driven by innovation, speed, adaptability, and low costs. This realization came with the understanding that resources and competencies would be more and more difficult to imitate, so, in that framework, the core-competency perspective needed to focus its attention on the importance of knowledge creation and learning processes for both building and maintaining competitive advantage. In such an economy, the core competencies and capabilities of employees who helped develop new products, provide world-class customer service, and implement organizational strategy become more influential (Becker, Huselid, Pickus and Spratt, 1997). Regrettably, at the time, companies came to acknowledge that their employees, no matter their level within the hierarchy, were not prepared for the new knowledge-intensive tasks. By definition, competency-based strategies are dependent on people, since scarce knowledge and expertise are the factors that drive the development of new products, and personal relationships with customers are central to a flexible market response. Individuals started to be seen as a key strategic resource, and business strategy was increasingly directed toward a human resource approach. The implications for top management were profound. First, human resources issues had to be moved higher in the company’s hierarchy and on the agenda of company strategic priorities.
Secondly, and even more significantly, traditional strategic planning processes would have to undergo a transformation that includes financially calibrated performance measurement and reward systems recognizing the strategic importance of human resources, apart from company financial resources. As more and more companies understood the decisive importance of human resources, the so-called “War for Talent” began. This concept was pioneered by McKinsey researchers, who in 1997 conducted a yearlong survey entitled “The War for Talent” and then published updated research in 2001 (see Michaels, Handfield-Jones and Axelrod, 2001). Additionally, Bartlett and Ghoshal (2002) made the case for the evolving role of human resources and saw human resource professionals as key players in the design, development, and delivery of company strategy (see Table 1).
Applying Quality Function Deployment in the Manufacturing Industry: A Review & Case Study in Production
Dr. Zeynep Ocak, Yeditepe University, Istanbul, Turkey
In the manufacturing industry, quality function deployment (QFD) provides a comprehensive, systematic approach to ensuring that customer requirements and expectations are met by applying improvements to the design, production and management phases. In this study, QFD was applied to a leading Medium Density Fibreboard (MDF) manufacturing company in Turkey. The case-study company and its two competitors were compared in terms of customer requirements and product quality. The results of this study identified specific improvements that need to be performed in the focus company. Successful companies in today’s dynamic global economy are those that are able to efficiently design, develop, and manufacture products that customers will prefer over those offered by competitors. At the center of this idea is the need to deliver product designs that meet customer needs while making the designs manufacturable at a competitive cost. To this end, QFD has been recognized as a method for transforming consumers’ demands into “quality characteristics” and developing a design quality for the finished product by systematically deploying the relationships between the demands and the characteristics (Akao, 1990). QFD is also viewed as a strategic planning and communication tool for linking quality, as defined by the customer’s voice, to appropriate quality and cost factors or attributes at all levels of the design and production process. QFD is a concept introduced by Akao in Japan in 1966. It was first put into use at Mitsubishi’s Kobe shipyard site in 1972. Dr Yoji Akao defined the concept as follows: “QFD provides specific methods for guaranteeing quality at each stage of the product development process, starting with design. In other words, it is a method for introducing quality right from the design stage to satisfy the customer and to transform customer requirements into design objectives and key points that will be required to ensure quality at the production stage” (Akao, 1990).
Later, in 1983, QFD was introduced to the USA and has since spread quickly to many other countries. The basic function, and the greatest benefit, of QFD for a company is that it provides a formalized method for combining the creativity of the company’s design and manufacturing capability with customer needs. The QFD technique deploys customer needs through a series of matrices. These matrices help to correlate and visually guide this deployment through the design and manufacturing process. The “WHATS” (demanded characteristics) are correlated in the matrix with the proposed design parameters, the “HOWS”. The strength of the correlation between the “HOWS” and the “WHATS” indicates how well the product and process features are driven by the demands of the customer. When used correctly, QFD is viewed as an effective way to embed the voice of the customer into new product and process design. Because of its numerous benefits, QFD has been successfully used in various fields besides manufacturing, e.g. education, policy management, software development, and the tourist industry. However, the focus of this study is the deployment of QFD in the manufacturing industry. The focus company is a leading Medium Density Fibreboard (MDF) manufacturing company in Turkey. For reasons of confidentiality, the focus company studied is identified as Company A and its competitors as Company B and Company C. Four customers of these companies were interviewed to collect the Voice of the Customer (VOC); these customers are identified as Customer A, B, C and D throughout this paper. The QFD process is a sequence of activities for processing customer values so that these values can directly shape the design and production of the product or service. The fundamental steps of this process are: (1) identify the customer; (2) identify what the customer wants; and (3) determine how to fulfill what the customer wants.
In order to identify their customers, organizations must objectively determine the group or groups that best describe their current and/or desired customer base. After the customer base has been identified, the demands of the customer are determined. These demands are commonly referred to as the “WHATS”. Once the WHATS are established, the QFD team then determines the requirements that would satisfy the WHATS. These requirements are commonly referred to as the “HOWS”. Whereas the WHATS are expressed in customer terms, the HOWS are expressed in technical, corporate terms. During the third phase, the QFD team incorporates all this information into a graphical display known as the “house of quality”. This house provides a framework that guides the team through the QFD process. It is a matrix that identifies the WHATS, the HOWS, the relationships between them, and criteria for deciding which of the HOWS will provide the greatest customer satisfaction (Glen, Motwani, Kumar and Cheng, 1996). The main QFD components are the deployment tables, the matrices and the conceptual model. A quality deployment table is a chart that represents levels of deployment of a given subject. Information is grouped by affinity (similarity) and ordered in levels from the left-hand side of the table towards the right-hand side, with more detail obtained from level one to levels two, three, and so on (Miguel, 2005). QFD belongs to the sphere of quality management methods, offering a linear and structured guideline for converting the customer’s needs into specifications for, and characteristics of, new products and services. The method involves developing four matrices, or ‘houses’, that are entered by degrees as a project for a given product or production process is developed on increasingly specific levels (Akao, 1990). These matrices relate the variables associated with one design phase to the variables associated with the subsequent design phase.
This set of matrices used in a given development is called the QFD conceptual model, which represents the whole development. It may consider a number of deployments, such as quality, technology, cost, and reliability deployments (Akao, 1998). In the present article, our attention focuses on the Planning Matrix, or House of Quality (HOQ) (Hauser and Clausing, 1988) (Fig. 1). This matrix provides a map for the design process, serving as a construct for understanding customer requirements (demanded quality) and establishing priorities among design requirements (quality characteristics) to satisfy them. In terms of QFD applications, a comprehensive literature review is provided by Chan and Wu (2002). Their work is based on a reference bank of about 650 publications. After a brief introduction to the historical development of QFD, especially in Japan and the USA, a categorical analysis is presented. In their paper, functional fields, applied industries and methodological development are highlighted in detail.
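The prioritization step in the house of quality is typically computed as a weighted sum: each HOW’s priority is the sum, over all WHATS, of customer importance times relationship strength, conventionally on a 9 (strong) / 3 (moderate) / 1 (weak) scale. A minimal sketch, with hypothetical MDF-flavored requirements, importances and relationship values:

```python
# House-of-quality HOW prioritization: priority_j = sum_i importance_i * rel_ij.
# All requirements, importances and relationship strengths below are hypothetical,
# using the conventional 9 (strong) / 3 (moderate) / 1 (weak) / 0 (none) scale.

def how_priorities(importance, relationships):
    """importance[i]: customer weight of WHAT i; relationships[i][j]: WHAT i vs HOW j."""
    n_hows = len(relationships[0])
    return [sum(importance[i] * row[j] for i, row in enumerate(relationships))
            for j in range(n_hows)]

whats = ["surface smoothness", "board strength", "low formaldehyde emission"]
importance = [5, 4, 3]                       # hypothetical customer ratings (1-5)
hows = ["fiber size control", "resin content", "press temperature profile"]
relationships = [[9, 3, 1],                  # surface smoothness vs each HOW
                 [3, 9, 3],                  # board strength vs each HOW
                 [1, 9, 0]]                  # formaldehyde emission vs each HOW

priorities = how_priorities(importance, relationships)   # [60, 78, 17]
best = hows[priorities.index(max(priorities))]           # "resin content"
```

The highest-scoring HOW is the design requirement most strongly driven by customer demands, which is exactly the ranking criterion the house of quality supplies to the QFD team.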
Method for Accelerating Transfer of Innovation and Technology to Technology-based SMEs in South Africa
Duncan H. Tungande, Chief Financial Officer-Tshumisano Trust, Innovation Hub, Pretoria, South Africa
According to South Africa’s National Research and Development Strategy (August 2002), the innovation pillar involves the establishment and funding of a range of technology instruments that are critical to promoting economic and social development. These include the two key technology platforms of the modern age, namely biotechnology and information technology. The Government has addressed the promotion of technology development, transfer, and innovation through different approaches, which can be typified as supply-side and demand-side. Naturally, these approaches are not mutually exclusive, but the emphasis of the measures recommended is shifting from supply-side to demand-side. In the supply-side approach, R&D activities carried out by state institutions are intended to create new technologies which may contribute to the mission of the state, to foster innovation and the productivity of the private sector. The emphasis is on the development of technologies. The demand-side approach emphasises co-operation by the state to improve the availability, adoption and use of technologies by SMEs (Small and Medium Enterprises), and activities to encourage investment in technology, education/training and information infrastructure. This paper discusses technology transfer (TT) in university-SME partnerships as a means of promoting innovation. The Department of Science and Technology established Tshumisano Trust to cultivate an entrepreneurial spirit within the populace of South Africa, mainly by providing innovative solutions to technology-based SMEs to address the pressing socio-economic goals of the Government. Tshumisano, which means cooperation or partnership, is the implementation agency for the TSP (Technology Stations Programme). The Trust provides technical and financial support to Technology Stations, which are based at Universities of Technology/Technikons.
The Technology Stations in turn offer technical support to existing SMEs in terms of technology solutions, services and training. Through the TSP, the agency has experienced significant growth in enriching the research and development (R&D) of UoTs (Universities of Technology) and in satisfying the needs of SMEs, which are the over-arching goals of the Programme. The Technology Stations are world-class providers of technology services to SMEs. The services to SMEs are provided by technical experts with the requisite skills and expertise. The experts range from professors, lecturers and postgraduates to external consultants, thus enriching the R&D of the host institution as well as solving technology-based problems experienced by SMEs. Furthermore, some of the SMMEs’ products attained SABS (South African Bureau of Standards) and ISO (International Organisation for Standardisation) certification, among other accreditation processes. In identifying specific needs of SMEs in terms of product and process improvement, the Trust has increased its stations from three in 2001 to fifteen in 2007 to accommodate a wide range of needs in diversified economic sectors. Frequently used concepts, such as technology transfer and technology diffusion, and public policies concerning innovation and related matters are discussed. Issues regarding technology transfer are mentioned, with an emphasis on the Department of Science and Technology’s (DST) instruments aimed at technology transfer, such as the Technology Stations Programme (TSP) run by Tshumisano Trust, and the Technology Innovation Agency (TIA), still to be formed. Partnerships in the technology transfer value chain are then discussed, and a typology of technological strategic alliances is presented. The new Government faced challenges in basic development. Having focused on the future for so long during the struggle, it now had to deal with the urgent service delivery needs of the present.
Not surprisingly, the new funding scenarios required re-direction of the remaining technology competencies towards instruments emphasizing quality of life and economic competitiveness. However, the emphasis was on reprioritising rather than on funding new instruments within this policy space. The White Paper on Science and Technology, approved by Cabinet in 1996, established a policy framework for science and technology in South Africa, according to South Africa’s National Research and Development Strategy (2002). The Government’s vision is to have a highly diversified downstream chemicals industry by 2014, and to build some world-leading sectors in downstream chemicals, amongst other things. In the global context, science and technology are being submitted to careful scrutiny by the South African government, the private sector, and society at large. We, in South Africa, must be able to answer the critical question: how do we, through science and technology, prepare our nation and its people for the 21st century? The White Paper is based on the understanding that science and technology do not exist for themselves, nor do they measure success in their own terms. It is the contribution of science and technology to the national system of innovation that makes the difference. Innovation is crucial to the achievement of competitiveness, employment creation, the enhancement of quality of life, the achievement of environmental sustainability, and the promotion of an information society. Increasingly, modern economists identify innovation and technological change as being as important as capital and labour in the achievement of economic growth. Innovation is that sublime process by which human creativity and ingenuity find expression in new products and services that add value to human existence. Therefore, we need change.
Firstly, the government must be much clearer about its priorities and about how these are translated into effective programmes that harness science and technology in the service of innovation, including resource allocation for science and technology. Indeed, this paper intends to signal a fundamental shift from the input measures of the past to a performance-driven culture where outputs, contribution and impact are regularly assessed in order to align the instruments of government to be effective in addressing national priorities. The review will provide the basis for clarification of institutional mandates, and for new funding instruments. The most important of these is the creation of the Technology Innovation Agency (TIA), which will promote large-scale projects involving participants from throughout the national system of innovation. This instrument will, for the first time, create the opportunity for significant linkages and consortia to be mobilised to address specific national priorities against much shorter time horizons than was previously possible. Consortia, co-operation and clear objectives will increase the productivity of our science and technology agencies by ensuring that technologies are commercialised by SMEs. The government must play an enabling role to stimulate the private sector to become, and remain, the major investor in science and technology. This paper will explore the creation of linkages to enhance the transfer of knowledge and technology between higher education institutions (technology-based), other government institutions and the private sector. In addition, attention is given to the harmonisation of programmes across a broader range of government departments without compromising the integrity of the line functions. In order to achieve our objectives, South Africa will be well served by a significantly increased public awareness of science and technology.
Our future is dependent on scientific and technological innovation, as we face challenges of power (electricity) shortages, global warming, disease, poverty and unemployment, to mention just a few. Significant advance planning is critically needed. We need to create a culture in which people are not the passive recipients of the benefits of science and technology, but feel empowered to become agents of technological change and innovation in their own right. This paper will examine in detail effective strategies for a fundamental shift towards innovation as a new source of wealth for our country. Science and, more especially, technology are critical ingredients in the system of innovation. Government, industry, labour, the academic community, scientists, engineers, technologists and the public at large need to forge a rich set of linkages, consortia and relationships to build a truly powerful engine for growth and provide solutions to our problems.
Technology-Related Privacy Concerns: A Critical Assessment
Cliona McParland, Dublin City University, Ireland
Dr. Regina Connolly, Dublin City University, Ireland
The exponential adoption of the Internet for transaction and interaction purposes continues unabated. Despite the obvious empowering benefits of the Internet, however, consumers are becoming increasingly aware of the ways in which technology can be used to collate information about them and of the ability of online vendors to use this information without their express permission. Vendors facing intense competition in the marketplace are under increasing pressure to gain a more sophisticated understanding of their consumers and thus view the collection of consumers’ personal and interaction information as essential to achieving that understanding. Awareness of this fact has accentuated consumers’ privacy concerns and in some cases impacted interaction intentions and behaviour. Similarly, in the work environment, employees’ awareness that communication-monitoring technologies are being used to monitor their email and Internet interactions has increased. Despite the importance of this issue, research on technology-related privacy concerns remains at an embryonic stage. Moreover, the literature indicates that much confusion surrounds the construct, and in many studies the construct is neither clearly defined nor operationalised. The aim of this paper is therefore to reduce that confusion by providing a brief review of the literature while outlining potential avenues worthy of future research. This paper provides a refined and holistic understanding of the construct and consequently makes a valuable contribution not only to information systems research but also to practitioners in their efforts to better understand the factors that predict and inhibit technology-related privacy concerns. Privacy has always been a contentious issue, as individuals strive to protect their sensitive information from misuse by others.
However, the advent of the Internet, combined with the increasing proliferation of technologies in both the marketplace and the workplace, has been matched by a heightened awareness amongst individuals that threats to their privacy exist and must therefore be addressed. Despite the empowering benefits of the Web, consumers are becoming increasingly aware that technology can also be used by online vendors to collect potentially sensitive information about them and that this information can be used without their express consent. For example, online transactions require customers to disclose considerably more personal and financial information than they would provide in offline transactions (Miyazaki and Fernandez, 2001). Marketers can use the trail of information that results from such Internet transactions - including information on the customer’s searches, comparisons, product and brand preferences, and purchase and post-purchase behaviour - to compose very precise customer profiles in their efforts to continuously learn about changing consumer needs. With this information, vendors then have the ability to provide individuals with specifically customised information, thus offering them a personalised shopping experience. From a vendor perspective, the consequence is increased customer satisfaction, which they hope will translate into increased retention and ultimately increased profitability within the marketplace. However, from a consumer perspective, the price of this personalised shopping experience may outweigh any customisation benefits, particularly when vendors have been known to sell information on consumers to third parties without the permission of the consumers concerned.
In the social science literature the importance of individuals’ privacy concerns is widely acknowledged (e.g. Konvitz, 1966; Powers, 1996; Froomkin, 2000; Rule, 2004; Cassidy and Chae, 2006), and privacy is recognised as a dynamic issue that has the potential to impact attitudes, perceptions, and even the environment and future technology developments (Crompton, 2001). Within the information systems field, while there is a growing awareness of the importance of technology-related privacy concerns, empirical research on the construct remains at an embryonic stage, and the limited number of existing studies tend to be limited in size and nature (Gefen and Straub, 2000; Cockcroft and Heales, 2005). Compounding the problem is the fact that some of these studies are beset by conflicting conceptualisations of the construct, as well as a lack of agreement regarding the factors that predict the perceptions, attitudes and behaviours of the consumers themselves. Consequently, it is difficult for privacy researchers within the information systems discipline to compare and contrast the results of previous studies in their efforts to advance understanding of the construct. Moreover, as far as it is possible to ascertain, there have been no studies to date on technology-related privacy concerns within an organisational context. The aim of this study therefore is to provide a concise and consolidated review of the technology-related privacy literature. The literature outlining the perceptions, attitudes and behaviours of consumers in relation to their technology-related privacy concerns will be reviewed, and a number of gaps in relation to technology-related privacy concerns will be outlined.
Privacy is a complex construct that has received the attention of researchers from a broad spectrum of disciplines including ethics (Platt, 1995), economics (Rust et al., 2002), marketing (Graeff and Harmon, 2002), management (Robey, 1979) as well as from the legal discipline, even as far back as 1890 (Warren and Brandeis). However, despite this interest, the construct remains beset by conceptual and operational confusion. For example, Tavani (1999) remarks that privacy is neither clearly understood nor clearly defined, while Introna (1996) comments that for every definition of privacy, it is also possible to find a counterexample in the literature. As a result, many researchers choose to define privacy specific to the focus of their study or the lens of their discipline in an attempt to evade this problem (Smith, 2001), and as a consequence the conceptual confusion that surrounds the construct remains undiminished. Unsurprisingly, these differing conceptualisations have manifested in similarly differing views regarding how the construct should be examined and measured.
Culture and Internal Competition in Romanian Hospitality Industry: Dimensions and Risks
Dr. Claudia-Elena Ţuclea, Academy of Economic Studies, Bucharest, Romania
Dr. Olimpia State, Academy of Economic Studies, Bucharest, Romania
Dr. Gabriela Tigu, Academy of Economic Studies, Bucharest, Romania
This paper presents the conclusions of quantitative research aiming to identify a relationship between the coordinates of organizational culture in Romania (investigated with Hofstede's model) and individual and organizational performance. The research tries to validate the hypothesis that the Romanian cultural model still stands in significant opposition to the competitive behavior that leads to increasing individual and organizational performance. Although the idea of reward is attractive, at least at the theoretical level, its boomerang effect is encountered most of the time: the Romanians want to have more money, but this motivates them to work harder only in the short term. Subsequently, the role of the two variables (reward and quality of work) reverses: work becomes a consequence of the reward. This fact seems to represent the expression of a collective frustration (still remaining from the communist regime), which generates a troubling attitude problem: the extrinsic motivation (salary and wages) erodes the intrinsic motivation. Internal competition within Romanian organizations does not necessarily lead to beneficial effects. The lack of collective performance and the negative effects at the personal level (stress, anxiety, even depression) are only a few of the undesirable consequences of applying Western managerial practices in an environment characterized by a collectivist culture. The issues presented in this paper have resulted from research conducted by the authors on the identification of the cultural values of Romanian organizations in the hospitality industry, and the impact of those values upon Romanian employees' behavior and labor performance. Although 17 years have elapsed since the change of the political regime, changes in the collective mentality are still lagging behind. 
The Romanians themselves explain the economic failures in terms of “mentality”. The phenomenon analyzed is far more complex, because the Romanian people generally exhibit a completely negative self-perception (as a nation) and a very low level of individual and collective self-respect (Heintz, 2005). Many Romanian historians have made a critical analysis of the Romanian personality features based on more than 2000 years of Romanian history, and most of their conclusions revolve around the idea that “the Romanians are experts in constructing on approximate, non-dogmatic bases”. This feature was not understood as a defect, but as a mechanism for surviving the invasions during the great migrations of the Middle Ages and subsequent periods of foreign domination. This “superficiality” helped the Romanians a lot during the communist regime, as well as during the Phanariot reigns. Some historians interpret this as a sign that “the Romanians are satisfied with appearances”, with “let’s pretend that …”, while other historians interpret it as “adaptability”. In any case, almost all Romanians agree that what is responsible for the lack of performance, particularly in the economy and in society in general, is the “Romanian mentality”, understood as a sum of negative attributes, especially when compared with the image that Romanians project of the “West”. The signals revealing the special respect the Romanians hold for the “Western world” are: the imitation of Western appearances, disregard of what is perceived as typically “Eastern”, emigration to the West and, at the national level, a strong desire to present a favorable image in the West (for example, the publication of a book entitled The Eternal and Fascinating Romania). The concept of “organizational culture” is very complex, partially invisible and, therefore, difficult to research. 
The dimensions of the well-known Hofstede model are: (1) power distance index (PDI), which designates the way a society deals with the idea of inequality among people; (2) individualism versus collectivism (IDV), which measures the intensity of the relationships an individual establishes with others; (3) masculinity versus femininity (MAS), which treats the division of roles in society between the genders (in masculine societies, traditional male social values permeate the entire society, including the mentality of women; among these values one can enumerate the importance attributed to recognition, professional performance, material accumulation and money-making skills; in feminine societies, the dominant values, for both men and women, are the ones traditionally associated with feminine roles: discretion, priority of inter-human relationships over material ones, preoccupation with the quality of life and protection of the environment, and compassion for others); (4) uncertainty avoidance (UAI), which defines the way a society deals with the idea that time runs in a single direction and that people are forced to live in uncertainty, because the future is always unpredictable; some societies try to avoid these uncertainties by various means such as technology, law and religion. In Romania, the study of organizational cultures is still in the pioneering phase: the first study was completed in 1997. The results were typical of the transitional state of Romanian society, which left its fingerprint on people's mentalities, values and attitudes. This first study succeeded in drawing the profile of a culture based on real, concrete Romanian organizations. The most recent research in Romania was performed by Interact together with the Gallup Organization in Romania, in January 2005, using the Values Survey Module instrument developed by the Institute for Research on Intercultural Cooperation (IRIC) founded by Geert Hofstede. 
This research demonstrated that Romania shares similar values with the other Balkan countries, namely large power distance, collectivism (a low level of individualism), femininity and a high degree of uncertainty avoidance. The study was performed on a sample of 1,076 people, representative of the Romanian population. The indices in this research can be quantified on a scale from 0 to 100, as follows: between 0 and 40, a low level; between 40 and 60, a medium level; above 60, a high level. Geert Hofstede has estimated that Romania exhibits very high levels for the indices representing power distance (90) and uncertainty avoidance (90), a low level for the index representing individualism, i.e. a high level of collectivism (30), and a moderate level of masculinity. The goal of our research is to recalculate Hofstede's indices for the Romanian space, in order to aid the general understanding of cultural values and employee attitudes regarding the ideas of internal competition and relative performance evaluation.
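The 0–100 banding described above is simple enough to express directly. The sketch below (illustrative only; the function name is hypothetical) applies the stated cut-offs to the reported Romanian scores; the handling of scores falling exactly on a boundary is an assumption, since the text leaves it ambiguous:

```python
# Classify a Hofstede index score using the bands stated in the text:
# 0-40 low, 40-60 medium, above 60 high. Treating exactly 40 as "low"
# and exactly 60 as "medium" is an assumption; the text is ambiguous.
def classify_index(score):
    if score <= 40:
        return "low"
    elif score <= 60:
        return "medium"
    return "high"

# Hofstede's reported estimates for Romania: PDI 90, UAI 90, IDV 30.
romania = {"PDI": 90, "UAI": 90, "IDV": 30}
levels = {dim: classify_index(s) for dim, s in romania.items()}
print(levels)  # {'PDI': 'high', 'UAI': 'high', 'IDV': 'low'}
```

This reproduces the paper's reading of the scores: very high power distance and uncertainty avoidance, and low individualism (i.e. high collectivism).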
The Problem and Solution of Export and Import Documents Presented Against Letter of Credit for Payment
Dr. Sut Sakchutchawan, Waynesburg University, Waynesburg, PA
Despite growing discrepancies in the presentation of export and import documents for payment during the past thirty years, no research has been done on where, why, and how the discrepancies in the documents occur. The objective of this research is to call attention to this neglect and to propose a resolution to the problem. Beginning with a description of the problem, the research describes this phenomenon as a significant worldwide issue for sellers, who are too often refused payment when banks discover discrepancies in export and import documents. The findings reveal that the discrepancies are caused by the excessive terms and conditions of the letter of credit and the ambiguous wording of the articles of the Uniform Customs and Practice. To solve these problems, this research recommends, first, that the wording of the letter of credit and of each article of the Uniform Customs and Practice be clear and concise. Secondly, a guideline with practical examples must be provided. Lastly, the personnel involved in documentary preparation must be certified to ensure they have sufficient skill to handle the documents properly. As importing and exporting businesses continue to grow as part of global production, more attention and emphasis have been placed on understanding the key aspects of international trade transactions. More exporters are looking to foreign markets to sell their products. The great benefit of exporting is that large revenue and profit opportunities are to be found in foreign markets. Many of the world's largest companies derive over half of their sales from outside their home countries. More importers are also looking for sources of supply from which to buy products. Companies and distributors seek out products, services, and components produced in foreign countries in order to reduce their costs. International trade comprises a large and growing portion of the world's total business. 
The growing importance of international trade and the pervasiveness of problems related to the process of importing and exporting require an ability to manage international trade transactions effectively. Exporters often face voluminous paperwork, complex formalities, and many potential delays and errors. Inexperienced exporters have a number of ways to gain information about foreign market opportunities and avoid common pitfalls that tend to discourage and frustrate novice exporters (Hill, 2004, p. 537). Many scholars in international business are interested in researching export marketing, export location, cultural values, international trade and environments, and so on, but they seem to overlook the problems exporters face in the presentation of documents for payment. This problem is growing and seems to be unstoppable in the import and export business. For decades, many international trading firms have complained that banks refused to pay them, without just cause, due to discrepancies in the export documents. This can be an extreme source of frustration to both importers and exporters worldwide, and it is very difficult to provide a persuasive explanation for the pattern. It would therefore be beneficial to society at large to study the problems of presenting discrepant import and export documents for payment and financing, and to find out what causes those discrepancies. It is clear that international trading firms, banks, and financial institutions worldwide would want to know how to resolve the problem of discrepancies in import and export documents. During the past decade, several types of discrepancy in import and export documents have developed: the presentation of import and export documents against letters of credit is often incorrect, resulting in many problems and unnecessary delays in collecting payment and financing. 
The problems of discrepant import and export documents still occur consistently, while the scramble to find solutions to them continues. Since this research employs descriptive research to find solutions to the problems of discrepancies in export and import documents, the descriptive hypotheses are: Hypothesis 1: Ho: The more excessive the requirements in the letters of credit, the more likely it is to find discrepancies. Hypothesis 2: Ho: The more ambiguous the wording of the Uniform Customs and Practice, the more likely it is to find discrepancies. In this research, if the null hypothesis is rejected, the alternative cannot be rejected. The alternatives for the above hypotheses are therefore as follows: H1: The more excessive the requirements in the letters of credit, the less likely it is to find discrepancies. H2: The more ambiguous the wording of the Uniform Customs and Practice, the less likely it is to find discrepancies. The condition of this research is that the null hypothesis is presumed true until a preponderance of the evidence indicates that it is false. Decision Rule for Hypothesis 1: the null hypothesis will be rejected if 45% or less of the data shows discrepancies in documents that do not comply with the excessive terms and conditions of the letter of credit. Decision Rule for Hypothesis 2: the null hypothesis will be rejected if 25% or less of the data shows discrepancies in documents that do not comply with the wording of the Uniform Customs and Practice. As business became more global and the export and import business played a more central role in international trade practices, it became implausible to make business agreements by handshake. Thus, letters of credit became a frequent instrument. 
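The two decision rules above reduce to a simple threshold comparison. A minimal sketch, under the assumption that "45% or less of the data" means the share of sampled discrepancies attributable to the stated cause (the function name and sample figures are hypothetical, not the study's data):

```python
# Apply the paper's decision rules: reject the null hypothesis when the
# share of discrepant documents attributable to the stated cause is at
# or below the given threshold (45% for Hypothesis 1, 25% for Hypothesis 2).
# This reading of the rule is an assumption; the text is terse.
def reject_null(discrepant_share_pct, threshold_pct):
    return discrepant_share_pct <= threshold_pct

# Hypothetical illustration: 60% of sampled discrepancies traced to
# excessive L/C terms -> the null for Hypothesis 1 is NOT rejected.
print(reject_null(60.0, 45.0))  # False
# 20% traced to ambiguous UCP wording -> the null for Hypothesis 2 IS rejected.
print(reject_null(20.0, 25.0))  # True
```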
According to Nelson, letters of credit (LC) were quite literally a letter addressed by the buyer's bank to the seller's bank stating that they could vouch for their good customer, the buyer, and that they would pay the seller in the case of the buyer's default (Nelson, 2000, p. 91). Weiss stated that letters of credit are an extremely flexible method of payment: they were used then, as they are now, for any transaction in which one or more parties require the guarantee of payment by a reputable bank (Weiss, 2002, p. 103). Letters of credit are the instrument of both the export and the import business, in that one party may request a letter of credit for a transaction involving goods or services when the other party is on the other side of the world. Sometimes called “trade credit” (Tuller, 1994, p. 148), the letter of credit facilitates the export and import transaction in that, once the seller ships the goods and presents the documents to the bank as required by the rules and regulations of the letter of credit, the seller is guaranteed payment (Axtell, 1994, p. 104).
An Analysis of Exchange Rate and Export Growth in India
Dr. Sadananda Prusty, Institute of Management Technology, Raj Nagar, Ghaziabad, U.P., India
Empirical evidence drawn from Fang et al. (2006) shows that depreciation encourages exports in eight Asian countries, Singapore excepted. Fang et al. (2006) use a dynamic conditional correlation bivariate GARCH-M model on monthly time-series data on bilateral exports from 1979 to 2003 to arrive at these findings. In the post-reform period, India's exports have increased at a faster rate than its GDP. Many factors appear to have contributed to export growth in India, including the depreciation of the rupee. Commerce and industry minister Nath (2007) blames the weakening dollar for the fall in growth of industrial output and exports in India, and Mahambare et al. (2007) point out that export growth in India has recently slowed compared with previous years, mainly due to the appreciation of the rupee against the dollar. These, however, are newspaper articles and statements. This research explores the post-reform long-run relationship between the exchange rate and export growth in India by using time-series tools (i.e., unit root, causality and cointegration tests). Empirical results suggest that there exists bidirectional causality between export growth and exchange rate growth. Further, Johansen's (1995) cointegration test reveals a positive and significant long-run relationship between rupee depreciation and export growth in India, thus supporting trade theory and the findings of Fang et al. (2006). After a severe balance of payments (BOP) crisis in 1991, India implemented a comprehensive package of economic reforms. The rupee was devalued against the US dollar by more than 30% in 1991, and this was followed by a managed float regime. Between 1981–82 and 2001–02, the rupee depreciated at an average annual rate of about 8% (Mallick and Marques, 2006). Trade has been extensively liberalized, and the export taxes and export promotion marketing boards that prevented free competition among exporting firms have been largely removed. 
India's openness index, defined as the sum of exports and imports with respect to GDP, went up from 16% in 1985–86 to 37% in 2002–03 (Mattoo and Stern, 2003). These important elements of the new export promotion strategy have allowed Indian exporters access to the global marketplace. Coupled with the devaluation of the rupee, the reforms taking place since 1991 have reduced the anti-export bias of Indian industry, and India has become an increasingly important player in world trade (Chopra et al., 1995). India's exports have grown much faster than Gross Domestic Product (GDP) over the past few decades. For example, its exports grew by 17.71%, 27.58% and 24.33% per annum while GDP grew by 5.57%, 4.37% and 9.40% in 1990-91, 2000-01 and 2006-07 respectively. Several factors appear to have contributed to this phenomenon, including the rupee/US dollar exchange rate, which has been depreciating consistently since 1993-94 (except in 1995-96, 2004-05 and 2005-06). The average exchange rate was Rs.30.60/USD in 1992-93 and had depreciated to Rs.45.20/USD by 2006-07. The simultaneous trade liberalisation and change of exchange rate regime included in the 1991 reforms in India motivate this research to investigate the exchange rate's long-run relationship with export growth. Moreover, India may also serve as an example to other developing countries that are trying to internationalise their economies and implement liberalising reforms. Trade theory argues that depreciation lowers the foreign currency price of exports and thus increases the quantity of exports and export revenue in domestic currency. However, depreciation will not lead to an increase in exports if export production incorporates high import content, causing an increase in the domestic cost or price of exports. During appreciation, exporters might price to market and lower their domestic currency price to maintain export market share. 
Empirical evidence is ambiguous as to the effects of the exchange rate on exports and export revenue. Some empirical studies find that devaluation increases exports for developed countries with fixed exchange rates (Junz and Rhomberg, 1973; Wilson and Takacs, 1979). Similar empirical results were also obtained with flexible exchange rates (Bahmani-Oskooee and Kara, 2003; Fang et al., 2006). In contrast, others find that appreciation does not reduce exports in some Asian countries (Athukorala, 1991; Athukorala and Menon, 1994; Abeysinghe and Yeok, 1998; Wilson and Tat, 2001). Bahmani-Oskooee and Kara (2003) and Wilson and Tat (2001) use cointegration tests to examine the effect of depreciation on exports and the trade balance. However, in the context of emerging market economies such as India, there is little, if any, evidence examining the long-run relationship between the exchange rate and export growth using monthly data and cointegration tests. This paper attempts to fill this gap and examines the following questions. Question 1 (Q1): Is there any causal link between the exchange rate and export growth in India? Question 2 (Q2): What is the correlation between the exchange rate and export growth in India? Question 3 (Q3): What is the nature of the short-run relationship between the exchange rate and export growth in India? Question 4 (Q4): Is there any long-run relationship between the exchange rate and export growth in India? This research uses monthly data for a period of 182 months (March 1992 – April 2007) for variables such as the value of exports and the exchange rate (Rs/USD). In order to examine the relationship between the exchange rate and export growth in India, the empirical analysis is carried out in four stages. In the first stage, the study conducts the Phillips-Perron unit root test to verify whether the variables have unit roots. 
This is essential to ascertain that the series concerned are non-stationary before using cointegration analysis. In order to establish the causal relationship between the variables, the pairwise Granger causality test is employed in the second stage of the analysis. The correlation matrix and the ordinary least squares method are used in the third stage in order to characterise the short-run relationship between the variables. Johansen's (1995) cointegration test is used in the final stage in order to establish the nature and degree of the long-run relationship between the variables during the period of study. Time-series theory starts by considering the generating mechanism, which should be able to generate all the statistical properties of the series, or at least the conditional mean, variance and temporal autocorrelations, i.e. the linear properties of the series conditional upon past data. A series is stationary, called I(0), denoting "integrated of order zero", when these linear properties exist and are time-invariant. Some series need to be differenced once to achieve these properties; these are called integrated of order one, denoted I(1).
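The I(0)/I(1) distinction above rests on first differencing. The toy sketch below (illustrative only; it stands in for neither the paper's Phillips-Perron test nor the Johansen procedure) shows how differencing a random walk once recovers its stationary shock series:

```python
import random

# First difference of a series: d[t] = y[t] - y[t-1].
# An I(1) series (e.g. a random walk) becomes I(0) after one differencing.
def first_difference(series):
    return [b - a for a, b in zip(series, series[1:])]

random.seed(0)
shocks = [random.gauss(0, 1) for _ in range(200)]

# Build a random walk: y[t] = y[t-1] + shock[t]  (non-stationary, I(1)).
walk = [0.0]
for e in shocks:
    walk.append(walk[-1] + e)

# Differencing the walk recovers the stationary shock series
# (up to floating-point rounding).
diff = first_difference(walk)
print(all(abs(d - e) < 1e-9 for d, e in zip(diff, shocks)))  # True
```

In the paper's setting, the exports and exchange-rate series would each be tested for a unit root; finding both I(1) is what licenses the cointegration analysis of the final stage.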
Natalie Powers, Rollins College, Winter Park, FL
Dr. Marc Fetscherin, Rollins College, Winter Park, FL
The majority of the world's televisions are produced in emerging market countries, with China and Malaysia being two of the largest producers. This has important implications for brands emanating from these countries and entering the United States market, as country-of-origin effects are known to affect consumer perceptions. This paper uses frameworks developed by Martin and Eroglu (1997) and Aaker (1993) to assess U.S. consumers' country image and brand perceptions of televisions manufactured in China and Malaysia. This is explored by exposing two groups of U.S. consumers to information and photographs about LCD televisions manufactured either in China or Malaysia. Our results yielded significant differences in country image perceptions by U.S. consumers, whereas each country's brands were not perceived differently. Our findings in this explorative study indicate that televisions as a product category are less sensitive to country-of-origin effects. This suggests positive prospects for television brands from China and Malaysia surmounting the country-of-origin effect and succeeding in the United States market. Emerging market countries are producing the majority of the world's televisions, China and Malaysia being two of the global leaders. In China, the market-oriented reforms implemented over the past 20 years have seen the country transform from a largely inefficient arrangement of industries, owned and controlled by the state, to a more open economy of reduced barriers, thriving on manufacturing and industry and emerging as a global superpower. This has made the country increasingly attractive to Foreign Direct Investment (FDI), which grossed over USD 72 billion in 2006. With its vast population, the economic reforms begun in the late 1970s have started to trickle down to its 1.3 billion people, creating a huge domestic market of consumers. 
This has driven domestic demand for durable goods, which in turn has given birth to several home-grown Chinese brands, such as TCL and Lenovo, among others. The electronics manufacturing industry has been one of China's strongest sectors, particularly television manufacturing. Before China's period of reform, a single television manufacturing plant supplied the entire country with a meager 3,800 units annually (China Daily, 2007). As of 2007, China was the world's largest manufacturer of television sets, with over 90 million units produced and about 40 million units exported (iSuppli, 2007). Chinese companies began as Original Equipment Manufacturers (OEMs); this was the case with Changhong Electric, which started selling televisions in the United States market under the brand Apex Digital. However, some of them have emerged as global players with their own brands, such as Sichuan Changhong Electric, Xiamen Overseas Chinese Electric Co. (XOCECO), and Konka Group. TCL Multimedia Technology acquired the TV manufacturing unit of the French Thomson Group in 2003, making it the world's largest TV-producing company. Malaysia, with a population of around 25 million and a small fraction of the size of China, has followed a path of development that was spurred by economic reform three decades ago. From 1971 to the late 1990s Malaysia went through an economic transformation, completely restructuring its economy and diversifying away from dependence on the export of commodities towards manufacturing in the technology industry. The country's growth since economic reform has been fueled almost exclusively by exports, which increased by 10% in 2006 to a value of USD 174.61 billion (Department of Statistics Malaysia, 2007). 
Among its key exports are electronics, which have become the largest portion of total exports and include electrical machinery and appliances, office machines and automatic data processing (ADP) machines, and telecommunications and sound equipment (TEEAM, 2007). Although Malaysia does not operate on the manufacturing scale and breadth of China, the country has performed strongly in manufacturing electrical and electronic (E&E) products, which now make up over 45% of exports and had grown to USD 7.28 billion as of the first half of 2007 (TEEAM, 2007). Television production in 2003 was nearly 10 million units, and in 2006 exports of TVs totaled USD 1.4 billion in value (TEEAM, 2007). This growth in the number of TVs produced is attributable largely to foreign companies such as Funai Electric, Sony, and Philips that have established production facilities there. The development of these countries as giants in the consumer electronics industry has important implications for their domestic companies trying to sell their own brands in developed markets such as the United States. Per capita television ownership in the United States is among the highest in the world, at 704 per 1,000 people. The Japanese television giant Sony has held the majority market share in the United States since 2003, but already in 2006 Malaysia replaced Japan as the third major source of United States electrical and electronic imports, after China and Mexico (TEEAM, 2007). This paper provides an explorative study assessing U.S. consumer perceptions of television brands from China and Malaysia and the corresponding country-of-origin (COO) effect. The following section provides a literature review of country-of-origin effects with a specific focus on electronics and the television industry. 
The third section of this paper outlines the research framework used, the fourth section presents the research method followed, and an analysis of the results follows in the fifth section. The final section provides a conclusion and discussion related to the topic studied. A great deal of research has been aimed at the informational cues that provide consumers with a means of evaluating products (Bilkey & Nes, 1982). The country-of-origin (COO) of a product is one such cue, and it has grown increasingly important as the movement towards globalization furthers the diversity of goods sourced from various countries.
Decision Factors in Global Textile and Apparel Sourcing After Quota Elimination
Dr. Kin Fan Au, The Hong Kong Polytechnic University, Hong Kong
Man Chong Wong, The Hong Kong Polytechnic University, Hong Kong
The country decision factors for global sourcing and the pattern of textile and apparel (T&A) trading were expected to change after export quota elimination in 2005, when liberalized trade came into effect. The analytic hierarchy process (AHP) approach was applied in this study to evaluate the relative importance of the devised global sourcing decision factors in the post-quota era. A total of 15 T&A trading companies were interviewed and a questionnaire survey was conducted. The data were analysed, and the results indicated the relative priority of product quality, costs, time to market and country factors in global T&A sourcing decisions in the post-quota era. Textile and apparel (T&A) manufacturing is one of the leading industries that have actively and extensively exploited the global supply chain. Through global sourcing, foreign T&A retailers and firms can acquire good-value products at competitive prices. Following the global trend, buyers in industrialized economies have increasingly sourced from lower-wage countries in order to overcome domestic supply-side constraints (e.g. labour shortages, high wages and land costs) and challenges from the international trading environment (e.g. tariffs, quota constraints and currency fluctuations) (Jin, 2004). However, with the export quantitative restrictions initiated by the Multi-fibre Arrangement (MFA) since the 1970s, the quota availability of an apparel exporting country became a determining factor in the choice of locations for attracting offshore investment and global sourcing. The simple fact is that quota entitlement represents the right of access to markets and is also the single largest costing item in the overall cost of imported apparel, usually accounting for 15-20% of the FOB price of the commodity (Christerson & Appelbaum, 1995). 
Trade statistics for the quota regime reveal that global sourcing patterns in the T&A industry were significantly related to the quota status of the supplier country (Chan & Au, 2007). For instance, the Caribbean Basin countries and Mexico significantly increased their apparel exports to the US market over the last two decades (Gereffi & Memedovic, 2003; Su et al., 2005). The spectacular surge in exports observed in these countries was attributed to their preferential trade agreements with the US, under which T&A products from these countries were entitled to quota-free access when exported to the US market. Similarly, the T&A industry of the EU also witnessed a significant shift in its global sourcing patterns: the developed countries of Western Europe delocalised their apparel production to Central and Eastern European and North African countries (Wong & Au, 2007). Like the Caribbean Basin countries and Mexico, most of these countries enjoyed quota exemption for T&A exports to the EU market. However, with the conclusion of the Agreement on Textiles and Clothing (ATC) on 1st January 2005, quota restrictions were eliminated for members of the World Trade Organization (WTO). Most WTO member countries can now export their T&A products to the United States and EU markets freely, without the previously imposed quantitative restrictions. As a result, foreign T&A retailers, manufacturers and trading firms are no longer required to divide their orders among several supplying countries, but can concentrate on those countries where they can operate best (Tait, 2002). This implies that a country's export quota availability is no longer the primary consideration in sourcing location decisions (Chan et al., 2008). Instead, foreign T&A buyers will shift their purchases to locations where the best value for end products is provided. 
In this situation, other competitive factors, such as cost, productivity, flexibility, quality, time to market, reliability, customs procedures and ethical standards, are receiving more and more attention in relation to T&A production in the post-quota era. With such a mix of factors, identifying the country decision factors for apparel global sourcing is difficult: it involves a set of qualitative and quantitative criteria and demands quantifying the overall priority of each element and evaluating the trade-offs between different requirements. In this paper, a systematic and structured method is recommended to assist in developing a solution for this complex multicriteria problem. The analytic hierarchy process (AHP) approach was applied to determine the relative weights and priorities of these new global sourcing decision factors. Data were collected by a convergent interviewing approach, with in-depth face-to-face interviews with the managers of 10 T&A trading firms. In addition, a total of 38 questionnaire surveys were collected and analysed using the AHP method. Recent studies underline the importance of several factors as determinants of global sourcing country decisions in the post-quota era (Jones, 2003; Kennedy, 2003; Slater, 2003; Tait, 2002; USITC, 2004). In general, these decision factors can be broadly classified into four major areas: (1) Costs. The major attraction for T&A firms to engage in global sourcing is cost advantage (Lee et al., 2004; Palpacuer, 2006). Cost considerations relate to the factors of production acquired, balanced against factors affecting revenue. Firms consider labour, material and shipping costs, as well as tariff levies and the availability of quotas in the selected countries (Abernathy et al., 2006). (2) Product Quality. Recently, many T&A firms have striven to compete on product quality rather than on cost.
In Nassimbeni's (2006) study, quality is emphasised as the leading selection criterion in the view of Italian firms. Product quality can be assessed through workmanship (Kennedy, 2003), technological capabilities (Jones, 2003) and value-added services, including vertical integration capabilities, reliability and trusted relationships. (3) Time to Market. Since most T&A products are seasonal, time to market is also regarded as a critical determinant in sourcing decisions (Kennedy, 2003). Moreover, the widespread adoption of 'lean retailing' in the retail sector implies that the supply of fashion garments is continuously being adjusted to changing consumer tastes. This requires more frequent re-ordering of garment items in smaller quantities, as opposed to the traditional stocking of the store before the season and clearance sales at its end (Mayer, 2004). In this connection, geographical proximity and transportation links to suppliers and markets also influence the global sourcing country decision-making process. (4) Country Factors. These can be classified into two main categories, namely the country's internal and external factors: infrastructure (Jones, 2003) and ethical issues (Berthiaume, 2006; Pretious & Love, 2006) are regarded as internal factors, while political & economic stability (Jones, 2003), the import quotas/tariffs of the world's major markets (Abernathy et al., 2006) and social & cultural differences (MacCarthy & Atthirawong, 2003) are treated as external factors.
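To illustrate how the AHP turns pairwise judgements over such decision areas into priority weights, the following minimal sketch derives weights for the four areas using the geometric-mean method. The pairwise scores (on Saaty's 1-9 scale) are hypothetical, invented for illustration, and are not the paper's actual survey data.

```python
import math

def ahp_priorities(matrix):
    """Derive priority weights from a pairwise comparison matrix
    using the geometric-mean (logarithmic least squares) method:
    take the geometric mean of each row, then normalize to sum 1."""
    n = len(matrix)
    geo = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(geo)
    return [g / total for g in geo]

# Hypothetical pairwise judgements for the four decision areas.
# matrix[i][j] > 1 means criterion i is preferred over criterion j.
criteria = ["product quality", "cost", "time to market", "country factors"]
A = [
    [1,     2,     3,     5],
    [1/2,   1,     2,     4],
    [1/3,   1/2,   1,     3],
    [1/5,   1/4,   1/3,   1],
]

weights = ahp_priorities(A)
for c, w in zip(criteria, weights):
    print(f"{c}: {w:.3f}")
```

With these invented judgements, quality receives the largest weight and country factors the smallest; in the actual study the weights would come from the aggregated questionnaire responses, and a consistency ratio check would normally accompany them.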
Forecasting Apparel Exports of Selected East Asian Countries after Quota Phase Out
Chan Man Hin Eve, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong
Dr. Kin Fan Au, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong
Dr. Ka Fai Choi, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong
The ending of the Multi-Fibre Arrangement (MFA) and the demise of export quotas were expected to have significant economic consequences for the global apparel industry. Commentators routinely argued that Chinese apparel exporters would surge in the apparel market. It was also expected that other East Asian apparel nations would suffer declines, leading to job and economic losses. This paper presents the predicted trends of the apparel sector and focuses specifically on China and other East Asian suppliers, namely Hong Kong, South Korea and Taiwan. The double exponential smoothing method is used to produce a three-year forecast of the future scenarios of this sector in the selected countries. The results show that China's gains outstrip those of the other East Asian exporters by a considerable margin. It is hoped that the results obtained will serve as references for industry and government in their continuous promotion and development under the changing market circumstances. World apparel and textile markets were protected from competition by quotas under the Multi-Fibre Arrangement (MFA) for decades. These quotas not only placed an overall limit on the growth of imports from restricted suppliers, but also fixed market shares between suppliers through country-specific quota systems (Nordas, 2004). In the heyday of the MFA, the exports of East Asian countries considered competitive in the apparel sector were determined by the developed countries, until the Uruguay Round under the WTO in 1995 agreed that quotas would be phased out over a ten-year timeline. In 2005 the quota system came to an end and apparel trade became liberalized. This was heralded as a new start for apparel trade, and suppliers were forecast to increase their market shares significantly. There is a general belief that some key Asian apparel suppliers would see substantial changes in exports in the quota-free environment.
Of particular concern was that China, as a highly competitive producer, would surge in the world market; other East Asian suppliers, on the contrary, might suffer declines. The apparel industry is central to the global economy: its export value amounted to US$311 bn in 2006 (WTO, 2006). It has played an important role in Asia, initially in Hong Kong, South Korea and Taiwan, and more recently in China. The sector has created millions of jobs and contributed to economic growth; in China and Hong Kong especially, apparel trade accounts for 10% and 9% respectively of total industrial goods exports. Hence, their export earnings depend on apparel production to a relatively high degree. It is believed that the elimination of quotas benefited East Asian producers, since they are able to provide value chains and full-package production (Gereffi et al., 2005). Notably, China has been the most dominant export leader since liberalization. On the other hand, Hong Kong has lost its competitiveness in making medium-priced apparel products, as its production costs are too high; the industry should therefore concentrate on developing the high-end market internationally. South Korea and Taiwan continue to retain small but still significant exports of relatively high-value and niche apparel items in which quality, product development, timely delivery and related services are at a premium. In an effort to better understand the dynamics of global apparel exports, this paper predicts the future scenarios of East Asian suppliers after trade liberalization. It examines specifically China and the other East Asian exporters, including Hong Kong, South Korea and Taiwan. These countries are considered because China is the largest exporter, while the three East Asian suppliers were the initial suppliers back in the 1980s and still maintain significant exports of apparel products to the world.
This paper focuses on apparel because the quotas under the Agreement on Textiles and Clothing (ATC) were generally most restrictive for apparel, which is relevant to consumers, while textile imports are destined for industry. Since the 1990s, the apparel sector has grown faster than textiles, and in 2006 it represented 60% of global apparel and textile trade (WTO, 2006). Using data from after the MFA regime, covering 1995 to 2007, the double exponential smoothing method is applied to make a three-year forecast (2008-2010). The paper also analyses the competitive advantages of the selected countries, as well as the initiatives taken by their governments to support the apparel sector, since diversity between nations reflects different environmental conditions, which in turn affect the strategies and challenges facing this industry. Finally, recommendations are proposed for specific countries in this free world of apparel trading. Quantitative data were taken into account in the forecasting process. The forecasts are based on the historical performance of the selected East Asian countries over the period 1995 to 2007. Data for the years 1995 to 2006 are obtained from http://comtrade.un.org. China's export values to the world in 2007 are obtained from http://www.customs.gov.cn. The export values in 2007 for Hong Kong, South Korea and Taiwan are sourced from http://www.censtatd.gov.hk, http://www.kosis.kr and http://www.moea.gov.tw respectively. Double exponential smoothing is a robust approach to forecasting that depends on the presence of a trend. It is widely used in business for forecasting demand for inventories (Gardner, 1985) and has also performed well in forecasting competitions against more sophisticated approaches (Makridakis & Hibon, 2000).
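A minimal sketch of double exponential smoothing in Holt's level-plus-trend form, the kind of model the paper applies: the smoothed level tracks the series, the smoothed trend tracks its slope, and the forecast extrapolates both. The export series below is illustrative only, not the actual Comtrade figures, and the smoothing constants are assumed values rather than the authors' fitted parameters.

```python
def holt_forecast(series, alpha=0.5, beta=0.3, horizon=3):
    """Holt's double exponential smoothing.
    alpha smooths the level, beta smooths the trend;
    the h-step-ahead forecast is level + h * trend."""
    level = series[0]
    trend = series[1] - series[0]  # simple initial trend estimate
    for y in series[1:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return [level + (h + 1) * trend for h in range(horizon)]

# Illustrative annual export values (US$ bn), 13 observations for 1995-2007
exports = [24, 26, 29, 33, 36, 41, 47, 52, 61, 74, 84, 95, 115]
forecast = holt_forecast(exports, horizon=3)  # three-year forecast, e.g. 2008-2010
print([round(f, 1) for f in forecast])
```

Because the trend component is carried forward unchanged, the method projects a straight line from the last smoothed state; for a series with a clear upward trend the forecasts continue to rise, which matches the paper's description of the approach as depending on the presence of trend.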
Carrying out a Business Development Project: An Empirical Study on Methods and Stages of the Process
Dr. Katri Ojasalo, Laurea University of Applied Sciences, Espoo, Finland
Major changes in the global economy, in social structures and in business have brought significant challenges for the development of business studies. As the environmental pressure to reform business models and to be innovative in business increases, it is important that students learn to be developers. However, students are usually still taught only traditional scientific research methods and processes instead of development methods and processes. The purpose of this study is to increase knowledge of the process-related and methodological aspects of business development projects. This study examines 15 real-life business development cases carried out by master's students. Based on the findings, a systematic business development process is suggested in this article. The article begins with a brief look at changes in the business world and at how education can contribute to new kinds of innovative business competence. Differences between scientific research and development projects are also briefly discussed. After the introduction to the background of the study, the purpose and the method of the study are described. In the findings, the systematic development process is introduced. Then, the final conclusions are drawn. The technologies, organizational models, and market and demand structures of the global economy have experienced thorough changes. Increased globalization, tougher competition, the rapid development of information and communication technology, the shortening life cycles of products and services, and increasing customer demands are features of the economic environment. As a consequence, innovativeness has become a major competitive factor for companies, networks and regions. As society shifts from the first, technology-focused stage of the information society to the second phase, which is based on customer needs, customer orientation will become another big competitive advantage.
The customer-oriented innovation of products and services directs companies’ operations and marks their core processes. The society and business operations are increasingly linked to possessing and managing information. The amount of information is growing so fast that companies need new ways of thinking. This creates completely new opportunities, so renewal is very important for existing businesses. Value chains no longer consist just of flows of physical goods; instead such chains are accompanied by information flows within and between companies. The functions of value chains are being reorganized, breaking down traditional, simple and consequential divisions. Companies must be able to operate in real time and according to the demands of the business environment, so instead of hierarchical structures they are increasingly adopting dynamic internal and external networks. More and more organizations consist of groupings of project teams brought together for various tasks – teams of varying sizes, configurations and durations. In consequence, competitiveness is fostered by developing the organization’s competences, by purposefully building new competences and by influencing whole value-generating systems (e.g. Kivelä and Ojasalo, 2007; Hämäläinen, 2006; Ruokanen, 2006; FinnSight 2015-report). According to contemporary views, business competence is the ability to establish a company’s operations in its operating environment proactively, anticipating changes in the operating environment. It is also the ability to develop the leadership and earnings principles that give the company a competitive edge in its environment, taking into account the company’s strategic success factors. Also necessary is the capacity for building and leading networks and processes with partners, aiming to attain shared objectives (Näsi and Neilimo, 2006). 
It is important that a business management curriculum aims to produce broad business competence, as well as the abilities required by a rapidly changing operating environment: innovativeness and capabilities related to information management, foresight and change management. The idea is that graduates in business management should have strong general competence in business and be able to apply it to developing and reforming the operations of various organisations. Expertise in the field relates increasingly to redefining entire value-generating systems. Value generated for customers is examined from the perspective of a broad selection of services produced by an entire network, in which innovations are increasingly important (Kivelä and Ojasalo, 2007). In developing business studies, the main challenge lies not only in renewing the outcomes and contents of teaching to respond better to the future needs of the business environment; the changes in the environment are so large that the curriculum should also find new ways to facilitate the continuous development of the region and of business through research and development. The aim of the curriculum is therefore to create flexible conditions for the content of learning to arise from practical activity, rather than activity receiving its content from the curriculum (Kivelä and Ojasalo, 2007). If a curriculum is to produce new kinds of holistic business competence, learning must be built around extensive entities that correspond to reality and are tied to a genuine business environment (cf. Aurand, DeMoranville and Gordon, 2001). Development projects and the business context should be included in the learning process from the very start of the studies. Students use projects to practise tasks related to the substance of the profession; knowledge is built through learning by developing and through the critical evaluation of activities.
Learning in projects requires an active approach, commitment, a combination of theory and practice, collaboration, the sharing of expertise in teams, problem-solving skills and reflection. During the projects, students learn about things in the contexts in which the knowledge will later be used. Students encounter problems that have not been predefined, and have to face challenges bravely and work in a self-directed way. Proper project work requires flexibility from the curriculum. Students should have the opportunity to work on long processes, in which it is essential to understand things holistically and to identify and solve problems (cf. e.g. Vesterinen, 2001). During project work, students build networks with many partners. The most intensive collaborations can lead to the creation of a networked school (cf. the 'networked business school', Moratis and van Baalen, 2002) and learning alliances (cf. Makri, 1999). When permanent partnerships are built, companies become learning partners: they achieve organisational learning at the same time as the students learn by solving their problems (cf. Thos, 2002). One of the main benefits of the networked school is its ability to react quickly to changes in the environment (Helakorpi, 2001). Increased collaboration between education and business also creates good conditions for increasing entrepreneurship (Leitch and Harrison, 1999). The social networks created during studies are essential from the point of view of creating new entrepreneurship: while working in these networks, students identify new opportunities, and the threshold for starting out as entrepreneurs is lowered.
Developmental Challenges in Executive Information System (EIS) for the Education Sector in Pakistan
Dr. Roshan Shaikh, IQRA University, Karachi, Pakistan
Syed Rashid Ali, IQRA University, Karachi, Pakistan
Educational Executive Information Systems (EEIS) are a topical manifestation of computer-based information systems, intended to provide educational executives with the intelligence they require to make strategic decisions. In recent years a number of organizations have implemented Executive Information Systems (EIS) in order to improve their business policy, planning and monitoring functions. This paper examines the adoption and usage of EEIS in the education sector in Pakistan. An indigenous model is presented, reflecting the proposed EEIS, that includes protocols, specifications, algorithms and interfaces, with special consideration given to catering for security issues. In order to identify the most critical design factors for a successful EEIS, a focus group study was conducted. The preliminary results suggest that there is a dire need for a comprehensive EEIS, preferably offered as an outsourced service. The study has led us to believe that the most critical implementation issues are not funding challenges alone: in a successful implementation of an EEIS, the challenges include technical expertise, departmental and organizational culture, strategic framework, operational priorities, HR, and administration policies. EIS describes systems that are used by senior executives on a regular basis to access strategic information for policy and planning purposes. EIS is a special class of network-based applications which meet specific scaling requirements of database and data warehousing functions. EIS is defined as a computerised system that provides executives with easy on-line access to internal and external information relevant to their critical business success factors (Rainer and Watson, 1995). The aim of an EIS is to bring together contextually relevant information from external sources with the indigenous data from all parts of an organization, and to present it in a way that is meaningful to executive users.
The motivation for this work is the limited research examining EEIS and their implementation issues in the education sector in Pakistan. This work attempts to address the following key questions: Why is an EEIS necessary for the education sector in Pakistan? What is the current status of EEIS in the education sector of Pakistan? How should the EEIS model be developed? What are the implementation issues of the proposed EEIS in the education sector of Pakistan? The primary purpose of the proposed EEIS is to support managerial operations and learning of the organization, its work processes, and its interaction with the external environment. Typically, an EEIS will be helpful in the following areas: timely information; more efficient reporting systems tailored to executives' information needs; rapid status updates and better mechanisms to support decision making; access to a broad range of internal and external data and aggregated reports; extensive on-line analytical tools, including trend analysis, exception reporting and "drill-down" capability; and presentation of information in graphical form. In Pakistan's education sector, such a system would support, among other functions: the development and coordination of international, national and provincial education funding policies, plans and programmes with respect to requirements, interventions and the monitoring framework; the development of curricula and textbooks; increasing gross primary, middle, secondary and higher secondary school enrolment for boys and girls; providing Human Resource Management (HRM) and Supply Chain Management (SCM) services; the rehabilitation of primary, middle, secondary and higher secondary schools; allocating financial, human and material resources to meet effectiveness and efficiency criteria; and the development of policies for teachers' training, and much more. Since the proposed EEIS model should cater for the specific requirements of the educational sector, it became necessary to collect relevant data to facilitate the design process.
Figure 1 depicts the proposed basic general model of an organizational EEIS. It shows that management collects internal transactional data and synthesizes it with data from external sources, retrieved and stored in the organizational database. Information obtained from these two sources can then be transformed and/or manipulated into EEIS formats that facilitate the development of management reports and answers to queries. The proposed EEIS model makes the following assumptions: (1) the model works when implemented, operated and maintained mostly by a neutral third party, unbiased by cultural and non-technical internal influences; (2) the validation and certification of the proposed EEIS model has to come from top management in collaboration with the technical consultants. Internal Transaction Data: internal transaction data are received and generated by the different departments of the education sector, such as planning & policies, HRM, SCM and finance. External Data: external data are the data and results generated outside the education sector, such as government policies, market surveys, international donor agencies and NGOs. Management Subset: this involves the modification and improvement of the external and internal data to match the information required by executive users. Data Transformation: data transformation involves the processes of selecting, targeting, converting and mapping data so that they may be used by multiple systems.
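To make the data flow in this model concrete, the following minimal sketch aggregates internal transaction data, combines it with an external data source and produces the kind of summary a management report or drill-down query would draw on. The district names, record fields and target figures are hypothetical placeholders, not drawn from the study.

```python
from collections import defaultdict

# Hypothetical internal transaction records from education departments
internal = [
    {"district": "Karachi", "level": "primary",   "enrolment": 1200},
    {"district": "Karachi", "level": "secondary", "enrolment": 800},
    {"district": "Lahore",  "level": "primary",   "enrolment": 1500},
]

# Hypothetical external data, e.g. donor-agency enrolment targets
external_targets = {"Karachi": 2500, "Lahore": 1400}

def transform(records, targets):
    """Data transformation step: aggregate internal transactions per
    district, then map each total against the external target so the
    result is ready for an EEIS-style management report."""
    totals = defaultdict(int)
    for r in records:
        totals[r["district"]] += r["enrolment"]
    report = {}
    for district, total in totals.items():
        target = targets.get(district)
        report[district] = {
            "enrolment": total,
            "target": target,
            "met": target is not None and total >= target,
        }
    return report

report = transform(internal, external_targets)
print(report)
```

The same pattern extends naturally to the other internal sources the model names (HRM, SCM, financials): each feeds transaction records into the aggregation step, and the management subset decides which aggregates and external comparisons reach the executive view.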
Constructing Factor Indices: A New Approach
Sirli Mandmaa, Tallinn University of Technology, Tallinn, Estonia
Dr. Jaan Vainu, Tallinn University of Technology, Tallinn, Estonia
The theory of indices is one of the youngest branches of statistics. The theory of aggregate indices, which is presently widely used in statistics, was developed in 1871 by Etienne Laspeyres, professor at the University of Tartu, and by Hermann Paasche, a German economist and, from 1879, a professor at Aachen University of Technology. Since then, indices have been widely used in economic analyses, and the fundamentals of their construction have remained unchanged: the factors included in an aggregate fall into qualitative and quantitative ones, and the period (current or base) used in the index formula for the invariable factor is determined by the character of the variable factor. Such a fixed order of the change of factors is of course not scientifically substantiated; however, otherwise it would not be possible to make cost, price and physical quantity form a system. This creates another problem: the division of absolute increases. If we have to divide a cost increase between the change in price and the change in quantity, price (a qualitative factor) will receive a larger portion than it actually contributed, and quantity (a quantitative factor) a smaller one. Several methods have been suggested for overcoming this problem, but these have not been applied in practice. No clear consensus has emerged on who created the first price index. The earliest reported research in this area came from the Englishman Rice Vaughan, who examined price level change in his 1675 book "A Discourse of Coin and Coinage". Vaughan wanted to separate the inflationary impact of the influx of precious metals brought by Spain from the New World from the effect due to currency debasement. Vaughan compared labor statutes from his own time to similar statutes dating back to Edward III. These statutes set wages for certain tasks and provided a good record of the change in wage levels.
Vaughan reasoned that the market for basic labor did not fluctuate much with time and that a basic laborer's salary would probably buy the same amount of goods in different time periods, so that a laborer's salary acted as a basket of goods. Vaughan's analysis indicated that price levels in England had risen six- to eight-fold over the preceding century (Chance, 1966). While Vaughan can be considered a forerunner of price index research, his analysis did not actually involve calculating an index. In 1707 Vaughan's fellow Englishman William Fleetwood probably created the first true price index. An Oxford student asked Fleetwood to help show how prices had changed. The student stood to lose his fellowship, since a fifteenth-century stipulation barred students with annual incomes over five pounds from receiving a fellowship. Fleetwood, who already had an interest in price change, had collected a large amount of price data going back hundreds of years. Fleetwood proposed an index consisting of averaged price relatives and used his methods to show that the value of five pounds had changed greatly over the course of 260 years. He argued on behalf of the Oxford students and published his findings anonymously in a volume entitled Chronicon Preciosum (Chance, 1966). A great number of different formulas, at least hundreds, have been proposed as means of calculating price indexes. While price index formulas all use price and quantity data, they amalgamate these data in different ways. A price index generally aggregates using various combinations of base-period prices (p0), later-period prices (pt), base-period quantities (q0), and later-period quantities (qt). Price index formulas can be framed as comparing expenditures (an expenditure is a price times a quantity) or as taking a weighted average of price relatives (pt / p0). Unweighted indexes: unweighted price indexes, or elementary price indexes, only compare prices between two periods.
They do not make any use of quantities or expenditure weights. These indexes are called "elementary" because they are often used at lower levels of aggregation for more comprehensive price indexes (PPI Manual, 2004). At these lower levels, weights do not matter since only one type of good is being aggregated. Carli: developed in 1764 by the Italian economist Carli, this formula is the arithmetic average of the price relatives between a period t and a base period 0: $P_C = \frac{1}{n}\sum_{i=1}^{n}\frac{p_t^i}{p_0^i}$ (1). Dutot: in 1738 the French economist Dutot proposed an index calculated by dividing the average price in period t by the average price in period 0: $P_D = \frac{\frac{1}{n}\sum_{i} p_t^i}{\frac{1}{n}\sum_{i} p_0^i}$ (2). Jevons: the English economist Jevons proposed taking the geometric average of the price relatives of period t and base period 0 (PPI Manual, 2004): $P_J = \prod_{i=1}^{n}\left(\frac{p_t^i}{p_0^i}\right)^{1/n}$ (3). When used as an elementary aggregate, the Jevons index is considered a constant elasticity of substitution index since it allows for product substitution between time periods (PPI Manual, 2004). Harmonic mean of price relatives: the harmonic average counterpart to the Carli index, proposed by Jevons in 1865 and by Coggeshall in 1887 (PPI Manual, 2004): $P_{HR} = \left(\frac{1}{n}\sum_{i=1}^{n}\left(\frac{p_t^i}{p_0^i}\right)^{-1}\right)^{-1}$ (4). Carruthers, Sellwood, Ward, Dalén index: the geometric mean of the Carli and the harmonic price indexes: $P_{CSWD} = \sqrt{P_C \cdot P_{HR}}$ (5). In 1922 Fisher wrote that this and the Jevons were the two best unweighted indexes, based on his test approach to index number theory (PPI Manual, 2004). Ratio of harmonic means: the ratio of harmonic means or "harmonic means" price index is the harmonic average counterpart to the Dutot index (PPI Manual, 2004): $P_{RH} = \left(\frac{1}{n}\sum_{i}(p_t^i)^{-1}\right)^{-1} / \left(\frac{1}{n}\sum_{i}(p_0^i)^{-1}\right)^{-1}$ (6). Laspeyres index: the most commonly used index formula is the Laspeyres index, $P_L = \frac{\sum_{i} p_t^i q_0^i}{\sum_{i} p_0^i q_0^i}$, which measures the change in the cost of purchasing the same basket of goods and services in the current period as was purchased in a specified base period. The prices are weighted by quantities in the base period.
Paasche index: The Paasche index compares the cost of purchasing the current basket of goods and services with the cost of purchasing the same basket in an earlier period. The prices are weighted by the quantities of the current period. This means that each time the index is calculated, the weights are different. Formulae with changing weights, such as the Paasche Index, involve the collection of substantial additional data, since information on current expenditure patterns, as well as prices, must be obtained continuously. Usually, the time taken to process current expenditure data and to derive revised weights precludes the preparation of a timely Paasche index. Because of the difference in the weightings used for the Laspeyres and Paasche indexes (Laspeyres uses base period weights, Paasche uses current period weights), the two indexes will produce different results for the same period.
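The index formulas discussed above are easy to check numerically. The sketch below implements the Carli, Dutot and Jevons elementary indexes alongside the weighted Laspeyres and Paasche indexes for a small basket; the prices and quantities are invented purely for illustration.

```python
import math

def carli(p0, pt):
    """Carli: arithmetic mean of the price relatives pt/p0."""
    return sum(b / a for a, b in zip(p0, pt)) / len(p0)

def dutot(p0, pt):
    """Dutot: ratio of average prices (equals ratio of price sums)."""
    return sum(pt) / sum(p0)

def jevons(p0, pt):
    """Jevons: geometric mean of the price relatives."""
    return math.prod(b / a for a, b in zip(p0, pt)) ** (1 / len(p0))

def laspeyres(p0, pt, q0):
    """Laspeyres: current vs base cost of the BASE-period basket q0."""
    return sum(p * q for p, q in zip(pt, q0)) / sum(p * q for p, q in zip(p0, q0))

def paasche(p0, pt, qt):
    """Paasche: current vs base cost of the CURRENT-period basket qt."""
    return sum(p * q for p, q in zip(pt, qt)) / sum(p * q for p, q in zip(p0, qt))

# Hypothetical three-item basket: base and current prices and quantities
p0, pt = [10.0, 4.0, 25.0], [12.0, 5.0, 24.0]
q0, qt = [100, 50, 20], [90, 70, 25]

print(f"Carli {carli(p0, pt):.4f}  Dutot {dutot(p0, pt):.4f}  Jevons {jevons(p0, pt):.4f}")
print(f"Laspeyres {laspeyres(p0, pt, q0):.4f}  Paasche {paasche(p0, pt, qt):.4f}")
```

Two properties worth noting follow directly from the definitions: the Jevons index never exceeds the Carli index (the geometric mean is at most the arithmetic mean), and the Laspeyres and Paasche results generally differ because they weight by different baskets; the Fisher index, the geometric mean of the two, is often taken as a compromise between them.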
Management of Projects Financed by EU Programs in Croatia
Ana Bulic, University of Zagreb, Croatia
Maja Klindzic, University of Zagreb, Croatia
The paper deals with integrations as modern tendencies in the global economy. Specifically, the focus is on the integration of the European countries or, to be more exact, on the instruments of pre-accession help to potential candidates. Today, the EU consists of 27 member states, and there are three candidate countries, Croatia being one of them. Candidate countries are granted pre-accession funds enabling them to catch up with the member states to some extent. The main purpose of this paper is to give an insight into how projects financed from the EU pre-accession funds are managed in Croatia. More than ever, we have become dependent on each other. It has become clear that not a single state is self-sufficient and can therefore function as an island unto itself. The interdependency between states results from a well-known phenomenon that shapes our lives, a phenomenon called globalization. "Globalization is not just another phenomenon or a passing trend. It is an international system which overhangs and shapes both the domestic and foreign affairs of almost all countries in the world. As such, we should comprehend it and embrace it" (Jovancevic, 2005). One of the most far-reaching consequences of globalization is probably the proliferation of integration between countries in the world in general. The European Union, as probably the best-known example of an economic supranational organization, consisting of 27 member states, was the starting point for this paper. The European Union is not the only example of such integration. There is a great spectrum of organizations that became aware of the advantages that this kind of integration can (and did) bring to them. One should mention the OECD (Organisation for Economic Co-operation and Development), consisting of the world's most developed countries, whose founders were 18 European states, the United States and Canada.
The OECD brings together the governments of countries committed to democracy and the market economy from all over the world. Its goals are as follows: (1) to support sustainable economic growth, (2) to boost employment, (3) to raise living standards and maintain financial stability, (4) to assist other countries' economic development, and (5) to contribute to growth in world trade. NAFTA (the North American Free Trade Agreement) is another example. Under NAFTA, all non-tariff barriers to agricultural trade between the United States and Mexico were eliminated. In addition, many tariffs were eliminated immediately, with others being phased out over a period of 5 to 15 years. This allowed for an orderly adjustment to free trade with Mexico, with full implementation beginning on January 1, 2008. If we move away from both Americas and Europe, we can find more examples in other parts of the world. ASEAN (the Association of Southeast Asian Nations) gathers 10 Southeast Asian countries with a population of around 500 million. The ASEAN Declaration states that the aims and purposes of the Association are: (1) to accelerate economic growth, social progress and cultural development in the region and (2) to promote regional peace and stability through abiding respect for justice and the rule of law in relationships among countries in the region and adherence to the principles of the United Nations Charter. In Africa, there are also several examples of integration, such as the AMU (Arab Maghreb Union) and CEMAC (Central African Economic and Monetary Community), both of which aim to achieve full economic union, or COMESA (Common Market for Eastern and Southern Africa), with the specific objective of achieving a common market.
(4) The previously mentioned examples show that the benefits enjoyed by the member countries of an economic integration can be summarized in the following way: the general opinion in economic theory is that economic integrations provide better resource allocation within an integrated area, which increases the prosperity of all the integrated nations (Jovancevic, 2005). The integration of European countries has a long history. On 25 March 1957, two treaties were signed in Rome that gave birth to the European Economic Community (EEC) and to the European Atomic Energy Community (Euratom): the Treaties of Rome. The signatories of the historic agreement were France, the Netherlands, Belgium, Luxembourg, Italy and the Federal Republic of Germany. The Treaty establishing the EEC affirmed in its preamble that the signatory states were "determined to lay the foundations of an ever closer union among the peoples of Europe" (O'Leary, 2007). In this way, the Member States specifically affirmed the political objective of progressive political integration. Ever since 1957 the community has continued to grow, and the European Union was established in Maastricht in 1992. Today, the EU consists of 27 member states, and three more countries have been granted candidate status, including Croatia. During the accession period, a candidate country is obliged to meet certain prerequisites, which will not be discussed in this paper. Potential candidates and candidate countries are granted help through pre-accession funds that enable them to catch up with the developed European countries. The main purpose of this paper is to give an insight into how the projects financed from the EU programs are managed.
Social Intelligence and Project Leadership
Velimir Srica, University of Zagreb, Croatia
Some ten years ago I was involved in a research study which came to a relatively disturbing conclusion: 80% of projects fail not because we did not know how, but because of a lack of social intelligence and personal skills, i.e. poor leadership, bad teamwork, inadequate communication, inability to resolve conflicts, etc. In other words, in projects we rarely fail for lack of professional skills and knowledge; most often we fail as humans. This conclusion is a starting point for discussing the human side of project management in order to make it better suited for success within a project environment. Most project managers come from a technical background and exhibit an engineer's mentality. In principle they are accustomed to an organized, predictable, logical, well-structured, detailed and standardized environment governed by objective rules and controllable variables. No wonder they tend to apply the same logic to the fuzzy, disorganized, unpredictable, intuitive, emotional and subjective world of human interaction. It works with technology but will not work in most situations involving people. Instead of an ideal paper-based system which does not correspond with reality, a flexible and dynamic system is needed, one which adjusts, grows and develops like an organism. A fairly functioning system is always better than a perfect system which does not function. Whenever applying complex and highly standardized project management frameworks, we must remember that plans, standards, methodologies and software are not the goals; they are just means. Success or failure depends almost entirely on the human side of any project. As a Columbia University MBA student, decades ago, I came across the following poem: IN BROKEN IMAGES: He is quick, thinking in clear images. I am slow, thinking in broken images. He becomes dull, trusting his clear images. I go on, mistrusting my broken images. Trusting his images, he assumes their relevance. 
Mistrusting my images, I question their relevance. Assuming their relevance, he assumes the fact. Mistrusting their relevance, I question the fact. When the facts fail him he questions his senses. When the facts fail me I turn back to my senses. He continues, quick and dull, through his clear images. I continue, slow and sharp, through my broken images. He in a new confusion of his understanding. I in a new understanding of my confusion. We are living in a world of confusion. Project Management is no exception. We are living in a world of scientific clarification. Project Management is no exception. We are living in a world of growing complexity. Project Management is no exception. We are living in a world of successful simplification. Project Management is no exception. We are living in a world of permanent crises. Project Management is no exception. We are living in a world of constant improvement. Project Management is no exception. The improvement, simplification and clarification are the outcomes of best practices, standards, research and overall project management development. On the other hand, confusion, crisis and complexity can be attributed to our mental programming or, more simply, to our attitudes, value systems, educational tools and problem-solving approaches, which are still sometimes inadequate. Thomas Edison used to say that if you cannot solve a problem, you must change it. You must redefine it, see it differently, and maybe then you will be able to solve it. However, we are accustomed, educated, mentally programmed and trained to look at problems pretty much the way everybody else does, based on prevailing knowledge-science paradigms, to use a term coined by Thomas Kuhn (1962) in his famous book on the structure of scientific revolutions. One of the key problems lies in the fact that we enjoy seeing things clearly. We like our (project management) world to be structured, organized, rational and predictable. 
The reality, on the other hand, seems to be quite different. More often than not, our clear images fail us and we end up confused. In recent research at the School of Economics and Business (2007), we concluded that our alumni apply, on average, a little over 10% of the knowledge and skills they learned while studying for their college-level degree. The remaining 90% was acquired outside the formal educational system. I am fond of one of the HRM hypes: hire for attitude, train for skills! In principle, it means that knowledge and skills are of secondary importance. It also means that we are in growing need of people with attitude. However, in the project management environment, we are still mostly concentrated on the less important things: the standards, methodologies and approaches, bodies of knowledge and necessary skills. How about the primary thing, the attitudes? It reminds me of the typical approach to educating modern (project) managers, the work my colleagues and I do as professors of management. We are producers of MBAs. What is a Master of Business Administration? By the very name of the degree, it is a person trained to administer an existing business system or to run a well-structured project. Traditionally, MBA education is aimed at producing people with a set of specific skills; allow me to metaphorically call them ISO-guys (the people engaged in introducing quality standards to an organization): they are analytical, pragmatic, rational, structured and organized. They are accustomed and trained to search for clear images; to plan, organize, control and evaluate their projects in a well-structured and standardized way. 
Imagine, for the sake of argument, another approach, which could be named MBI (Master of Business Innovation): an attempt to train the anti-ISO-guys, helping them develop an attitude of leadership, creativity, reengineering, innovation, harmony, social and emotional intelligence, multiculturalism… Such a program is in its experimental phase at the University of Zagreb, Faculty of Economics and Management, and I am in charge of its development and implementation. Project management methodology is MBA-like, not MBI-like; in principle, it stays focused on building a framework for improving our professional knowledge and skills, leaving our attitudes and value systems aside. What are the attitudes and values which seem to contribute significantly to successful project management?
Executive Coaching in a Family Business Environment
Leon Levin, Gil Bozer, Monash University, Melbourne, Australia
Dr. Hartel Charmine, Monash University, Melbourne, Australia
Within the traditional business organizational climate in which an executive coach operates, the identity of the coachee can be quite clearly differentiated from the business identity. This is not the case within the world of family business, where the founder, the successor, the business, and the family culture are interwoven. This unique feature of family business means that for executive coaching to be effective within the family business environment, a radically different approach to that used in traditional business environments must be adopted, namely the consideration of what are generally thought of as non-business variables. This paper makes a first attempt to address the key and unique variables executive coaches need to be aware of to work effectively within the family business environment. The foundation stone upon which this paper is predicated is the fact that in most, if not all, evolving economies the influence of family businesses is extremely important. Gersick et al (1997) and Barnett et al (2006) acknowledge that family businesses are perhaps the dominant form of enterprise worldwide, as more than two of every three organizations are family owned and/or managed. Lee (2006) agrees, saying "…the proportion constituted by global business enterprises that are owned or managed by families is estimated conservatively to be between 65%–80%. In the United States, approximately 50% of the gross national product is generated by family businesses…the proportion of family firms in the United Kingdom and in the European Union is estimated to be 75% and 85%, respectively" (p.175). We are reaching a period where many family business owners/founders are facing a crisis, because many are baby boomers and as such are approaching retirement. 
To illustrate, a Canadian study undertaken by the University of Waterloo (1999) reveals that in the coming years Canada's family business leaders will be retiring in significant numbers: 27% in the next 5 years, 29% in 6 to 10 years, and a further 22% in 11 to 15 years, leading to a potential succession crisis. Ip et al (2006) underscore the challenges succession presents for family businesses with respect to survivability, citing that only 5% to 15% of European family businesses reach the third generation, and that 30% of closures may be considered transfer failures. This crisis situation requires an intervention that assists family business founders in successfully traversing the unique challenges family business poses. We suggest in this paper that executive coaching may provide the necessary vehicle to undertake this task. Nonetheless, the special case family business represents requires a radical reconceptualization of the role, skills and scope of the executive coach. We advance such a reconceptualization beginning with a definition of family business and coaching and the unique issues family business poses for coaches, followed by a presentation of our framework for executive coaching within the family business environment. The scope of what constitutes a family business is as broad and diverse as the range of businesses that families are involved in. Gaining a clear understanding of what a family business is largely determines the nature of the research, and it is a critical factor in the interplay with an external executive coach. Klein et al (2005) noted that, depending on the definition of a family business applied, the variance can be between 15% and 81%. 
Westhead and Cowling (1998) supported the "grayness" of definition when they observed that in a study of 427 firms, 78.5% would be defined as family businesses based upon one definition, whereas when a more restrictive definition was applied, only 15% of the very same firms in their sample were classified as family businesses. Smyrnios et al (2003) reported that there were over 20 definitions of what constitutes a family business, in itself a challenging conundrum, but even more so when attempting to understand the key family/business drivers that are critical to an effective and targeted executive coaching intervention. This diversity of definitions suggests that researchers have struggled to establish a clear and concise definitional framework of what constitutes a family business. However, in an early study by Davis (1983), an understanding of the uniqueness of a family business was established as the "…interaction between two sets of organizations, family and business, that establishes the basic character of the family business and defines its uniqueness" (p.47). A later study by Dunnerman et al (2004) expanded this interactive relationship by acknowledging the relative impacts of both sub-systems when they identified a business to be a family business when the "…family dynamics and business dynamics demonstrably interact and influence each other. Then they contend that a synthesis exists between the two, meaning the emergence of a new and unique system identified as a 'family business'…" (p.7). 
Steier et al (2004) acknowledged strategic definitional challenges when they stated that "… (a) while there is no universally accepted operational definition of a family firm there seems to be a theoretical consensus that a family's ability and intentions to influence business decisions and behaviors are what distinguish family and non-family firms, and (b) a family's influence on a business is manifest in different ways be it the manner in which succession, innovation, culture, or agency issues are handled..." (p. 296). Klein et al (2003) attempted to define the "familiness" of a business by quantifying a family's influence on a business on a scale. The scale applied looked at the influence of (a) family power, (b) family influence and (c) family culture on the management and leadership of a business. Other researchers looked at content (Anderson & Reeb, 2003; Handler, 1989; Heck & Scannell, 1999; Littunen & Hyrsky, 2000; Litz, 1995), ownership (Lansberg, Perrow, & Rogolsky, 1988), management involvement (Barnes & Hershon, 1976), generational transfer (Ward, 1987), intended generational transfer (Barach & Ganitsky, 1995; Heck & Scannell Trent, 1999; Ward, 1988), and family business culture (Chua, Chrisman, & Sharma, 1999; Dreux IV & Brown, 1994; Litz, 1995) as determinants of what constitutes a family business.
Innovation Management in Knowledge Intensive Services
Professor Jukka Ojasalo, Ph.D., Laurea University of Applied Sciences, Espoo, Finland
Knowledge intensive services and their innovation management are increasingly important in the modern economy. A vast amount of literature deals with innovation management in the context of tangible goods; however, very little information exists on innovation management of knowledge intensive services. The present article contributes by proposing a framework for innovation management in knowledge intensive services. The framework integrates into a single model the special characteristics of service innovation management, knowledge intensive services, and the service innovation process. The role of knowledge intensive services and their innovation management is becoming increasingly important in the modern economy. According to Hipp and Grupp (2005), the trend towards a knowledge-intensive economy supports structures in which human capital and knowledge-intensive business service companies, in particular, play an important role as knowledge brokers and intermediaries. Data, information, and knowledge are intangible assets that are produced and traded especially by the service sector (Miozzo and Miles, 2003). The efficient distribution and utilisation of knowledge requires supporting functions (David and Foray, 1995), in other words knowledge intensive services. Knowledge intensive services are both highly innovative themselves and also facilitate innovation in other economic sectors (Den Hertog and Bilderbeek, 1997). The literature includes a vast amount of knowledge of innovation management in the context of tangible goods. Innovation of services is a clearly less investigated area than innovation of goods. However, service innovation has also attracted the attention of researchers during the past decades (e.g. Donnelly, Berry & Thompson, 1985; Johnson, Scheuing & Gaida, 1986; de Brentani, 1989; Scheuing & Johnson, 1989; Grönroos, 1990; Edvardsson, Gustafsson, Johnsson & Sandén, 2000). 
Still, research on innovation management in knowledge intensive services, which are a specific type of service, is in its infancy. Indeed, there is an evident need to increase the knowledge of innovation management in this type of service. The present article responds to this need by proposing a framework for innovation management in knowledge intensive services. The developed framework is based on an extensive literature analysis in the areas of service innovation and knowledge intensive services. The structure of this article is as follows. First, it identifies the special characteristics of innovation management in services in general. Then, it identifies the special characteristics of knowledge intensive services, which are also often called professional services. Then, it identifies the phases of the innovation process in the context of services and develops a related model which is later used in the proposed framework. Finally, based on the above literature analysis, it proposes a framework for innovation management in knowledge intensive services. This framework contributes to the literature by integrating the special characteristics of service innovation management, knowledge intensive services, and the service innovation process into a single model. Goods and services have certain fundamental differences. Services are intangible, heterogeneous, perishable, and they are produced and consumed simultaneously. According to Parasuraman, Zeithaml and Berry (1985), these distinctive characteristics have various implications for services management. Because services are intangible, they cannot be inventoried or patented. Services cannot be readily displayed or communicated. Also, the pricing of services is difficult. Since services tend to be heterogeneous, instead of being standardized, service delivery and customer satisfaction depend on employee and customer actions. Indeed, service quality depends on many uncontrollable factors. 
Also, due to heterogeneity, one cannot be sure that the service delivered matches what was planned and promoted. Because services are produced and consumed simultaneously, customers participate in and affect the service process and transaction. Also, customers sometimes affect each other during the service. Moreover, simultaneous production and consumption often make decentralization essential and mass production difficult. Because services are perishable, it is challenging to synchronize supply and demand. Also, services cannot be returned or resold. Indeed, these are the general implications for services management caused by the characteristics of services. What are the implications of the characteristics of services particularly for service innovation management? Only a small number of researchers have attempted to integrate findings from both the services management and new product development literature in order to shed light on how the characteristics that distinguish services from physical goods may impact the development and performance of new services (Langeard and Eiglier, 1983; Easingwood, 1986; Easingwood and Mahajan, 1989; de Brentani, 1989, 1991). The characteristics that distinguish services from physical goods have various implications for new service development (see Berry, 1980; Shostack, 1984, 1987; Easingwood, 1986; Wind, 1982; Gummesson, 1981; Lynn, 1987; Jackson and Cooper, 1988; Maister and Lovelock, 1982; Easingwood and Mahajan, 1989; Levitt, 1976). De Brentani (1995, p. 102) summarized these implications as follows. Intangibility. Intangibility is often a challenge to marketers of new services and requires close interaction with customers. It also often requires successful use of tangible evidence to help explain or portray the service. Intangibility can simplify and shorten the new product development process for services. Intangibility often allows for quick reactions to changed customer needs, but also risks haphazardness in service design. 
Due to intangibility, a sustainable advantage and a proprietary position are difficult to achieve for services, for example through patents. This results in a proliferation of similar services and a diminished incentive for the innovator to invest time and resources in truly pioneering efforts in service development. Simultaneous production and consumption. Services are often produced and consumed in the presence of customers and may require substantial interaction with the client, both at the time the service arrangement or relationship is first established and at later stages during the relationship. Sometimes customer relationships are very complex and long-term. Customer satisfaction is linked to both the outcome of the service and the process by which it is produced and delivered. Consequently, for new services, production and delivery are integral facets of what customers purchase, and thus successful service design is likely to require involvement from many different functional specialties within the firm.
A Probe into the Interrelationship of the Personality Characteristics, Value at Work, Commitment to Organization and Culture of Organization vs. Intent to Quit, Taking a Certain Medical Treatment System in Taiwan for Instance
Mao-Hung Liao, Cardinal Tien Hospital, Taiwan, R.O.C.
Ching-Kuo Wei, Oriental Institute of Technology
Hsien-Mi Lin, Cardinal Tien Hospital, Taiwan, R.O.C.
Although the unemployment rate promulgated in Taiwan has remained high, the medical care industry has, ironically, either failed to solicit adequate high-caliber professionals and experts to meet its needs or has undergone a continual high-caliber brain drain. In medical institutions, which are characterized by a high level of professionalism, a high quitting rate not only brings added costs in human resources training and development, but also downgrades the quality of services to patients. A hospital should, therefore, try by all means and channels to solicit and screen high-caliber professionals and experts and, in addition, give careful consideration to how to minimize employees' intent to quit so as to effectively guard against a quitting trend. The present study probes into the factors that tend to affect employees' intent to quit in an attempt to identify the initial causes of the intent to quit. The findings yielded in the present study are intended to benefit hospitals in human resources management. The present study covers the entire staff of a certain medical care system in the Taiwan area. We conducted the surveys by means of random sampling and questionnaires. We handed out a total of 1,400 copies of the questionnaire and retrieved a total of 1,308 copies, of which 1,247 were valid, a 95.4% valid-response rate. The questionnaire was designed with closed-type questions covering, besides personal fundamental particulars, five major facets: personality characteristics, value at work, commitment to organization, culture of organization and intent to quit. The retrieved questionnaires were analyzed by means of descriptive statistics, t-tests, analysis of variance, Pearson's product-moment correlation analysis and regression analysis. 
The findings of the study indicate that age, marital status, education level, seniority in service of the hospital, seniority in employment, personal monthly salary, position category, hospital level and the hospital of service all have a significantly different influence on employees' intent to quit: employees aged 21~40 scored higher than counterparts aged 41~60; unmarried employees higher than married counterparts; college and university graduates higher than counterparts who graduated from senior high schools or vocational schools or below; employees with 1~10 years of service seniority higher than counterparts with less than one year or with 15~20 years; employees with 1~15 years of service seniority higher than counterparts with 15~25 years; employees with a personal monthly salary of NT$30,000~40,000 higher than counterparts with a monthly salary of NT$60,000~70,000; nursing personnel higher than doctors; medical technicians higher than administrative staff; employees of local hospitals higher than counterparts serving with regional hospitals; and employees serving with case hospital C higher than counterparts serving with case hospital A. Moreover, the factors of personality characteristics, value at work, commitment to organization and culture of organization all demonstrate a significant negative interrelationship with intent to quit. Among them, self-growth, self-realization and altruistic orientation demonstrate a low negative interrelationship with intent to quit. Autonomous orientation demonstrates a weak negative interrelationship with intent to quit. Psychological security and anxiety-free orientation demonstrate a weak positive interrelationship with intent to quit. Value and commitment to remain in continued employment demonstrate a moderate negative interrelationship with intent to quit. Commitment to work hard shows a low negative interrelationship with intent to quit. 
Innovative culture, senior culture, leaders' characteristics and characteristics of team members demonstrate a low negative interrelationship with intent to quit. Hospital management style demonstrates a moderate negative interrelationship with intent to quit. Moreover, the four facets of personality characteristics, value at work, commitment to organization and culture of organization together could forecast up to 36.6% of the variation in intent to quit. In the analysis of variance, the F value comes to 178.725 and the overall p value to 0.000; the regression model is therefore statistically significant. The regression equation is: Intent to quit = -0.101 × Personality characteristics - 0.028 × Value at work + 0.4911 × Commitment to organization + 0.167 × Culture of organization. In line with the change of social environments and values, employees tend to show increasingly strong autonomous self-awareness. As a natural result, employees demand more diverse working environments, payroll and fringe benefits. While the government-promulgated unemployment rate has remained high, medical institutions, ironically, run into difficulty soliciting the right human resources to meet their needs and suffer a continual brain drain. In a hospital, employees' intent to quit is receiving mounting notice from hospital management. The term "quitting" as set forth herein denotes outward labor flow from inside an organization (Huang Ying-chung, 1989). The term "intent to quit" as set forth herein denotes the inclination or orientation of employees to depart the workplace. Such an inclination or orientation would result directly in the behavior of departure. The quitting behavior of employees implicitly suggests a loss of spirit, time, resources and property for an organization in its investment in soliciting, selecting, training and replacement. 
Too high a quitting rate would negatively impact employee morale and the hospital's image. Meanwhile, it would hinder the inheritance and passing-on of technology, know-how and experience, as well as the long-accumulated competitive edge (Chang Hsiu-ying, 2003). A medical institution is a highly professionalized organization. A high quitting rate will bring added costs for human resources development and will spoil the quality of services to patients. A hospital should, therefore, put forth maximum possible effort to solicit and select high-caliber human resources. How to keep high-caliber professionals and experts on staff is a vitally important lesson for hospital management.
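The reported regression equation can be evaluated directly. The sketch below is a minimal illustration assuming the four facets are scored as numeric scale means; only the coefficients come from the study, while the variable names and the sample scores are hypothetical.

```python
# Minimal numeric sketch of the reported regression model for intent to quit.
# The four coefficients are taken from the study; the facet scores below are
# hypothetical illustrative values, not data from the paper.
COEFFICIENTS = {
    "personality_characteristics": -0.101,
    "value_at_work": -0.028,
    "commitment_to_organization": 0.4911,
    "culture_of_organization": 0.167,
}

def predict_intent_to_quit(scores):
    """Linear combination of facet scores using the reported coefficients."""
    return sum(COEFFICIENTS[name] * scores[name] for name in COEFFICIENTS)

# Hypothetical facet scores for a single respondent (e.g. 1-5 Likert-scale means)
example = {
    "personality_characteristics": 3.2,
    "value_at_work": 3.5,
    "commitment_to_organization": 2.8,
    "culture_of_organization": 3.0,
}
print(round(predict_intent_to_quit(example), 3))
```

Note that, as reported, the model carries no intercept and explains about 36.6% of the variation, so such a prediction is a rough relative indicator rather than an absolute score.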
Regional Concurrence and Strategic Moves of MNCs Ensconced South Asian Market in the Current Global Competitive Environment: Impact of Global Business and Political Changes on the Newly Emerging Market of Pakistan
Tahir Ali, University of Karachi, Karachi Pakistan
The renaissance of Asia has been predicted by many international business scholars for many years. Asia has been the fastest growing area in the world for the past three decades, and the prospects for continued economic growth over the long run are excellent (Cateora, 2005). Enormous, unprecedented global environmental changes since the outset of the 21st century have pushed Asia in general, and South Central Asia in particular, to the centre of international business and political activity. Although the geocentric behavior of most international organizations throughout the world has been reflected in this part of the world, strong regional co-operation may affect their objectives and strategies in the long run. Intensive investment and involvement of neighboring and other regional countries, the dilution of some crucial disputes with India, and comparatively stable political and economic conditions over the past five years have placed Pakistan among the fastest growing economies and newly emerging markets of the world. Alliances, ventures and political and business harmony in this region are gradually eclipsing hegemony and compelling multinational and international organizations to redefine their objectives and strategies in Pakistan. In the present scenario of globalization and the focus on this region by many developed countries and international organizations, this paper should be highly fruitful for everyone in general and the business community in particular. The digital revolution, the diminishing of business boundaries, the interaction of people and the dramatic global political changes across the world over the past seven years have sent the message that countries which do not cope with global political and business environmental changes will be devastated in the coming competitive world. Globalization is now a fact, and developing countries have to redefine their strategic moves to successfully convert this threat into an opportunity. 
Asia, the largest market segment in the world, has been handling this challenge through regional economic integration at various levels. Pakistan has made tremendous economic progress during the past five years (2002–2007), establishing new records on most economic indicators. Today Pakistan is regarded as one of the fastest growing economies of the world. Intensive investment and involvement of many regional, especially neighboring, countries, such as the Middle East and China, the softening of relationships and improvement in trade with India, and collaboration with the US as a front-line state in the war against terrorism have, to a large extent, helped Pakistan achieve a comparatively stable economic and political position and a better reputation throughout the world. Although such political and business developments in the local market favor economic growth, employment and a better quality of life, challenges do exist for multinational and international organizations in coping with the changing environmental situation more effectively. Knowledge about the current transitional phase of development of this part of the world would be highly beneficial for the international business community and scholars in particular and everyone in general. Asia, a rapidly growing mass market of around 4 billion people representing 60% of the world's population, has been regarded as one of the oldest civilizations on earth, a center of business, politics and education, and full of culture and diversity. It has been the main supplier of oil, minerals and many other valuables to the world for many centuries. In addition, diligent, energetic human resources with great potential have made Asia popular throughout the world. Although the vast majority of this region has been deprived of basic facilities and education, things have been improving quite rapidly over the past few years. 
Today, out of 51 countries in Asia, only 12 have per capita purchasing power above $10,000, 14 countries possess a $0.5 million GDP, and only 16 register a literacy rate above 50%. These statistics reflect a rather discouraging situation; nevertheless, current regional economic integration and global political and business developments since the outset of the 21st century point to a much more promising future for this region. The renaissance of Asia has been foreseen by many international business scholars for many years, and most of their estimates have come true or even exceeded expectations. “The dynamics of Asian growth provides, without any doubt, the greatest economic opportunity now and in the foreseeable future not only for Asians but also for citizens worldwide” (Valerie, 2007). According to the Economist Intelligence Unit Foresight 2020 Report, between 2005 and 2020 Asia’s share of the global economy is expected to rise from 35% to 43%. Already deeply integrated in the global economy and benefiting from this integration, Asians can do many things, collectively and individually, to support global economic integration and thereby improve the functioning of the multilateral trade system. The geocentric behavior of most multinational and international organizations, the tragic events of 9/11 and the subsequent attacks on Afghanistan and Iraq, and the development of digital communication networks have compelled Asians to bring harmony to their political and business environments. The dilution of some crucial disputes of India with China and Pakistan has played an incredible role in building comparatively friendlier business and political environments in the region. “Over the next twenty years 213 Million Chinese households and 123 Million Indian ones will begin to have discretionary income. That means 1.2 Billion people hitting the world’s consumer market – a shopping space of historic proportions.
If both countries continue on roughly their current growth paths, we will witness the creation of massive new consumer markets, as well as unprecedented reductions in poverty” (Diana, 2007). Another encouraging factor in this context is the realization of the importance of quality education by most Asian leaderships, especially in China, India and Pakistan, which will influence the global economy more effectively. “The two Asian giants (China & India) will dominate their region’s – and perhaps the world’s – economic future. If they can build world-class higher education systems that serve demands for mass access, the needs of a sophisticated economy, and active participation in the world knowledge system, their development will be quicker and better sustained” (Philip, 2007).
The Open Loop Economy
Richard Carranza, Consultant, Houston, Texas
The term “feedback loop” is frequently used by economists when describing the economy. The economy is referred to as a natural feedback system, and philosophers throughout history are cited by modern writers as having “discovered” the natural feedback loop in the economy. Analogies are also drawn comparing the economy to electrical and computer systems in the cyber age. The truth of the matter is that the term feedback loop is used loosely; from the point of view of modern engineering control theory, it is actually used incorrectly. This is because, technically speaking, natural systems, like an economy with no outside intervention, have no feedback loops. Feedback loops exist when man intervenes and sets up sensors that measure and communicate with other devices that then act upon the system. In a simple economy, where humans naturally engage in trade, no such feedback system is in place. The model on which modern economics is built is an open loop system, not a closed loop system. Yet, throughout the economic literature the term “feedback loop” is used freely: sometimes loosely, sometimes precisely, and in other instances with a differentiation between positive and negative feedback. Nevertheless, the term is found everywhere in the published literature. John Montgomery describes the “invisible hand” of Adam Smith in terms of a feedback loop. He states, “To Smith, the invisible hand was a metaphor for the workings of the market economy in the setting of the institutions of political and economic freedom. . . . Today, what Smith called the invisible hand might be thought of in cybernetic terms as “feedback loops” - for example, as market prices being regulated by negative feedback.” (Montgomery, 1982) Wikipedia, an online encyclopedia, provides an intricate description of the economy in terms of a feedback loop.
“As an example, consider the government increasing its expenditure on roads by $1 million, without a corresponding increase in taxation. This sum would go to the road builders, who would hire more workers and distribute the money as wages and profits. The households receiving these incomes will save part of the money and spend the rest on consumer goods. These expenditures in turn will generate more jobs, wages, and profits, and so on with the income and spending circulating around the economy. The multiplier effect arises because of the induced increases in consumer spending which occur due to the increased incomes -- and because of the feedback into increasing business revenues, jobs, and income again.” (http://en.wikipedia.org/wiki/Multiplier_(economics)) The process is described graphically in Figure 1. Robert Heilbroner alludes to the feedback loop when he describes the economics of Alfred Marshall: “Marshall was primarily interested in the self-adjusting, self-correcting nature of the economic world. As his brilliant pupil, J. M. Keynes, would later write, he created ‘a whole Copernican system, in which all the elements of the economic universe are kept in their places by mutual counterpoise and interaction.’ Much of this, of course, had been taught before. Adam Smith, Ricardo, Mill, had all expounded the market system as a feedback mechanism of great complexity and efficiency.” (Heilbroner, 1992) These are just a few examples of the many ways that the term feedback is used to describe the economy. Yet, despite the sincere attempt to accurately describe the economy, a large portion of the economic literature uses the term feedback in a very loose and ambiguous manner. From the perspective of traditional control theory, the term feedback is simply used incorrectly by most economists. Thus, this work presents a simple economic model that uses the term feedback as it is defined by traditional control theory and science.
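The multiplier mechanism quoted above lends itself to a short numerical sketch. The $1 million road outlay is taken from the quoted passage; the marginal propensity to consume (mpc) of 0.8 is an illustrative assumption, and the closed form 1/(1 − mpc) is the standard textbook multiplier.

```python
# Sketch of the spending multiplier: an initial injection recirculates,
# with a fraction mpc re-spent in each successive round of spending.
def total_spending(initial, mpc, rounds=1000):
    """Sum the geometric series of induced spending rounds."""
    total = 0.0
    injection = initial
    for _ in range(rounds):
        total += injection
        injection *= mpc  # each round, only the fraction mpc is re-spent
    return total

initial = 1_000_000  # the $1M road expenditure from the quoted example
mpc = 0.8            # assumed marginal propensity to consume

print(round(total_spending(initial, mpc)))  # iterative sum -> 5000000
print(round(initial / (1 - mpc)))           # closed form 1/(1-mpc) -> 5000000
```

With mpc = 0.8, each dollar of government spending ultimately generates five dollars of total spending, which is the "income and spending circulating around the economy" that the passage describes.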
The net result is an economic model with no feedback loop at all. The system is self-correcting; but in terms of control theory, there is no feedback loop. Therefore, the economy is an open loop system. The discussion is simplified by starting with the traditional two-sector economy: households and firms. The model is depicted in Figure 2. In this basic model, households sell their factors of production to firms and are paid. The incomes that households are paid are then used to buy the goods and services that the factors of production produce – consumption. A portion of income is saved; thus, a portion of goods and services goes into inventory. Firms then borrow the money that households save and buy up the goods and services that go into inventory, converting that output into investment. Figure 3 is a typical liquid pumping system used in traditional engineering applications. The two-sector model is adapted to fit this simple pumping scheme. The analogy between the two-sector economy and the liquid pumping system is carried out in the following way: Pump A is seen as the economic engine that generates income. A pump in engineering terms is used to provide a volumetric flow rate. The analog to flow is income. A pump is designed to deliver a specified volumetric flow rate for a given differential pressure across the pump: ΔP = P1 – Po, where P1 is the discharge pressure of the pump, and Po is the suction pressure of the pump (Po is defined as atmospheric pressure to simplify the calculations). An idealized pump curve is presented in Figure 4. Note that differential pressure is typically converted into pump head by the relation ΔP = ρ g h_p, where h_p is the pump head. It is important to point out that when the pump head is equal to zero, the pump delivers the maximum flow rate; and when a cap is placed on the pump discharge, the maximum differential pressure is attained and flow is reduced to zero.
As the flow exits the pump, it passes through a flow element: a venturi meter, known for exhibiting nearly no pressure losses. The flow element measures the total flow out of the pump, QT, and communicates this to a flow transmitter/controller. The flow transmitter/controller is designed to manipulate the flow control valve such that Q2 is a specified fraction of QT. The signal from the flow element is interpreted, and the flow transmitter/controller then releases a signal indicative of the desired flow rate Q2. The signal is sent to the electric-to-pneumatic converter, which then sends a pneumatic signal to the flow control valve. The strength of the pneumatic signal is such that the valve is positioned to deliver the desired flow. The analog to Q2 is consumption. For the moment, autonomous consumption is assumed to be zero. Therefore, Q2 = mpc QT. Note that the control loop between FE and FCV is the only control loop in this two-sector economy. It is a feed-forward loop, required so that Q2 is set to a specified fraction of QT (analogously, so that consumption is set to a specified fraction of income). Aside from this loop, there are no other loops, and it is certainly not the feedback loop mentioned so often in the literature. Q1 flows to an inventory tank. The tank has no outlet, so all the flow is stored. As the liquid level in the tank rises, pressure is placed on the pump discharge due to the height of the liquid. At first, the tank is empty, so flow is at a maximum; but as the liquid level rises in the tank, the differential pressure across the pump increases and the flow slowly diminishes (see Figure 4). The analog to Q1 is savings. Savings translate into goods and services that are neither purchased nor consumed. Thus, inventories are constantly rising in the tank. If an outlet to the tank existed, it would be analogous to investment; however, at the present time no investment is applied.
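The open-loop dynamics described above can be sketched numerically: the savings stream Q1 fills the inventory tank, the rising liquid head raises the back-pressure on the pump, and the flow decays toward zero with no feedback controller acting anywhere. All parameter values below are illustrative assumptions, not taken from the paper; the linear pump curve follows the idealized curve of Figure 4.

```python
# Numerical sketch of the open-loop pump/tank analogy (assumed parameters).
RHO, G = 1000.0, 9.81   # water density (kg/m^3), gravitational accel. (m/s^2)
Q_MAX = 0.010           # max pump flow at zero head (m^3/s)
DP_MAX = 50_000.0       # shutoff differential pressure of the pump (Pa)
MPC = 0.8               # feed-forward split: Q2 = MPC * QT (consumption share)
TANK_AREA = 1.0         # inventory tank cross-section (m^2)

level = 0.0             # liquid level h (m); the tank starts empty
dt = 1.0                # time step (s)
for _ in range(20_000):
    dp = RHO * G * level                            # back-pressure from tank head
    q_total = max(Q_MAX * (1 - dp / DP_MAX), 0.0)   # idealized linear pump curve
    q_savings = (1 - MPC) * q_total                 # Q1: flow into the tank
    level += q_savings * dt / TANK_AREA             # rising level throttles pump

# With no tank outlet (no investment), flow asymptotically approaches zero
# as the head approaches DP_MAX / (RHO * G), about 5.1 m here.
print(round(level, 2))
```

The level self-limits through physics alone: no sensor measures the tank and acts on the pump, which is exactly the author's point that the system is self-correcting without being a feedback loop.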
The analysis starts at the branch point: the pressure at the branch point is equal to the pump discharge pressure. The flow from the branch point to the liquid level of the tank is analyzed using the Bernoulli equation. Applying Equation 1 to Figure 3, a simplified relationship is derived.
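Equation 1 itself is not reproduced in this excerpt; the following is a plausible reconstruction based on the surrounding description. The standard steady, incompressible Bernoulli equation between the branch point (state 1) and the free surface of the liquid in the tank (state 2) reads:

```latex
\frac{P_1}{\rho g} + \frac{v_1^2}{2g} + z_1
  = \frac{P_2}{\rho g} + \frac{v_2^2}{2g} + z_2 + h_L
```

Taking the tank surface at atmospheric pressure ($P_2 = P_o$), neglecting the velocity heads and the friction losses $h_L$ (the venturi meter is described as nearly lossless), and writing $z_2 - z_1 = h$ for the liquid height gives the hydrostatic back-pressure relation $P_1 - P_o = \rho g h$, which is consistent with the earlier discussion of the differential pressure across the pump rising with the liquid level.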
An Assessment of Firefighters' Stress Levels
Dr. Bill Lowe, Nova Southeastern University, Ft. Lauderdale FL
The purpose of this evaluative research was to (1) assess the stress levels of firefighters, (2) identify the impacts of stress, (3) review departmental resources for addressing stress, and (4) identify strategies for reducing the causes of firefighter stress. The study’s four research questions were: (1) Are firefighters experiencing low, moderate, or high stress levels? (2) What are the impacts of firefighters’ stress levels? (3) What are the departmental resources for addressing stress related issues? (4) What are strategies applicable to reducing the causes of stress experienced by firefighters? The procedures for answering the research questions included a literature review and a survey. Results established that the firefighters and officers completing the Stressor Questionnaire self-reported the following stress levels: four respondents (7.7%) reported low levels of stress; 48 respondents (92.3%) reported moderate levels of stress; and no respondents (0%) reported high levels of stress. The study findings were helpful in the development of recommendations for the department’s efforts to identify and address workplace stress issues. Recommendations for the department to consider included the following: (1) mandate psychological testing as an additional requirement for promotional exams, (2) reassess the current status of the department’s peer counselor program, and (3) extend departmental stress management courses and services to firefighters’ immediate family members. In 2002, the studied fire department embarked upon an organizational transformation, completely redesigning how the department provides its comprehensive emergency response services. The outcome of the fire chief’s command directive was the abolishment of a 20-year history of operating two separate chains of command for shift operations: fire suppression and emergency medical services.
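The reported percentages imply a sample of 52 respondents (4 + 48 + 0); that sample size is inferred here from the counts rather than stated explicitly in the abstract. A quick arithmetic check:

```python
# Check of the self-reported stress-level percentages cited above
# (4 low, 48 moderate, 0 high; n = 52 is inferred from the counts).
low, moderate, high = 4, 48, 0
n = low + moderate + high

print(f"n = {n}")                               # n = 52
print(f"low:      {100 * low / n:.1f} %")       # 7.7 %
print(f"moderate: {100 * moderate / n:.1f} %")  # 92.3 %
print(f"high:     {100 * high / n:.1f} %")      # 0.0 %
```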
The fire chief’s intention was to accomplish the department’s goal of a “unified chain of command” for all line operations. Virtually all of the department’s line and staff job descriptions were completely revised, challenging personnel to expand their job duties and skill sets. Concurrently with the reorganization, there have been many retirements as employees hired 25-30 years ago reach retirement age. The impact has been the loss of intellectual capital and emotional maturity that is not easily replaced. As veteran firefighters and officers retire, younger personnel are being promoted to fill the positions, bringing passion, fresh approaches, and, as acknowledged by the departmental chaplain, increased stress levels to addressing operational issues and concerns. The potential stress consequences are magnified as these newly promoted officers strive to acquire the leadership skills needed to master their new duties while their job duties and expectations are constantly changing. The problem is that the department has had incidents whereby personnel failed to adapt to the organizational transition, and they frequently cite stress as a serious concern in their career and personal lives. In several incidents, paramedics and emergency medical technicians have voluntarily surrendered their State of Georgia or National certifications, citing the impact of stress. This has greatly restricted the types of apparatus these personnel may staff and the levels of authority they possess at emergency medical incidents. In other cases, personnel have been involved in on-duty or off-duty disciplinary incidents that they attributed to stress when confronted with their behaviors. Finally, other personnel have reported an array of medical conditions that they, and their physicians, attributed to high stress levels.
A literature review, survey, and personal interviews will be used to answer the following research questions: Are the firefighters experiencing low, moderate, or high stress levels? What are the impacts of firefighters’ stress levels? What are the department’s resources for addressing stress related issues? What are some strategies applicable to reducing the causes of stress experienced by the firefighters? The department's emergency medical services division was initially developed by training firefighters as paramedics and then assigning them to an advanced life support ambulance. Gradually, the initial excitement of being a firefighter/paramedic waned as the reality of high call volume and sleepless shifts responding to medical alarms set in. As a result of many incrementally implemented policies and procedures over the years, the department eventually operated two separate chains of command for line operations: EMS and fire suppression. In March 2002, the newly appointed fire chief issued a command directive placing all fire and EMS personnel and response authority under a single chain of command. Fire suppression personnel found themselves tasked with multiple EMS duties, and EMS personnel were challenged to provide more traditional fire suppression duties. Each shift the integration becomes more familiar as personnel attend more training, respond to more alarms, and serve the public by delivering quality emergency services. However, there have been some incidents whereby personnel failed to adapt to the organizational transition, and they frequently cite stress as a serious concern in their career and personal lives. In several incidents, paramedics and emergency medical technicians have voluntarily surrendered their State of Georgia or National certifications, citing the impact of stress.
This has greatly restricted the types of apparatus these personnel may staff and the levels of authority they possess at emergency medical incidents. In other cases, personnel have been involved in on-duty or off-duty disciplinary incidents that they attributed to stress when confronted with their behaviors. Finally, other personnel have reported an array of medical conditions that they, and their physicians, attributed to high stress levels. The department's chaplain recently acknowledged that stress levels are rising in the department: “As the department grows, it becomes more complex and stressful. A major adjustment in your job has been the shift in personnel. Retirements, promotions, and recruits coming on line have changed the face of the department in dramatic ways. Here is a quick refresher on post-traumatic stress disorder . . . As station officers, it falls within your responsibility to evaluate your personnel on a regular basis. It has been my privilege to witness the care and compassion you express toward the men and women of your station. If I may be of any service to you, please do not hesitate to call.”
Aging of the U.S. Population and Its Impact on the Health Care System
Dr. Kristina L. Guo, MPH, University of Hawaii-West Oahu, Pearl City, HI
The purpose of this paper is to describe the impact of seniors on the health care system in the 21st century. The aging of the population poses a major challenge to the acute and long term care system. In 2005, 12.4% (36.6 million) of the population was 65 and over. However, this group is expected to double in size within the next 25 years. By 2030, almost 1 out of 5 Americans (some 72 million people) will be 65 years or older. By 2050, the oldest old are projected to account for 1 out of 4 older adults. The aging population will therefore place a heavy burden on the Medicare program. Medicare is a federal program that primarily finances personal medical services for the aged. In 2005, there were 42.1 million Medicare beneficiaries, and this number is projected to reach 77.2 million by 2030. As the aged population increases and lives longer, the potential for requiring long term care services also rises. Impaired elders will demand services to assist with their activities of daily living; thus, the cost of providing those services will escalate. Currently, 12% of the population between 64 and 74 requires long term care services, compared to almost 70% for the 85 and over age group. The need for nursing home care increases with age. However, Medicare does not pay for long term care services. As a result, this paper argues that changes in the health care system, especially in the Medicare program and the long term care system, are necessary to provide more adequate benefits and services for the growing aging population in the U.S. There are several major trends in the U.S. health care system which influence the health of the American public and the delivery system.
Trends such as the use of managed care as the dominant delivery system, the aging of the population, an increased focus on chronic illnesses, the growth of Medicare and Medicaid, and advances in information and medical technology all play a major role in directing legislative and regulatory reforms. In addition, changes in consumer awareness and education, the concerns of payers about costs and quality, and the structure of health care organizations, i.e., hospitals and providers, directly impact the provision, financing, and delivery of health care. Although there are several key trends, the purpose of this paper is to identify and discuss one major trend: the growing population of elders and their increased demand for medical services. Seniors rely heavily on the Medicare program to finance their health care. Furthermore, this paper will address the overall impact of the aged on the delivery system, such as the changing nature of health care providers to meet the needs of seniors and the demands on the health care workforce. A full understanding of this trend is vital for various stakeholders, including consumers, policy makers, health care providers, payers, and researchers. It is especially useful to aid in long term planning processes. The U.S. population is getting older and living longer. By 2010, the average life expectancy will be up to 86 years of age for females and 76 years for males. There will be more than 100,000 people over the age of 100 by the year 2010 (Institute for the Future, 2003). The aging of the population poses a major challenge to the acute and long term care system. Nursing homes, home health care services, and other adult care facilities will become an increasingly important component of the health care system as they grow in number, size, and complexity.
Furthermore, the diverse health care workforce needed to staff these facilities will grow in order to provide the continuum of services required for the more complex care of aging patients. The growth of the older population outpaced that of the total population in the 20th century. According to Figure 1, in 1900 there were 3.1 million people aged 65 and over. By 2000, this group consisted of 35 million, more than a tenfold increase, compared to the total U.S. population, which increased from 76 million to 284 million, a 3.7-fold increase (He, Sengupta, et al., 2005). Figure 2 shows that in 2005, 12.4% (36.6 million) of the population was 65 and over. However, this group is expected to double in size within the next 25 years. By 2030, almost 1 out of 5 Americans (some 72 million people) will be 65 years or older. It is also important to note that there are wide differences in service (health care, housing, and assistance) needs between healthy 65 year olds and frail 90 year olds. Figure 3 highlights the oldest old, those aged 85 years and older. They comprise a small but rapidly growing group within the older population. In 1900, there were only 122,000 people in this category. By 2000, this group had reached 4.2 million, becoming 34 times as large. The rapid growth of the oldest old is related to increased life expectancy and decreased morbidity and mortality rates. In addition, lower birth rates have also contributed to the rising proportion of the oldest old among the total older population. Figure 5 illustrates the projected increase in the 65 and over population. During the first decade of the 21st century, the older population will grow at a rate similar to that of the 20th century; that is, growth among those 65 and over will outpace total population growth, the major contributor being the first of the baby boomers reaching 65 by 2011.
The older population will then increase rapidly from 2011 to 2030, and by 2040, 80 million people will be 65 and over.
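The growth multiples cited for Figures 1 and 3 follow directly from the population figures given in the text, and can be verified with a few lines of arithmetic:

```python
# Verifying the growth multiples cited in the aging-population discussion:
# 65+ population 3.1M (1900) -> 35M (2000); total US 76M -> 284M;
# 85+ (oldest old) 122,000 (1900) -> 4.2M (2000).  Figures as given in text.
seniors_1900, seniors_2000 = 3.1e6, 35e6
total_1900, total_2000 = 76e6, 284e6
oldest_1900, oldest_2000 = 122_000, 4.2e6

print(f"65+ growth:   {seniors_2000 / seniors_1900:.1f}x")  # 11.3x (>10-fold)
print(f"total growth: {total_2000 / total_1900:.1f}x")      # 3.7x
print(f"85+ growth:   {oldest_2000 / oldest_1900:.1f}x")    # 34.4x (~34 times)
```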
Change and Continuity in e-Commerce Degree Programs in North America
Dr. Subhash Durlabhji, Northwestern State University of Louisiana, LA
Dr. Marcelline Fusilier, Northwestern State University of Louisiana, LA
The present study built on previous research to investigate characteristics of e-commerce master’s degree programs that were newly launched, revised, or remained the same between 2003 and 2007. Data were collected from university web sites. Of 90 total programs, 53 were new, 32 were revised, and five remained the same over the period studied. Findings suggested that the coursework of all the programs tended to be non-technical in content. The non-technical focus appeared more pronounced in e-commerce concentrations than in degree programs. Comparisons to previous literature suggest that the rate of curriculum change may be increasing for e-commerce master’s programs. E-commerce has expanded steadily in the years following the dot-com bust. Adjusted e-commerce retail sales were up 19.3% in the third quarter of 2007 over the same period in 2006 (U.S. Census Bureau, 2007). The 2007 online holiday shopping season’s sales surpassed $29 billion, up 19% from 2006 (Lipsman, 2008). Parallel to this growth, e-commerce education offerings have expanded and changed (Durlabhji & Fusilier, 2002; 2005; Ethridge, Hsu, & Wilson, 2001; Hemaida, Foroughi, & Derr, 2002). Education is essential for e-commerce to reach its potential. Bharadwaj and Soni (2007) reported that a lack of knowledgeable, qualified personnel is a top barrier facing small business entry to e-commerce. Development of e-commerce education programs is a challenge in part because the field is changing rapidly. Burkey (2007) investigated 21 e-commerce programs from 2001 to 2005, finding 76% of them revised during that period. Based on an examination of 107 e-commerce master’s programs, Durlabhji and Fusilier (2005) reported that 39% were revised between 2001 and 2003. Thirty-six percent of the programs were newly launched during that period.
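The program counts reported in the abstract can be tallied to confirm that they sum to the 90-program total; the percentages below are computed here for illustration and are not given in the text.

```python
# Tally of the 90 e-commerce master's programs described above
# (53 newly launched, 32 revised, 5 unchanged between 2003 and 2007).
programs = {"new": 53, "revised": 32, "unchanged": 5}
total = sum(programs.values())

print(f"total programs: {total}")  # total programs: 90
for status, count in programs.items():
    print(f"{status:>9}: {count:2d} ({100 * count / total:.1f}%)")
```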
There is a lack of agreement concerning the extent to which e-commerce and, more generally, master’s level business curricula should emphasize technology as opposed to functional business areas. Recent literature on MBA programs suggested that the curricula of many schools place increasing emphasis on non-technical coursework (Bisoux, 2007; Fisher, 2007). The Corporate Recruiters Survey (2007) conducted by the Graduate Management Admission Council found that hiring decisions primarily focus on candidates’ interpersonal skills and fit with the company’s culture. Only 31% of the respondents rated technical skills as extremely important, far less than the 63% who rated interpersonal skills similarly. Results of a survey of full time MBA students’ satisfaction with core curriculum subjects revealed that the only technical subject on the list, information systems, was ranked at the bottom (Global MBA Graduate Survey, 2007). Students ranked “knowledge of technology” as the skill that was least improved during their MBA program. With regard to e-commerce programs, Burkey (2007) concluded that the programs sampled had a non-technical emphasis in both 2001 and 2005. Gueldenzoph (2006) and Ragothaman, Lavin, and Davies (2007) surveyed educators, employers, and practitioners regarding their perceptions of e-commerce topics for business education. Findings revealed a consistent emphasis on the importance of non-technical course topics such as ethics, advertising, and interpersonal skills. Employers and educators in the Gueldenzoph (2006) study indicated on average that technical topics such as SML (standard markup language) were not even needed in e-commerce coursework. This approach appears contrary to the recommendations of a number of authors who advocate a multidisciplinary view of e-commerce and a balance of technical and non-technical course material (Gunasekaran, McGaughey, & McNeil, 2004; Ngai, Gunasekaran, & Harris, 2005).
Dhar and Sundararajan (2007) noted a gap between the generally perceived importance of information technology and its presence in business school curricula. An analysis of industry needs revealed that technical topics such as specialized software applications, wireless, and new technologies were not sufficiently covered in e-commerce curricula (Davis, Siau, & Dhenuvakonda, 2003). Celsi and Wolfinbarger (2001) recommended that business schools develop managers who understand the convergence between IT and business strategy. The present study addresses the changing state of e-commerce education. Based on previous research, e-commerce master’s degree programs that existed in 2003 were compared to those in 2007. The following research questions were investigated. 1. From 2003 to 2007, how many degree programs were newly launched, revised, or remained the same? 2. Durlabhji and Fusilier (2005) found two broad types of programs: (a) e-commerce concentrations in master’s programs and (b) master’s degrees in e-commerce. Programs resulting in an e-commerce degree require more coursework in the major area than do e-commerce concentrations. The present study investigates whether program type, i.e., degree or concentration, is associated with program introduction or revision. 3. What is the technical versus non-technical course composition of the programs? Decisions concerning e-business curricula and program revision and launch can have broad and lasting implications for university stakeholders. Considerable funds and faculty time are spent on revising existing programs and developing new ones. Program decisions that overemphasize non-technical coursework may not meet industry needs. This could cause difficulty for the programs’ graduates in finding and keeping employment, thus tarnishing the image of a school offering the program.
The present study explored the prevalence and nature of program revision with a larger sample of e-commerce degree programs than has been used in previous research (Burkey, 2007).
The Impact of Fast Adaptation Strategy and Knowledge Integration from New Product Successes and Failures on New Product Development Performance: An Empirical Study of ICT Industry in Taiwan
Dr. Yung-Ching Ho, National Chung Cheng University, Taiwan, R.O.C.
Yu Chao, National Chiao Tung University, and Chung Hua University, Taiwan, R.O.C.
Hui-Chen Fang, National Chung Cheng University, Taiwan, R.O.C.
There is consensus in the marketing literature that fast adaptation strategy is a fundamental resource for successful new product development (NPD). However, few studies examine the dimensions or characteristics of fast adaptation strategy, or how and why this resource influences new product development performance. A successful new product development strategy is regarded as a benchmark by which companies strive to grow and maintain long-term competitive advantage. With the rapid development of Information and Communication Technology (ICT), the knowledge economy founded on the ICT industry has become the mainstream of world development. The knowledge gained from NPD failure is often instrumental in achieving subsequent successes. However, many NPD studies neglect the role of knowledge management in comparing successful and failed projects. This study explores fast adaptation strategy and knowledge integration, analyzing the effect of fast adaptation strategy on new product performance through knowledge integration. The results support four conclusions: first, fast adaptation strategy influences NPD performance; second, fast adaptation strategy influences knowledge integration; third, knowledge integration influences NPD performance; and fourth, fast adaptation strategy influences NPD performance through knowledge integration. New product development (NPD) is central to business prosperity (Womack, Jones, and Roos, 1990; Dougherty, 1992; Brown and Eisenhardt, 1995; Eisenhardt and Tabrizi, 1995). However, new product success remains an elusive goal for many firms (Cooper 1994). In recent years, the marketing literature has established that NPD performance is enhanced by knowledge integration (Ruekert and Walker 1987; Madhavan and Grover 1998; Maltz and Kohli 2000; De Luca and Atuahene-Gima 2007). For survival and growth, enterprises need to persistently develop successful products.
In recent decades, new product competition has changed significantly. Enterprises have come to realize that traditional standards like high product quality, low costs, and differentiation are not enough to guarantee the success of new products (Balbontin, Yazdani, Cooper, and Souder 1999). In most industries, the successful development and commercialization of new products are the foundation of a company’s survival and growth (Calantone, Schmidt, and Song 1996). In other words, new products represent a hidden source of competitive advantage (Song and Montoya-Weiss 2001). Over the years, fast adaptation strategy and knowledge integration have emerged as issues receiving a great deal of attention around the world. But how can an enterprise identify and measure its fast adaptation strategy and knowledge integration, and how are they associated with new product development performance? These questions have emerged as points of serious attention, and there is little or no insight into the relative importance of the different dimensions of fast adaptation strategy as drivers of NPD performance. With the rapid development of Information and Communication Technology (ICT), the knowledge economy founded on the ICT industry has become the mainstream of world development. The knowledge gained from NPD failure is often instrumental in achieving subsequent successes. However, many NPD studies neglect the role of knowledge management in comparing successful and failed projects. This study explores a company’s fast adaptation strategy and its effect on knowledge integration during new product development processes, and analyzes the effect of fast adaptation strategy and knowledge integration on new product performance. By integrating knowledge during the new product development process, it is possible to further determine and measure the contribution of fast adaptation strategy to new product development performance.
There is consensus in the marketing literature that fast adaptation strategy is a fundamental resource for successful new product development. However, few studies examine the dimensions or characteristics of fast adaptation strategy, or how and why this resource influences new product development performance. A successful new product development strategy is regarded as a benchmark by which companies drive growth and maintain long-term competitive advantage. Over time, in every firm, NPD projects are undertaken for different reasons (Crawford, 1980; Kuczmarski, 1992; Griffin and Page, 1996). In a competitive global market, new products may have to reach the market quickly to retain customers and arrest margin erosion. In recent years, fast adaptation strategy has become a pivotal strategic competence for many firms (Eisenhardt, 1989; Stalk and Hout, 1990). The same theme of fast pace has become key in NPD. The importance of NPD to success is compelling (De Luca and Atuahene-Gima, 2007). How do firms develop products quickly and successfully? Previous research provides some insights: a distinction emerges in the initial theorizing about rapid product development between a compression strategy and an experiential strategy. One approach, the compression strategy, draws on much of the existing product development literature (Rosenau, 1988; Womack, Jones, and Roos, 1990; Stalk and Hout, 1990; Millson, Raj, and Wilemon, 1992; Eisenhardt and Tabrizi, 1995). It assumes that product development is a predictable series of steps that can be compressed.
The compression strategy involves planning product development steps (Gupta and Wilemon, 1990; Womack, Jones, and Roos, 1990; Eisenhardt and Tabrizi, 1995), simplifying product development through supplier involvement (Imai, Nonaka, and Takeuchi, 1985; Clark and Fujimoto, 1991; Eisenhardt and Tabrizi, 1995), shortening the time to complete each step in the product development process (Rosenau, 1988; Stalk and Hout, 1990; Cordero, 1991; Eisenhardt and Tabrizi, 1995), overlapping development steps (Stalk and Hout, 1990; Clark and Fujimoto, 1991; Eisenhardt and Tabrizi, 1995), and rewarding designers for speed (Gold, 1987; Eisenhardt and Tabrizi, 1995). In short, the compression strategy involves rationalizing the steps of the product development process and then squeezing or compressing them together (Eisenhardt and Tabrizi, 1995). The basic ideas of the experiential strategy are found in a variety of fields, including improvisation (Bastien and Hostager, 1988; Weick, 1993; Moorman and Miner, 1994; Eisenhardt and Tabrizi, 1995), chemistry and biochemistry (Curtis and Barnes, 1989; Eisenhardt and Tabrizi, 1995), neurobiology (Levy, 1994; Eisenhardt and Tabrizi, 1995), cognitive psychology (Payne, Bettman, and Johnson, 1988; Eisenhardt and Tabrizi, 1995), and strategic choice (Eisenhardt, 1989; Eisenhardt and Zbaracki, 1992; Eisenhardt and Tabrizi, 1995), as well as in some of the product development literature (Quinn, 1985; Eisenhardt and Tabrizi, 1995). It assumes that product development is a very uncertain path through foggy and shifting markets and technologies (Eisenhardt and Tabrizi, 1995). This strategy is more a response to uncertainty than certainty, more iterative than linear, and more experience-based than planned (Eisenhardt and Tabrizi, 1995). Both compression and experiential strategies accelerate product development for time-to-market. This study explores the fast adaptation strategy and its contribution to NPD performance. Therefore, Hypothesis 1 is proposed as follows:
Integrative Factory, Technology, and Product Planning on the Basis of a System Model
Prof. Peter Nyhuis, Institute of Production Systems & Logistics, Leibniz University of Hanover
Serjosha Wulf, Institute of Production Systems & Logistics, Leibniz University of Hanover
Prof. Berend Denkena, Institute of Production Systems & Logistics, Leibniz University of Hanover
Mark Eikötter, Institute of Production Systems & Logistics, Leibniz University of Hanover
Factory, technology, and product planning are complex corporate disciplines whose processes and results strongly influence one another. The interactions between the disciplines, the various life cycles of the planning elements, and differing planning dates lead to high complexity. This makes it difficult for companies to maintain an overview of the ongoing planning processes. In addition, non-central, non-networked planning departments lead to nonsynchronized planning processes, which exacerbates the problem. The challenge for a company consists of mastering the ensuing problems and using the existing resources efficiently. This calls for a synchronization of strategic and operative planning in terms of both content and timetable. To do this, new methods and models have to be devised which are aimed at a holistic coordination of the three planning areas. One promising approach is based on the classic roadmapping method and extends it to form an integrative planning method for the factory, technology, and product areas. The method enables an assessment, right from the strategic corporate planning level, of the effects of possible decisions in one area on each of the others. An overall understanding of the existing cause-effect relationships between the three areas is necessary for this. A cooperative project between the Institute of Production Systems & Logistics and the Institute of Production Engineering & Machine Tools has developed an initial approach to a holistic influence and assessment model. High market transparency and increasing dynamics in the business environment have continually intensified the competitive conditions of manufacturing companies in recent years. As a result, companies in high-wage locations such as Europe are losing more and more market share, especially in the mass production sector (Westkämper, 2006b).
The increasing demand of customers for products with an individual configuration leads to an increasing number of variants and, simultaneously, falling batch sizes. Unceasing product and process innovations are the only way to compensate for the inherent disadvantages of such locations. As a result, existing manufacturing locations must adapt to changing market demands in ever shorter cycles (Westkämper, 2006a; Wiendahl, 2006). Factory, technology, and product planning are among the corporate areas affected. Coordination between these is extremely important, especially with respect to the disparate life cycles of the objects analyzed, but up until now this has been difficult in practice. In the past, coordination between the individual planning departments was simpler because the life cycles of the individual planning objects were virtually identical. Today, however, the life cycles diverge sharply. Whereas in the past a factory manufactured just one product throughout its life cycle, frequent product changes take place these days (Wiendahl, 2007). Furthermore, the operational resources available must be replaced at ever more frequent intervals due to shortening technology life cycles (Denkena, 2005b). Owing to structural and spatial restrictions, the factory is frequently not set up to deal with this. The possible consequences manifest themselves in the form of, for example, high stocks, long throughput times, or unfavorable material flows. In order to combat this, factories have for some years been designed with transformability in mind. Transformability is defined as the capacity to adapt factory objects at short notice, purposefully, with minimum effort, and beyond predefined limits (Wiendahl et al., 2007; Nyhuis, 2005; Dashchenko, 2006). Businesses react to increasing uncertainty in the markets through the design of transformable factories (Heger, 2007).
Developments in the business environment and the ensuing internal control measures generally represent unknown variables. Factories are therefore designed in such a way that they can be adapted, or transformed, to suit new framework conditions with minimum expenditure and without interrupting everyday operations. On the other hand, by employing a coordinated planning procedure, integrative factory, technology, and product planning tries to avoid, as far as possible, the need for change due to the introduction of new products or technologies. Unknown variables, such as future demands on the building and the structure, are considered in advance through appropriate coordination. And vice versa: the performance profile of an existing or planned factory can be integrated into the product and technology planning as an input. Performance deficits, e.g. an unsuitable logistics concept or shortcomings in lifting gear, as well as existing potential, can be uncovered right at the strategic development stage and, if necessary, measures can be initiated. A more intensive interweaving of the planning areas at both strategic and operative levels can therefore help to secure a competitive advantage, even in high-wage locations. To do this, it is not sufficient just to be aware of the development tendencies within the individual planning fields; instead, knowledge of the interdependencies and interfaces between them is of prime importance (see Fig. 1). For example, there is a relationship between the material of a product and the production technology that could be used to produce it. Once a material has been chosen during product development, this implies a preliminary decision regarding the production technologies that could be employed. The ensuing effects on the technology and factory planning are, however, frequently neglected.
For instance, processing a specific material may not be worthwhile from the technology planning viewpoint because, for example, additional machines may have to be procured or new, more complex process chains would arise (Denkena, 2005a). There are corresponding effects on the factory planning, e.g. the ensuing introduction of a new manufacturing or logistics concept. However, up until now the majority of these relationships have not been identified, nor have the interactions been systematically investigated. In the majority of companies there is effectively no link between factory, technology, and product (FTP) planning in terms of timetable, content, and organization. There are many reasons for this “missing link”. For instance, the opportunities offered by synchronized planning fields and the risks of disjointed fields are frequently underestimated. Many companies fear that the cost of coordination and synchronization would be too high. In addition, competency conflicts between the planning departments lead to inadequate coordination. However, the main reason is the lack of a systematic and manageable method permitting factory, technology, and product planning to be coordinated during both strategic and operative planning phases (Fiebig, 2004). In times of changing life cycles for factories, technologies, and products, it is no longer sufficient to carry out planning that focuses on individual areas. Competitive advantages cannot be realized through the optimization of one planning field alone, but rather through the effective and efficient configuration of the planning sequences considered as a whole. There must be a continuous exchange of information between the respective areas, because this is the only way to minimize the cost of planning work, to take interactions into account, and to improve the quality of planning. To overcome this problem, a whole series of different planning approaches has been developed over the years.
Examples include simultaneous engineering (Halevi, 2001), cooperative product engineering (Wiendahl, 2000; Gausemeier, 2001), the technology calendar (Schuh, 2004), and roadmapping, to name just a few. Roadmapping is a creative method for the prognosis, analysis, and visualization of future development paths, e.g. for products and technologies (Eversheim et al., 2003; Gindy, 2006; Phaal, 2003; Specht, 2000).
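The cause-effect relationships discussed above (a material choice constraining the feasible production technologies, which in turn shape the process chain and the factory) can be sketched as a dependency graph over planning objects, so that a change in one object can be traced to every area it affects. The following is an illustrative Python sketch under assumed object names; it is not the institutes' actual influence and assessment model.

```python
# Illustrative dependency graph between FTP planning objects.
# Object names and edges are assumptions chosen to mirror the
# material -> technology -> factory example in the text.
DEPENDS_ON = {
    "production technology": ["product material"],  # material constrains feasible processes
    "process chain":         ["production technology"],
    "machine park":          ["production technology"],
    "factory layout":        ["process chain", "machine park"],
    "logistics concept":     ["factory layout"],
}

def affected_by(changed: str) -> set:
    """All planning objects transitively affected by a change to `changed`."""
    hit = set()
    frontier = [changed]
    while frontier:
        obj = frontier.pop()
        for target, sources in DEPENDS_ON.items():
            if obj in sources and target not in hit:
                hit.add(target)
                frontier.append(target)
    return hit

# A product-level material decision ripples through every downstream area.
print(sorted(affected_by("product material")))
```

Even a coarse model of this kind makes the "missing link" visible: it shows which technology and factory planning departments must be consulted when a product planner fixes a material.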
Insurance Demand, Financial Development, and Economic Growth: The Case of Taiwan
Min-Sun Horng, National Kaohsiung First University of Science and Technology, Taiwan
Yung-Wang Chang, National Kaohsiung First University of Science and Technology and Meiho Institute of Technology, Taiwan
Ting-Yi Wu, Kao Yuan University, Taiwan
This study examines the dynamic relationship among insurance demand, financial development, and economic growth in Taiwan from 1961 to 2006. Using a three-variable VAR (vector autoregressive) model, the competing demand-following and supply-leading hypotheses are empirically tested. We find that economic growth affects insurance demand in both the long and short run, whereas financial development (measured as the ratio of M2 to GDP) causes variations in insurance demand mainly in the long run. Additionally, the results from Granger causality tests, based on vector error-correction models (VECM), suggest unidirectional causality running from financial development to economic growth. This result supports the supply-leading hypothesis linking financial development to economic growth for Taiwan. In addition, the empirical results suggest that economic growth leads to increases in insurance demand, supporting the demand-following hypothesis linking economic growth to insurance demand for Taiwan. These findings highlight the importance of financial development in Taiwan's recent growth. In other words, financial development does promote real GDP growth. Furthermore, in Taiwan, an increase (decrease) in real GDP leads to an increase (decrease) in real insurance demand. Managerial implications are then identified based on the empirical findings. Economic globalization and internet communication have accelerated the integration of world financial markets over the last two decades. From an economic viewpoint, traditional growth theory suggests that technological development and investment can drive a nation's economic growth. Modern research further indicates that financial services, including banking and insurance, have substantial potential for spreading positive externalities throughout the commercial sector of an economy.
Recent theoretical models have examined the causal relationship between financial development and economic growth. For instance, the supply-leading hypothesis posits a link from financial development to economic growth, whereas the demand-following hypothesis supports a link from economic growth to financial development. Meanwhile, some researchers are interested in understanding the determinants of insurance demand and how it affects general economic development, but few articles have investigated the relationship between insurance demand and financial development. Hussels et al. (2005) reviewed the related literature and presented a new concept of the link among insurance demand, financial development, and economic growth. However, no empirical test of this conceptual link has been performed to assess the equilibrium and causal relationships of the three variables. This study extends the work of Hussels et al. (2005) by empirically testing the cointegrating and causal relationships of these variables for Taiwan over the period 1961-2006. Using a three-variable VAR (vector autoregressive) model, the competing demand-following and supply-leading hypotheses are empirically tested. Taiwan, in particular, offers a fascinating economy in which to examine these hypotheses. Over the past 50 years, Taiwan has achieved an economic miracle amid turbulent international politics and world economic crises (e.g., the military crisis across the Taiwan Strait due to the threat from China in 1995 and Asia's financial crisis in 1997). Furthermore, Sigma (1999) reported that insurance spending per capita in Taiwan is currently higher than the European average. Table 1 displays that Taiwan has the highest premium penetration (i.e., premiums as a share of GDP), at 14.11 percent in 2005, followed by South Africa (13.87 percent) and the United Kingdom (12.45 percent).
Since the premium penetration in Taiwan (14.11 percent) was about double the world average (7.52 percent) in 2005, the factors driving the development of Taiwan's insurance market are worth investigating. Another insurance indicator, insurance density, is calculated by dividing direct gross premiums by the population; it represents average insurance spending per capita in a given country. Table 1 also shows that in 2005, Switzerland ranked first in insurance density with $5,558.4 and the United Kingdom ranked second with $4,599. At the same time, the European insurance density was $1,513.8 and the world insurance density was $518.5. As reported in Table 1, in 2005 Taiwan's insurance density was $2,145.5, which ranked twentieth in the world. Although the financial industry has grown significantly in Taiwan, researchers have not paid much attention to the empirical assessment of the insurance sector's contribution to Taiwan's economy. Therefore, this study aims to answer two questions. First, is there a long-run equilibrium relationship among insurance demand, financial development, and economic growth in Taiwan? The result can assess the new concept of the link among these variables proposed by Hussels et al. Second, if a stable long-run relationship exists, what is the direction of causality between these variables? In other words, are insurance and financial development the “engine” of Taiwan's economic growth, or is it the other way around? Additionally, this study attempts to explain the short- and long-term dynamic relationships among insurance demand, financial development, and economic growth, and to verify the causal relationships among these variables in Taiwan. The remainder of this paper is organized as follows. Section 2 reviews the literature regarding the relationship among insurance demand, financial development, and economic growth.
Section 3 describes the empirical estimation and results. Section 4 concludes the article and suggests future research directions. The financial services industry has experienced rapid growth over the last few decades, significantly outpacing worldwide economic growth. Exploring the link between growth in the insurance industry and economic growth, Browne et al. (2000), Ward and Zurbruegg (2002), Beck and Webb (2003), and Esho et al. (2004) suggest that economic development and economic stability greatly increase insurance demand. Outreville (1990), Ward and Zurbruegg (2002), and Hussels et al. (2005) also suggest that the insurance industry, through risk transfer and financial intermediation, can generate positive externalities and economic growth. It is generally assumed that insurance expansion should make a positive contribution to economic growth. As a starting point, Sigma (1999) and Enz (2000) present the “S curve” relationship between economic development and insurance market development. In other words, the increase in insurance demand is closely related to GDP growth, with income elasticity generally greater than one. Furthermore, there is a per-capita income – about US$15,000 for life and US$10,000 for non-life insurance – at which the income elasticity of the demand for insurance reaches a maximum. They found that in countries with higher levels of income per capita, insurance demand becomes less sensitive to income growth. The main reason is that high-income consumers become wealthy enough to retain risks within their current investment and hedge portfolios (Ward and Zurbruegg, 2002; Hussels et al., 2005). Besides economic growth, previous work has suggested other factors that affect insurance purchases (e.g., Ma and Pope, 2003; Esho et al., 2004).
These factors include income and wealth, the price of insurance, the probability of loss, and the degree of risk aversion, with income level hypothesized to affect insurance demand positively. However, these factors do not sufficiently explain the causality between insurance demand and GDP.
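The Granger causality tests at the heart of the study's VECM analysis reduce to a restricted-versus-unrestricted regression comparison: lags of one variable either do or do not add explanatory power for another. The following is a minimal bivariate sketch in Python with NumPy, run on synthetic data rather than the paper's Taiwanese series; the variable names and the two-lag choice are illustrative assumptions, not the authors' specification.

```python
import numpy as np

def granger_f(y, x, lags=2):
    """F-statistic for the null that lagged x adds no explanatory power
    for y beyond y's own lags (the bivariate Granger restriction)."""
    n = len(y)
    T = n - lags
    Y = y[lags:]
    # restricted design: constant + own lags of y
    Zr = np.column_stack([np.ones(T)] + [y[lags - i - 1 : n - i - 1] for i in range(lags)])
    # unrestricted design: add the lags of x
    Zu = np.column_stack([Zr] + [x[lags - i - 1 : n - i - 1] for i in range(lags)])
    rss = lambda Z: float(np.sum((Y - Z @ np.linalg.lstsq(Z, Y, rcond=None)[0]) ** 2))
    rss_r, rss_u = rss(Zr), rss(Zu)
    return ((rss_r - rss_u) / lags) / (rss_u / (T - Zu.shape[1]))

# Synthetic check: x drives y with a one-period lag, so the F-statistic
# for "x Granger-causes y" should dwarf the one for the reverse direction.
rng = np.random.default_rng(0)
x = rng.standard_normal(300)
y = np.zeros(300)
for t in range(1, 300):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()
print(granger_f(y, x) > granger_f(x, y))  # True: causality runs x -> y
```

In practice one would apply such tests to the error-correction form of the cointegrated system (as the paper does) rather than to levels, but the restricted-versus-unrestricted logic is the same.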
Apple's iPhone Launch: A Case Study in Effective Marketing
Kyle Mickalowski, Augustana College, Sioux Falls, SD
Mark Mickelson, Augustana College, Sioux Falls, SD
Jaciel Keltgen, Augustana College, Sioux Falls, SD
When CEO Steve Jobs announced in January 2007 that Apple would be releasing a revolutionary iPhone five months hence, consumers waited with bated breath for a phone that would deliver all the features of their iPods as well as a smart phone. Anticipation grew, just as Jobs knew it would, as June approached. The launch would become one of the most heralded technological product splashes Apple, known for its masterful media build-up, had ever planned. How the iPhone was developed, priced, promoted, and distributed is a lesson for marketers around the world. Apple investors were pretty happy with the outcome as well. One year after Apple Inc. CEO Steve Jobs announced the company's industry-changing iPhone on January 9, 2007, at the Macworld convention in San Francisco, the share price of Apple's stock had more than doubled, to a January 9, 2008, value of $179.40 (see Chart 1). This stock price incorporates all of Apple's business, but a large part of the rise in value can be attributed to the launch of the cutting-edge iPhone, of which four million had already been sold through mid-January 2008 (Carew, 2008). Based on this simple observation of the stock price, the iPhone can so far be declared a success, at least from a shareholder standpoint. This paper will explore both the pre- and post-launch activities surrounding the iPhone to explain why it was such a success for the stockholders and why Apple's reputation for unparalleled marketing success is deserved. Jobs' announcement was an example of the intelligent use of trade shows and of Apple's experience in generating press coverage and buzz about new products through them. The conference capped off the two-year development period for the iPhone, a period during which Jobs embarked on a campaign to sign a wireless company as the exclusive carrier for the iPhone.
Eventually, he was able to convince AT&T to cede almost all control over the development of the iPhone, to the point where only three executives at AT&T had seen the iPhone before it was announced (Sharma, Wingfield, and Yuan, 2007). This situation gave Apple the liberty to develop its product on its own terms and to keep its features under tight wraps. In an industry that changes as rapidly as wireless communication, the ability to be as autonomous and secretive as possible is very important in the development of a product like the iPhone, and Steve Jobs recognized this and used it to Apple's advantage. The iPhone could be described as a combination of Apple's popular iPod music player and a smart phone designed to surf the Web. Its highly touted feature is a 3.5-inch, touch-sensitive screen that consumers use to make calls, navigate their music collection, and write messages on a virtual onscreen keyboard (Wingfield and Yuan, 2007). At the time of the announcement this innovative feature set the iPhone apart from the competition in the wireless-phone market. Apple parlayed the strong reputation of the Apple brand and the iPod's success to enter a lucrative cell-phone market, a step that may ward off a potential threat to Apple as other companies introduce devices that have strong music-storing and playback capabilities. All of these benefits and features of the iPhone come at a price, though: the initial price of the 4GB model was $499 and the 8GB model cost $599. Aimed at the high-end, tech-savvy consumer, who is often a business user, the iPhone is marketed to a sizable, fast-growing market. Before the recent fears of a pending recession, analysts predicted that the aim to sell 10 million iPhones through 2008 would be an attainable goal (Yuan and Bryan-Low, 2007). In addition to the hefty price tag, iPhone customers are required to commit to a two-year wireless agreement with AT&T Inc. to make calls or use the phone's other features.
(One caveat: owners may choose to use the phone as an iPod, in which case they do not need to activate the device through AT&T.) This set-up creates some unique difficulties that Apple and AT&T will have to address. Any potential customer of the iPhone must be prepared to sign a contract with AT&T as their service provider. People who do not like AT&T's service or are not in an area where it is available may be hesitant to purchase an iPhone, which narrows the potential market. The two-year wireless agreement may also be a deterrent for people who are already locked into a wireless contract with a different provider, but at least one study reported that 12 percent of respondents postponed their wireless phone or MP3 player purchase to wait for the release of the iPhone, evidence that this obstacle can be overcome (Sharma and Wingfield, 2007). The contract also means that Apple does not have to deal with network problems and all of the consumer complaints that often go with them, but can instead focus on top-notch hardware and software design. AT&T is not the only company that stands to benefit from the production of the iPhone. The companies that supply the parts and assemble the iPhone, many of which are speculated to be Taiwanese, may enjoy financial success as well. By hiring overseas manufacturing specialists to make the iPhone, both Apple and the suppliers win. The suppliers benefit from the revenue generated by increased business, and Apple is freed from running complicated, labor-intensive manufacturing operations (Dean and Piling, 2007). Additionally, third-party companies that produce accessories for the iPhone stand to profit from its introduction, as customers will pay a premium to protect and show off their new investment. Apple also struck deals with Viacom, Disney, Google, and Yahoo, all strategically selected to bring internet features to the iPhone.
These partnerships were primarily highlighted in iPhone TV ads showing internet search features (Google) or the ability to view movies such as Pirates of the Caribbean (Disney); Apple sagely chose visible and powerful partners for the iPhone.
The Marketing Concept Implementation, Does it Affect Organizational Culture?
Dr. Richard Murphy, Jacksonville University, FL
Dr. Diana Peaks, Jacksonville University, FL
Dr. John Pope, Jacksonville University, FL
The marketing concept has been defined as a philosophy for achieving the organization's goals by determining the needs and wants of target markets and delivering the desired satisfactions more effectively and efficiently than competitors do (Kotler & Armstrong, 2001). According to McCarthy & Perreault (1984), organizations implementing the marketing concept can be said to have adopted a market orientation. The work of Kohli & Jaworski (1990) identified the antecedents of a market orientation and the effect of a market orientation on profitability (Narver & Slater, 1990). Some scholars are beginning to stress the relationship between organizational culture and the marketing concept (Deshpande & Parasuraman, 1986). The marketing concept, including market orientation and service orientation, has been studied since the development of these frameworks (Kohli & Jaworski, 1990; Narver & Slater, 1990). Research on the marketing concept has attempted to link market orientation and service orientation to organizational performance, and it has been found that the marketing concept is highly correlated with performance (Deshpande et al., 1993; Jaworski & Kohli, 1993; Narver & Slater, 1990). Although the marketing concept should be an important business philosophy for small organizations as well, little research has examined its implementation in international organizations while taking organizational size and culture into consideration. This study proposes to examine the influence of organizational size and culture on the implementation of the marketing concept by investigating the international market with a focus on small-scale organizations, specifically small-scale organizations in Central America. The profitability of small-scale organizations depends on an integrated marketing approach including the marketing concept, market orientation, and service orientation (Dadzie et al., 2002).
Market orientation is an implementation technique of the marketing concept and has received a great deal of attention from marketing scholars. This study addresses the small business perspective on the implementation of the marketing concept by drawing on two antecedents of market orientation, organizational size and culture, thus adding to the literature on the interface of market orientation and small business. The development of market orientation is associated with the antecedents and performance consequences of the marketing concept (Kohli & Jaworski, 1990). Recently, a new perspective for viewing the marketing concept has emerged within the marketing literature: scholars are beginning to stress the relationship between organizational culture and the marketing concept (Kohli & Jaworski, 1990; Narver & Slater, 1990). This study was motivated by the sparse research on the factors influencing the implementation of the marketing concept by small-scale organizations in Central America, keeping in mind that organizations implementing the marketing concept can be said to have adopted a market orientation, a key to organizational performance. Accordingly, this study examines the influence of organizational size and culture on the implementation of the marketing concept from the perspective of small-scale organizations in a developing country. The purpose of this investigation is to study the influence of organizational size and culture on the implementation of the marketing concept from the perspective of small-scale organizations in Central America, and to investigate the relationships between market orientation, the influence of organizational culture on market orientation, and the influence of organizational size on market orientation.
As a result, small-scale organizations require an integrated marketing effort and an organization-wide commitment to market orientation to enable successful implementation of the marketing concept. Research on market orientation and the marketing concept has been conducted by Dadzie and Winston (Dadzie et al., 2002; Winston & Dadzie, 2002); however, no research has been completed specifically on small organizations or on the factors influencing the implementation of the marketing concept. Blankson & Appiah-Adu (1998) conducted a study on the relationship between business strategy, organizational culture, and market orientation but focused on the Central American market. According to McCarthy & Perreault (1984), a market-oriented organization is one that successfully applies the marketing concept. The rationale for this study was thus to examine the influence of internal variables on the marketing concept, because organizations that implement a market orientation will improve their performance (Kohli & Jaworski, 1993). This study on market orientation and organizational culture proposed three research questions and hypotheses:
RQ1: Do facets of organizational culture subscales as measured by the Organizational Culture and Orientation survey (OCO) and number of employees predict customer market orientation?
HO1: There will be no predictors of customer market orientation when the subscales of the OCO and number of employees are used as predictors.
RQ2: Do facets of organizational culture subscales as measured by the OCO and number of employees predict competitor market orientation?
HO2: There will be no predictors of competitor market orientation when the subscales of the OCO and number of employees are used as predictors.
RQ3: Do facets of organizational culture subscales as measured by the OCO and number of employees predict interfunctional market orientation?
HO3: There will be no predictors of interfunctional market orientation when the subscales of the OCO and number of employees are used as predictors.
Organizations constantly seek to improve overall organizational performance. Market orientation, regarded as the implementation of a sound marketing concept, has increasingly been researched in general industry settings. The work of Kohli & Jaworski (1990) identified the antecedents of a market orientation and the effect of a market orientation on profitability (Narver & Slater, 1990). Marketing practitioners are simply expected to accept the marketing concept as the gospel of marketing, with few guidelines or techniques to help facilitate implementation (Kohli & Jaworski, 1990). Some scholars are beginning to stress the relationship between organizational culture and the marketing concept (Deshpande & Parasuraman, 1986). For example, if organizational culture exhibits different relationships with market orientation, this information can be used to formulate effective marketing strategies with regard to the culture of the organization. Marketing orientation refers to specific activities of the marketing department (Kohli & Jaworski, 1993), whereas market orientation is an organization-wide generation of, dissemination of, and responsiveness to market intelligence. Building on the work of Kohli & Jaworski (1990), this study addresses the role played by organizational culture and size in the implementation of the marketing concept; these organizational variables are internal variables, and the study examines the extent to which they influence the adoption and implementation of the marketing concept.
Organizational implementation directly corresponds to Kohli & Jaworski’s (1990) implementation phase (called the market orientation), which is the theory underlying this study. The relationship of these concepts to organizational culture is explored, and the study examines the adoption of the marketing concept and market orientation by small organizations by drawing on the work of other scholars.
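The analysis implied by RQ1–RQ3 amounts to a multiple regression of each market orientation dimension on the OCO culture subscales plus firm size. The following is only a minimal sketch of that design: the number of subscales, the sample size, and the synthetic data are illustrative assumptions, not the paper's actual instrument or sample.

```python
# Hedged sketch of the design behind RQ1-RQ3: regress one market
# orientation dimension (e.g. customer orientation) on culture
# subscale scores plus number of employees. Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)
n = 120  # hypothetical number of surveyed small firms

# Predictors: four illustrative OCO culture subscales + firm size.
X = rng.normal(size=(n, 4))                   # culture subscale scores
employees = rng.integers(1, 50, size=(n, 1))  # small-firm head counts
predictors = np.hstack([np.ones((n, 1)), X, employees])  # add intercept

# Outcome: a synthetic customer market orientation score in which two
# subscales matter, one is irrelevant, and firm size carries no weight.
y = 2.0 + X @ np.array([0.5, 0.3, 0.0, -0.2]) + rng.normal(scale=0.5, size=n)

# Ordinary least squares: which predictors carry non-zero weight?
coef, residuals, rank, _ = np.linalg.lstsq(predictors, y, rcond=None)
print("intercept + coefficients:", np.round(coef, 2))
```

Testing each null hypothesis (HO1–HO3) would additionally require standard errors and t-tests on the coefficients, which a statistics package supplies; the least-squares fit above only illustrates the structure of the model.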
Management of Factory Transformability on the Basis of Business Processes
Tobias Heinen, Institute of Production Systems & Logistics, Leibniz University of Hanover
Dr. Detlef Gerst, Institute of Production Systems & Logistics, Leibniz University of Hanover
Prof. Peter Nyhuis, Institute of Production Systems & Logistics, Leibniz University of Hanover
These days, the ability of a factory to change is used to give the factory greater security for the future in a turbulent market environment. It is possible to find an answer to turbulence in the environment by modifying machines or extending production areas. Nevertheless, such a purely technology-oriented view does not go far enough, because it excludes the players involved in the change, i.e. the factory personnel, who have a considerable influence on the success of an efficient transformation process. It is therefore necessary to add the operations and human resources view to the engineering and technology view. The Institute of Production Systems & Logistics (IFA) of the Leibniz University of Hanover, Germany, has therefore developed a procedure to describe and manage enhanced transformability in factories. It is based on the modeling of the factory's business processes. Special emphasis is placed on the human resources and organization within the factory, which are hereby incorporated into the transformation of the factory. These days, businesses are exposed to an environment that is changing ever faster (Wiendahl et al., 2007; and Westkämper, 2006). This is felt particularly hard at the central point of value creation in the business, i.e. the factory. The factory is forced to change constantly in order to adjust to the changing framework conditions. The ability of the factory to change permanently, also beyond predefined limits, is discussed in terms of transformability (Wiendahl et al., 2007; and Dashchenko, 2006). Normally, the discussion takes into account only technology-oriented perspectives, which exclude the factory personnel who must implement and sustain the change. This approach is too shortsighted, because it is precisely the personnel who determine the success of the transformability to a large extent.
The processes necessary for integrating the personnel into an efficient change within the factory can be modeled by means of business processes. Against this background, there first follows an overview of the status of research into transformable factories. After the limits of the technology-oriented discussion of the past have been demonstrated, an approach to socio-technical planning and configuration of the transformability based on business process modeling will be described. The manuscript ends with a conclusion and outlook for future activities. The environment of manufacturing companies in general and factories in particular can be described as turbulent. The reasons for this are, for example, the markets characterized by complexity and dynamism (Seidenschwarz, 2003) in which the rise of direct competition within a short time has led to a drastic decline in market prices. Further reasons are the growing demands placed on businesses with respect to the quality and functionality of products, and the severe changes to the product life cycle curves (Schuh, 2006). These developments are becoming much more transparent through the rapid rise in multimedia options and the ubiquitous availability of information via the Internet (Pümpin and Wunderlin, 2005) – developments that are accelerating in the new millennium (Harigopal, 2006). The context described here provides impulses or reveals a need for action, which can act as a trigger for corporate change (Seidenschwarz, 2003). Many different terms or definitions have been used to describe the ability of a production facility to change, e.g. flexibility (ElMaraghy, 2005; and De Toni and Tonchia, 1998), reconfigurability (Koren et al., 1999; and Cisek, 2005) or transformability (Wiendahl et al., 2007; and Nyhuis et al., 2006). Changeability can therefore be regarded as an umbrella term for different types of change in a factory or a part thereof. Wiendahl et al. 
(2007) have introduced a taxonomy to distinguish these terms. The differentiation is based on the so-called specification or production levels of a factory. These production levels also relate to a certain product level. On the highest level, a production network is responsible for the entire product portfolio of a company. The ability to alter the portfolio as a whole or to redesign the entire supply chain of which the company is a part is called agility. On the next level down, a single production site or a factory usually produces a product. Transformability describes the ability of a factory to change, e.g. to a new product, by repositioning production areas in the factory. A segment within the factory is responsible for the manufacturing or assembly of a so-called subproduct (e.g. assembly group). Flexibility enables segments to change to new subproducts. Diverse kinds of flexibility have been mentioned such as product, variant, quantity or materials-flow flexibility (Abdel-Malek et al., 2000; Haller, 1999; and Kaluza and Blecker, 2005). Within a segment, there are usually several manufacturing or assembly systems or so-called cells. These perform several operations on a workpiece, e.g. a turning operation or a surface treatment. The ability to adjust to different workpieces, e.g. by adding or omitting certain operations, is called reconfigurability. Finally, on the lowest level, change-over ability describes the ability of a single station within a cell to perform a different, predefined manufacturing or assembly feature on a workpiece. The types of changeability of a factory cannot be looked at separately because the higher levels include the lower ones. A factory needs to possess flexible or reconfigurable elements in order to be transformed. The focus of this paper, however, does not exceed the level of factory transformability.
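The five-level taxonomy described above can be summarized in a small lookup structure. This is a sketch for illustration only; the level and changeability names follow Wiendahl et al. (2007) as quoted in the text, while the structure itself is this sketch's own device.

```python
# The changeability taxonomy of Wiendahl et al. (2007): each production
# level paired with its type of changeability and the kind of change it
# covers, from the highest level (network) to the lowest (station).
CHANGEABILITY_TAXONOMY = [
    # (production level, changeability type, scope of change)
    ("production network", "agility",             "alter the product portfolio / redesign the supply chain"),
    ("factory / site",     "transformability",    "change to a new product, e.g. repositioned production areas"),
    ("segment",            "flexibility",         "change to new subproducts (product, variant, quantity, ...)"),
    ("system / cell",      "reconfigurability",   "adjust to different workpieces by adding/omitting operations"),
    ("single station",     "change-over ability", "perform a different, predefined manufacturing feature"),
]

def changeability_of(level: str) -> str:
    """Look up the type of changeability that belongs to a production level."""
    for production_level, changeability, _scope in CHANGEABILITY_TAXONOMY:
        if production_level == level:
            return changeability
    raise KeyError(level)

print(changeability_of("factory / site"))  # transformability
```

The ordering of the list mirrors the text's point that higher levels include the lower ones: a factory (transformability) must contain flexible segments and reconfigurable cells in order to be transformed.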
Agility is dealt with on a strategic corporate level and thus surpasses the focus of factory planning. It is not considered further here. A factory is built up of physical and nonphysical objects. Only these possess the capacity to be transformed. These transformation objects can be assigned both to a distinct production level and to one of the three forms: means, organization, and space (Nyhuis et al., 2006; and Nyhuis et al., 2005). Transformability of means refers to the configurability and reconfigurability of operational resources or processes and embraces all technical systems in a factory (Koren et al., 1999). Organizational transformability renders possible the alteration and adaptation of the organizational structures and processes of the factory. Finally, transformability of space places emphasis on the expansion or contraction of the factory, e.g. it allows the factory site to grow or shrink. The transformability objects can be described in more detail via second-level objects; they can also be measured and evaluated (Heger, 2007).
Valuing an Individual Defined Benefit Pension
Dr. C. Patrick Fort, University of Alaska Anchorage, AK
Accountants can play an important role in valuing marital assets during divorce proceedings. Defined benefit pensions, which may be included in those assets, represent a difficult challenge. This paper provides a step-by-step model for valuing a defined benefit pension using several Excel financial functions. It is a sad but indisputable fact that a significant number of marriages end in divorce. Accountants may be engaged to value the assets that will be split between the parties. For many couples, one of the most valuable assets the husband or wife may possess is a pension. Valuing a defined contribution pension usually involves no more than getting a current statement of the investment accounts. Valuing a defined benefit pension, however, can be quite difficult. The court or the couple may decide to split the benefits from the pension, or the pension may be treated as another marital asset. This paper presents a relatively simple method for valuing a defined benefit pension. Defined benefit pensions usually provide a monthly annuity to retirees and may also include health benefits. The amount of the pension annuity is usually based on a formula that considers years of service and salary levels: the longer the employee works and the higher the ending salary, the greater the pension payments. While defined benefit pensions are becoming increasingly rare in the corporate world, they are still quite common for public employees. Bob, a 45-year-old teacher in the local school district where he has worked for 15 years, is getting divorced. Bob has been married during the full term of his employment. Bob has a defined benefit pension that will pay him 2% of the average of his three highest earnings years for each year of service. His pension also includes medical coverage. Bob is eligible to collect his pension and health benefits at age 55. In determining the value of Bob’s pension, certain questions have to be answered.
The first question is: what are the relevant cash flows? In Bob’s case there are two: the pension payments, which will go directly to him, and the medical insurance, which is paid to the provider. Bob’s pension pays him 2% of the average of his highest three years for each year in the system, and Bob has been in the system for 15 years. If the average of Bob’s three highest years is $50,000, his pension will pay him $15,000 per year, or $1,250 per month. Bob also has health benefits, and finding an appropriate valuation for those benefits can prove to be a little more challenging. The appropriate cash flows are the plan’s current monthly payments for retirees, although an alternative might be the monthly value of the current employee health benefits, assuming they are similar. This amount can be obtained by calling the pension administrator, obtaining a copy of the plan’s actuarial report, or, especially for government employees, searching for the pension web page. Pension plan actuarial reports may also contain other useful information, such as salary and health benefit inflators and model discount rates. For this example, assume the actuarial report values retiree health benefits at $800 per month ($9,600 annually) for pre-65 retirees and $350 per month ($4,200 annually) for retirees 65 and older (and thus Medicare eligible). Exhibit 1 shows the pension calculation and health benefits. The next question that needs to be answered is how long Bob will live to collect his pension benefits. One way to calculate life expectancy is to use actuarial tables, which can factor in gender, race, and lifestyle considerations like smoking. A more user-friendly way to use these actuarial tables is through the various web sites that take several factors as input, such as diet, education, and family history, to generate a life expectancy.
A much simpler method is to use the most current Statistical Abstract of the United States, which shows life expectancy at different ages and distinguishes between genders and races. According to the 2008 Statistical Abstract, a white male of 45 can expect to live 33.4 more years (U.S. Census Bureau, 2008). Therefore Bob can be expected to live until sometime in his 78th year. Future earnings and benefits must be adjusted for the effects of inflation, and those amounts must be discounted to their present value. In this example, different inflation rates have to be used for pension and health benefits, because health costs are expected to increase at a higher rate than general inflation (Kaufman and Stein, 2006). The same discount rate, that is, the interest rate used to bring future cash flows to present value, can be used for both. Certain assumptions have to be made regarding which rates to use. A simple and defensible discount rate is the change in the national Consumer Price Index (CPI), which is the national inflation rate. The CPI will also be used as the inflator for Bob’s pension. The Bureau of Labor Statistics (BLS) creates the CPI and presents it in several forms. Inflation is not a static number; it fluctuates from year to year and over time. The BLS website (http://www.bls.gov/home.htm) has an inflation calculator that converts CPI data into dollar amounts. Exhibit 2 shows how the inflation calculator converts the buying power of one 1980 dollar into $2.52 in 2007.
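The valuation steps described above (inflate each year's benefit, then discount the result back to the present over Bob's expected collecting years) can be sketched outside of Excel as well. The specific rates below (3% CPI, 6% health-cost inflation) are illustrative assumptions only; in practice they would come from the plan's actuarial report or BLS data.

```python
# Hedged sketch: present value of Bob's defined benefit pension.
# Figures for the benefit amounts and ages come from the example in
# the text; the inflation and discount rates are assumptions.

def present_value(cash_flow_today, growth, discount, start_year, end_year):
    """Sum the discounted value of an annual cash flow that grows at
    `growth` per year and is discounted at `discount`, paid from
    `start_year` through `end_year` (years counted from today)."""
    return sum(
        cash_flow_today * (1 + growth) ** t / (1 + discount) ** t
        for t in range(start_year, end_year + 1)
    )

current_age, retire_age, life_expectancy = 45, 55, 78
cpi, health_inflation, discount = 0.03, 0.06, 0.03  # assumed rates

years_to_retire = retire_age - current_age       # first payment: t = 10
years_to_65 = 65 - current_age                   # Medicare age:  t = 20
years_to_end = life_expectancy - current_age     # last payment:  t = 33

# Pension: $15,000/yr (2% x 15 years x $50,000 average), inflated at CPI.
pv_pension = present_value(15_000, cpi, discount, years_to_retire, years_to_end)

# Health benefits: $9,600/yr before age 65, $4,200/yr after (Medicare
# eligible), both inflated at the faster health-cost rate.
pv_health = (present_value(9_600, health_inflation, discount, years_to_retire, years_to_65 - 1)
             + present_value(4_200, health_inflation, discount, years_to_65, years_to_end))

print(f"PV of pension payments: ${pv_pension:,.0f}")
print(f"PV of health benefits:  ${pv_health:,.0f}")
```

Note one consequence of using the CPI as both the pension inflator and the discount rate: the two effects cancel exactly, so the pension's present value is simply $15,000 times the 24 expected payment years ($360,000), while the health benefits, inflated faster than the discount rate, grow in real terms.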
Introduction of the Business Judgment Rule into Croatian Legislation
Dr. Hana Horak, University of Zagreb, Zagreb, Croatia
Kosjenka Dumancic, University of Zagreb, Zagreb, Croatia
The business judgment rule has been introduced into Croatian legislation by the latest changes to company law. It is being introduced simultaneously with the introduction of the monistic system, and it represents continuing harmonization with the legal acquis of the EU, which has been implemented intensively since 1993, when the Commercial Companies Act was adopted, signifying a thorough change in the domain of legal entities in the Republic of Croatia and creating the conditions for a market economy. In this paper, the business judgment rule is analyzed and compared with the earlier regulated standard of business diligence as well as with the developed legal practice in this area within the US legal system. The paper outlines an analysis of the existing condition and the debates on the application and introduction of the business judgment rule into Croatian judicial practice, which, after more than a decade, has matured sufficiently to introduce the monistic (one-tier) organization of company bodies and all legal rules arising therefrom. The authors outline the business judgment rule in Croatian legislation and compare it with the business judgment rule of the American legal system. The legal standard of the business judgment rule is introduced into the Commercial Companies Act (Narodne novine – the Official Gazette of the Republic of Croatia, No. 111/93, 34/99, 52/00, 118/03, 107/07) (hereinafter: CCA) with the last amendment (2007), alongside the already existing standard of business diligence, which may be compared with the American legal standard of „duty of care“.
The introduction of the business judgment rule created the possibility for members of the management and supervisory boards in the dualistic system, and of management boards in the monistic system, to be absolved from liability for their business decisions if they acted on an informed basis, in good faith, and in the honest belief that the action taken was in the best interests of the company and, while doing so, did not violate obligations concerning the company’s business managing methods. Thereby, Croatian company law has continuously been harmonized with reforms in EU company law. One of the reasons for introducing the business judgment rule into the CCA is the accompanying choice between monistic and dualistic management systems in joint-stock companies; the system was exclusively dualistic prior to the latest changes and amendments of the CCA. In this way, progress in the development and use of all instruments of corporate management and the harmonization of EU rules in national legislation has been achieved. The monistic management system foresees only one management body – the management board. The monistic system is characteristic of the Anglo-American sphere and is thus accepted in the United States of America, Great Britain, Ireland, Spain, Luxembourg and Sweden. Unlike the dualistic system, it is characterized by the existence of only one body (the management board), within which the executive members (executive directors) are distinguished from the non-executive members (non-executive directors). Executive members manage the daily business activities of the company, while non-executive members supervise the executive members of the management board. Dualistic management systems foresee the existence of two bodies: the supervisory board, and the management board as the body for managing the company’s business activities and representing the company.
The dualistic system is characterized by the existence of a separate management board that manages the business activities of the company and a supervisory board that performs surveillance over the activities of the management board. This system is characteristic of German law and, besides Germany, it is also applied in Austria and in the legislation of East-European countries. There is also the possibility of a mixed system that gives a choice between the monistic and dualistic systems; it is used in Italy, Belgium, France, Portugal and Slovenia. Reforms and the application of the business judgment rule are based on market-oriented mechanisms, in the sense that the management in the monistic system, i.e. the management board in the dualistic system, is given the possibility to make decisions that are beneficial both to shareholders and to all investors, i.e. interested stakeholders as an interest-influential group within and around the company (Tipurić, 2006). The questions that arise are numerous, and one of them is how this standard will come to life in the judicial practice of the Republic of Croatia. It should be emphasized that differences in the ownership structure of a company have various consequences for corporate management: it matters whether the majority shareholders have governing influence over the management, i.e. the management board, while on the other hand a concentrated ownership structure may cause a problem, since the interests of majority and minority shareholders are not the same. Croatian law recognizes business diligence, which partially corresponds to the American standard of „duty of care“. According to American authors and literature, „duty of care“ and „duty of loyalty“ as traditionally understood have well-defined ambits (Eisenberg, 2006).
The duty of care requires a manager who is not self-interested to perform his duties in a manner that he reasonably believes to be in the best interests of the corporation, with a view to enhancing corporate profit and shareholder gain. In that connection, the standard of conduct under the duty of care requires a manager to act reasonably – with due care – in informing himself concerning a proposed decision, and in making the decision itself (Eisenberg, 2006). At the same time, if the business judgment rule does anything, it insulates directors from liability for negligence. The business judgment rule does so by providing a presumption that the directors or officers of a corporation acted on an informed basis, in good faith, and in the honest belief that the action taken was in the best interests of the company. As a result, even clear mistakes of judgment will not result in personal liability (Bainbridge). As a result of the business judgment rule, the duty of care protected shareholders only against extreme cases of managerial incompetence. This standard was changed by the decision in Smith v. Van Gorkom, in which the Delaware Supreme Court held that the directors breached their duty of care by failing „to inform themselves of all information reasonably available to them“ and which may have led to further shareholder gains in connection with a sale that admittedly netted shareholders a substantial premium over market prices (Lubben and Darnell).
Consumers at the Age of 65 and Over
Dr. Fatma Zehra Savi, Kastamonu University, Turkey
Nadir Ateşoglu, Kastamonu University, Turkey
Murtaza Onal, HalkBank, Turkey
Studies of consumers in Turkey have focused on young consumers. However, when demographic changes in the world and in our country are taken into account, a need for a scientific study of the senior citizens’ market has emerged. The purchasing habits of senior citizens also deserve more study, because their attitudes about the market, their views and their experiences are changing as much as those of other parts of the population. In addition, senior citizens make their own decisions about consumption. Based on this fact, we studied the consuming habits of consumers 65 years of age and older in Turkey. The data were compiled through a questionnaire administered to 150 people over 65 years of age. The software program SPSS 11.5 was used to analyze the data. According to the results of the study, senior consumers’ most important priority is their health, and the most important characteristic of products and services for senior citizens is reliability. Senior citizens spend most of their time with their children and spend most of their income on their grandchildren. The world is facing demographic changes. The world’s population is rapidly growing older. There are more than 600 million people over 65 in the world. This figure is expected to rise to 2 billion in the coming 30 years, and this trend is predicted to continue for 30 to 40 years. This means that consumers who are 65 years of age and over will determine what they will eat, wear, read, and drive, how they will entertain themselves, and where they will live and travel. In a traditional sense, marketing experts prefer to focus on young people. However, since the beginning of the 1990s, especially in Europe and the USA, important changes have taken place in the buying patterns of older consumers. A growing number of companies have realized the importance of senior consumers and have initiated marketing programs for reaching them. Another trend is the growing importance of products and services designed to reach this market.
According to the Turkish Statistical Institute, Turkey’s population was 70 million in December of 2007. Of this population, 7.1% is 65 or over. This percentage is expected to rise to 17% in 2025 and to 30% in 2050. When Turkey’s large population and its demographic features are considered, senior people will comprise a big market. Although the senior population in the USA and Europe has attracted much academic attention, this has not been the case in Turkey. Senior consumers are usually defined as people 65 or over. The social, economic, physiological and physical needs of these people change as they age. Decreasing income, increasing health problems, medical costs and the degradation of social relations become important (Bilgin, 1989). For this reason, marketing experts must consider these demographic changes. Senescence is the “decrease in physical strength and mental abilities” (Mega Larousse-Dictionary and Encyclopedia, 1986). Studies of seniors differ according to age group. Some researchers define seniors as people who are 55 and over, but senescence is thought to begin at age 65 (Gilly & Zeithaml, 1985). There is no single definition of the senior consumer, because age is not a direct and valid criterion. Furthermore, many variables are related to senescence. Because senescence is a multidimensional concept, which means people are becoming biologically, emotionally and socially older, no age limit will yield a meaningful and valid definition. In short, people don’t always seem or act as though they are old, so in this respect the concept of senescence requires flexibility (Moschis, Lee & Mathur, 1997). When the literature about senior consumers is examined, it appears that senescence is defined in terms of calendar age: the period after 55 to 60 years is called senescence (Baymur, 1984). In fact, senescence depends on each person’s own judgment; an individual can be called a “senior citizen” according to his or her own feelings and behavior (Bilgin, 1989).
In society, there is a tendency to think of senior people as dependent, inefficient, chattering, forgetful and aggressive. However, many seniors maintain their creative, productive and constructive abilities despite their health problems (Geçtan, 1984), and thanks to advances in medicine, their numbers are growing. Markets are being redefined according to three important factors. The first is that the senior market is growing because life spans are increasing. The second is the low birth rate, which has decreased the number of young consumers. The third factor is that Baby Boomers (born 1946-1965) are growing older (Schewe & Balazs, 1992). Business and marketing executives have not yet accepted the concept of senior consumers because they cannot accept senescence (Wolfe, 1997). The reality, however, is that the number of senior consumers is increasing. This population will affect business enterprises in the following ways (Moschis, Lee & Mathur, 1997): Companies have to understand the needs of senior people and how they respond to different marketing techniques. Business enterprises have started to meet the needs of senior consumers by developing new products or improving existing ones. Senior citizens have become an economic force who can demand products and services that suit their needs and life styles. Business enterprises are affected by an aging work force, which has a variety of implications for workers’ incomes, training, old-age care programs and pensions. Businesses have started to realize the need to keep senior employees. As the population ages, younger workers have to look after elderly relatives. Service businesses, financial institutions, hotels and travel companies are more willing to cater to senior citizens and to see them as consumers. Many businesses in the USA have launched campaigns for senior citizens. For instance, the Kroger supermarket chain has a club for people who are over 59 and on a fixed income.
There is a special shopping program and discounts for the members of this club (Schiffman & Kanuk, 1987). In Turkey, Turkish State Railways has a campaign for senior travellers: tours and special discounts are offered to citizens who are 60 and over under the motto “The ones who stay young”. Senior consumers will provide good marketing opportunities for businesses in the future. Businesses and marketing experts must therefore follow the demographic changes and understand the behaviors that create market demand. One reason for businesses’ lack of interest is that there has not been enough marketing research on senior consumer behavior (Loudon & Della Bitta, 1988). Marketing researchers have paid more attention to younger consumers at the expense of elderly ones (Schiffman & Kanuk, 1983).
Strategic Evolution: Fact or Facade?
Faiza Muhammad, Lahore University of Management Sciences, Lahore, Pakistan
The ‘strategic evolution’ construct is presented as a true depiction of the reality of organizational change. It establishes that, owing to the connectionism within complex adaptive organizational systems, all changes not only incorporate systemic memory, accumulative learning and path dependencies but also carry environmental implications, strategic tones and cascade effects. The adaptive competence of a firm is, therefore, seen as resting on its ability to balance the structure and spontaneity of change programs, which in turn helps it achieve the rare mix of system fit and flexibility. A TST-HPWS framework of strategically evolving organizations that can optimally achieve this mix is also presented. The framework proposes a system design that not only ensures tight internal/horizontal and external/vertical fit but also ensures dynamic tweaking of this fit, with increasing agility. The pervasive yet paradoxical nature of organizational change has enthused a juxtaposed surge in the change management field, from perspectives as diverse as psychology, philosophy, the social sciences, complexity science, strategic management and organizational studies. Not surprisingly, then, the existing literature presents theories and interpretations of organizational change that appear conflicting and inconsistent, on the surface at least (Van de Ven and Poole, 1988, 1995). In addition, the conceptual realm within change management research comprises a complex labyrinth of classifying constructs such as types (Bartunek and Moch, 1987), levels (Weick and Quinn, 1999), approaches (Druhl et al, 2001), scope (Nadler and Tushman, 1989, 1995a), pace (Miller & Friesen, 1984; Gordon et al, 2000), order (Bateson, 1972) and linearity (Mezias et al., 1993) of change. More often than not, however, these criteria reinforce a single taxonomy tagging organizational change as either strategically intended or continuously emergent.
This conception of change is supplemented with prolific labels such as radical versus incremental (Mezias et al, 1993; Burnes, 1992; Johnson and Scholes, 1993; Goodstein and Warner, 1997; Dewar and Dutton, 1986; Ettlie, Bridges, and O'Keefe, 1984; Nord and Tucker, 1987; Watzlawick et al, 1974), episodic versus continuous (Weick and Quinn, 1999; Pettigrew et al., 2001), morphogenetic versus morphostatic (Smith, 1990), strategic versus non-strategic (Pettigrew, 1987; Rajagopalan and Spreitzer, 1996), planned versus unplanned (Wilson, 1992; Dunphy, 1996), top-down versus bottom-up (Druhl et al, 2001), first-order versus second-order (Bateson, 1972; Torbert, 1989; Watzlawick, 1978), piecemeal versus quantum (Miller and Friesen, 1984), revolutionary versus evolutionary (Tushman and O’Reilly III, 1997), frame-breaking versus frame-bending (Tushman, Newman and Romanelli, 1986) and reconfiguring versus converging change (Gordon et al, 2000). The either-or constriction of rhythmic possibilities imposed by this fragmented characterization of organizational change is quite ironic, though. More so because conceptualizing change as ‘compact and sporadic epochs of divergence interrupting otherwise steady periods of convergence’ (Mezias et al, 1993; Tushman & Romanelli, 1985; Bacharach et al, 1996; Gersick, 1991, 1994; Miller, 1990; Miller & Friesen, 1980) not only impedes a thorough understanding of its rather holistic nature (Druhl, 2001; Frost, 1993; Pettigrew, 2001) but also carries obvious repercussions for the effectiveness of corporate change attempts. This paper therefore presents a critique of the enforcement of a dichotomous domain segregation between notions of continuity and strategic orientation in organizational change, by highlighting their individually specific inadequacies as well as their mutual interdependence and joint significance.
The purpose of the paper, then, is to suggest a departure from differentiation in dominant change approaches and provide a theoretical framework to facilitate this departure. The primary contribution of the paper is a new unified perspective on organizational change to guide future research, along with a framework of guidelines for organizations undergoing change in turbulent environments. The impetus for this research draws on five ground arguments encompassing both academic trends and corporate facts: 1) Most corporate change attempts meet with very limited success, i.e., about 70 percent, or two-thirds, of all change efforts fail (Druhl, 2001); 2) The accelerating environmental complexity of the new millennium makes organizational change a research issue of greater precedence and salience than ever (Greenwood & Hinings, 1996; Grinyer & McKiernan, 1990; Van de Ven, 1992); 3) The literature on organizational change has recently been critiqued for its acontextual nature and under-representation of actual corporate focus on change (Pettigrew, 1985, 2001). This has in turn revived academic interest in change research and triggered a new curiosity about the pace and sequencing of change actions (Gersick, 1994; Kessler & Chakrabarti, 1996; Weick & Quinn, 1999); 4) Owing to their restricted outlook on the organizational journey, all existing approaches to change are susceptible to partiality (Druhl, 2001). Still, even the most contemporary theoretical developments in change management remain grounded in these dubious change approaches. 
Consequently, future research requires embracing a new comprehensive perspective on change (Oswick, 2005; Burke, 2002; Cummings and Huse, 2001; Dawson, 2002; Olson and Eoyang, 2001; Senior, 1997), based on a detailed contemplation of existing approaches; 5) Discerning between existing change approaches has become increasingly difficult due to the ability of each approach to extend and cause implications similar to the other (Cao et al., 2000). What the field therefore needs is more comprehensive frameworks, categorization schemes and models, and more effort by theorists to consolidate existing knowledge (Woodman, 1989). The first half of the paper delves deeper into the intricacies and inherent partialities of strategic and evolutionary changes. The second half explains their mutual compliance, linkages and interdependencies, based on several debates and constructs in organizational science. The concluding section presents a guiding framework for a flexible system design, ensuring continuous and dynamic renewal of the organizational strategic stance. Based on the Kuhnian paradigm scheme of normal vs. revolutionary science, the prevalent categorization of corporate change approaches primarily distinguishes between assuming incremental evolution and strategically taking on archetypal reconfigurations (Amis et al., 2004). Evolutionary change comprises gradual, cumulative and iterative, i.e., continuous (Brown et al., 1991, 1997; Gordon et al., 2000; Pettigrew, 1985, 1987; Tsoukas, 1996) calibration of organizational routines (March, 1994) and constituents toward an era of convergence (Quinn, 1978). In this sense, evolutionary change strives for rectification of organizational elements that are not aligned with the overall system design (Nadler & Tushman, 1995). However, the employed modifications are mostly emergent, i.e., implemented without any a priori enactment plan (Orlikowski, 1996) or formal guideline.
Web Advertising Beliefs and Attitude: Internet Users’ View
Norzalita Abd Aziz, Universiti Kebangsaan Malaysia, Bangi
Dr. Norjaya Mohd Yasin, Universiti Kebangsaan Malaysia, Bangi
Dr. Sharifah Latifah Syed A. Kadir, Universiti Malaya, Kuala Lumpur
The digital age has already made significant changes to each of the elements of the promotion mix. Companies increasingly see the Internet as an important medium through which advertising messages can be directed towards consumers. In the 21st century, consumers have more control over advertising exposure with web advertising because they can select how much commercial content they wish to view. However, very little is known about consumer beliefs about Web advertising, attitudes toward Web advertising, or consumer behaviour associated with Web advertising in Malaysia. By adopting and applying Korgaonkar, Silverblatt and O’Leary’s measurements, this paper explores Web users’ beliefs, attitudes and use of Web advertising. The descriptive statistics, cross tabulation and factor analysis results, as well as the implications of the findings, are discussed. The digital age has already made significant changes to each of the elements of the promotion mix. Companies increasingly see the Internet as an important medium through which advertising messages can be directed towards consumers. Strauss and Frost (2001) explained that marketing communications, consisting of sales promotion, public relations, direct marketing and advertising, comprise an important part of e-commerce strategy, where electronic marketers use these tools to create brand awareness, preference and selection. Web advertising appears to be the most important influence on the future of the advertising industry within 10-15 years (Ducoffe, 1996). The Internet is a communication medium, allowing companies to create awareness, provide information and influence attitudes. Advertising on the web can be useful in creating awareness of an organization as well as its specific product and service offerings. It also offers the opportunity to create awareness well beyond what might be achieved through traditional media (Belch and Belch, 2001). 
In the 21st century, consumers have more control over advertising exposure with web advertising because they can select how much commercial content they wish to view. Consumers can gather pricing information, participate in product design, explore promotions, arrange delivery and sales, and receive post-purchase support. Advertising is one of the main approaches firms employ to manage demand risk by raising awareness of their products. In the mid-1990s, the World Wide Web emerged as a new tool for reaching consumers, providing a variety of technologies for influencing opinions and wants (Boudreau and Watson, 2006). There is still minimal published research available on consumers’ evaluation of web advertising. This information is valuable for marketers in making decisions about their media mix. It is also important for academicians in further understanding how local web users perceive web advertising. Beliefs concerning specific attributes or consequences that are activated and form the basis of an attitude are referred to as salient beliefs. Thus, it is very important for marketers to identify and understand these salient beliefs. Recognizing that salient beliefs vary among market segments, demographic traits and consumption or usage situations, and over time, will help marketers develop a suitable strategy for their advertisements. The primary goal of this study is to explore and gain understanding of internet users’ beliefs about, and attitude formation toward, web advertising in Malaysia. It should be noted that no previous research has attempted to demonstrate how Malaysian internet users perceive web advertisements, despite the increasing number of consumers getting on the web. Kotler (2001) defined advertising as any paid form of nonpersonal presentation and promotion of ideas, goods or services by an identified sponsor. 
Web advertising consists of impersonal commercial content paid for by sponsors, designed for audiences and delivered by video, print and audio. Its broad form ranges from corporate logos, banners, pop-up messages, email messages and text-based hyperlinks to official web sites (Ducoffe, 1996; Schlosser et al., 1999). A more effective form of Internet advertising recognizes that Internet traffic is concentrated around a relatively small number of high-content sites, or around portals, the access gateways Internet users take as their starting point to surf the net. These sites are particularly attractive to advertisers, who are increasingly anxious to market their products on them. A banner is advertising space on a web site that carries advertisements; it is often animated in order to attract users to click through to the relevant page on the advertiser’s own web site (O’Connor and Galvin, 2001). A web ad can be standalone or part of a larger web site that may also serve other functions such as customer support, distribution and social service. Many e-marketers prefer customers or web users to visit their web ad because such visits increase, or build, traffic at their web sites. The complexity of the web site background influences consumer attitude, and simpler web sites were found to have significantly more positive impacts on consumer attitudes toward the advertisement and brand (Bruner and Kumar, 2000). Given the crucial role of advertising in informing and persuading consumers, it is categorized as an important part of electronic communications strategy. Novak and Hoffman (1997) classified advertising on the web into banner advertisements and target advertisements. They defined a banner advertisement as a small rectangular graphic image that is linked to a target advertisement and serves as a lead-in inviting the visitor to surf and find out more information. 
A target advertisement is a series of linked web pages that consumers access by actively clicking on a banner advertisement. A target advertisement can also be a single web page; it does not necessarily need to be a series of web pages linked to the banner advertisement. Banner advertising is the most common and accepted form of paid advertising on the Internet. Its purpose is to create small live pointers to the promotional web site. The culture of the Internet is still predominantly opposed to advertising, holding that it does not create value, is not relevant and creates a nuisance (Dann and Dann, 2004). It is considered ineffective because of the low click-through rates for banner ads, lack of useful information, dullness, frequently offensive content and tendency to confuse consumers (Gaffney, 2001; Mathews, 2000); moreover, disruption of flow by banners, pop-ups and other forms can create negative attitudes towards ads (Rettie, 2001). Briggs and Hollis (1997) and Gallagher et al. (2001) believed web advertising to be the least effective medium. However, Gaffney (2001) also offered a contrasting view of online advertising, indicating that it is considered effective in generating sales. O’Connor and Galvin (2001) support this view; they indicated that a banner ad can build brand awareness and perception even when users do not click on it. Schlosser et al. (1999) added that people generally trusted the commercial content of an Internet advertisement more than that of an ordinary advertisement. Their findings showed that respondents felt more comfortable purchasing from a phone number listed in an Internet advertisement than from one listed in a traditional advertisement.
‘How We Do Things Around Here’: Implications of Corporate Culture On Job Performance
Raida Abu Bakar, University of Malaya
Dr. Abdul Latif Salleh, University of Malaya
Lee Chee Ling, University of Malaya
The effects of corporate culture on organisational performance have taken centre stage, as most concerns about organisations ultimately come down to the profit margin and success they can achieve. Hofstede and Bond (1988) suggest that the power behind the rise of the East Asian economies, which have outperformed their Western counterparts, has more to do with their cultural practices. This study identifies the influence of corporate culture on job performance among executives in Malaysia. The findings confirm that competitive culture shows a significant relationship with job performance. This finding can assist managers in defining appropriate management development in their organisations to improve employees’ job performance and ultimately organisational performance. Organisational culture has caught the interest of researchers since the 1980s as a result of its effects and potential impact on organisational success (Sheridan, 1992; Clement, 1994; Abdul Rashid et al., 2003; Li, 2004). Lim (1995) comments that a major obstacle in research on organisational performance and culture appears to be the application of the term “organisational culture”. The author reports that the definitional problem, as well as difficulties in the measurement of organisational culture, seems to have contributed to the inconclusiveness of the research. This remark is supported by O’Reilly (2001), who explains that failure to clearly define culture will result in confusion, misunderstanding and conflict about its basic function and importance. Indeed, Abdul Rashid et al. (2003) argue that identifying the nature and type of corporate culture is important for eliciting the key values, beliefs and norms in an organisation that have been proven to give much impetus to its success and superior performance. 
The beginnings of formal writing on the concept of organisational culture are traced to Pettigrew (1979) (see also Siew Kim and Yu, 2004). Pettigrew (1979) introduces and illustrates some of the more cultural and expressive aspects of organisational life that are widely studied in sociology and anthropology, approaching them through the concepts of symbol, language, ideology, belief, ritual and myth. Each of these is symbolic in the special sense that it taps into, and is expressive of, the “deeper layers of meaning” inherent in all human forms of organisation and in culture itself (Dandridge et al., 1980). Dandridge and colleagues argue that the field of organisational behaviour has thus far mainly studied the surface structure of organisations, whereas organisational symbolism, which they regard as the deep structure of organisations, adds a complementary and important view of subjective vitality within them. Across the many works in anthropology, sociology and organisational behaviour, many definitions of culture have arisen, but they ultimately imply the same thing. Researchers on corporate culture have proposed different forms or types of cultures. Deshpande and Webster (1989) define organisational culture as the pattern of shared values and beliefs that help individuals understand organisational functioning and thus provide the norms for behaviour in the organisation. Other researchers, such as Van de Post et al. (1998), hold that culture is to the organisation what personality is to the individual. In other words, culture is a hidden but unifying force that provides meaning and direction. It is also a system of shared meanings, or of beliefs and values, that ultimately shapes employee behaviour. At the lower levels in the organisation, the performance and achievement of an individual are taken into consideration. 
However, as an individual advances up the corporate ladder, how well he or she “fits in” with the organisational culture becomes increasingly important (Wallach, 1983). Meanwhile, O’Reilly (2001) defines culture as a form of control that can be thought of as a potential social control system: … if one wants to be accepted, one has to try to live up to others’ expectations. With formal systems, people often have a sense of external constraint, which is binding and unsatisfying. With social controls, however, one often feels as though one has great autonomy, even though paradoxically one tends to conform much more. According to Deshpande and Farley (1999), there are four types of corporate culture, namely competitive culture, entrepreneurial culture, bureaucratic culture and consensual culture. Competitive culture is characterised by an emphasis on competitive advantage and market superiority; entrepreneurial culture emphasises innovation and risk taking; bureaucratic culture is characterised by internal regulations and formal structures; and consensual culture emphasises loyalty, tradition and internal focus. Therefore, the first step toward the continuing success of an organisation would be for managers to determine and ensure that an appropriate and specific type of culture, or combination of cultural types, is developed throughout the organisation. The results of Deshpande, Farley and Webster’s (1993) research show that firms with cultures that are relatively responsive (market) and flexible (adhocracy) outperform those with consensual (clan) and internally oriented, bureaucratic (hierarchical) cultures. Similar observations were made by Deshpande and Farley (1999) and Denison (1984), in which entrepreneurial and competitive cultures perform better than consensual and bureaucratic cultures. The latter types of culture are more inward-looking and closed than the former, which are more innovative and risk taking. 
Pool (2000) explains that organisations with a constructive culture embrace creativity, which promotes quality over quantity of work. This type of culture matches well with Deshpande and Farley’s (1999) entrepreneurial culture and thus further suggests that organisations should cultivate an entrepreneurial type of culture. It is important for management to consider the positive effects of such a culture in promoting work outcomes such as job performance and job commitment. The above researchers confirm that business performance is a complex, multi-causal matter that depends on internal factors of the organisation as well as strategy. Although researchers have made theoretical and methodological advances in understanding the development of cultural values in organisations over the past two decades, there has been less progress in comparing cultural effects on employee behaviour across organisations (Sheridan, 1992). As described by Harris and Mossholder (1996), the influence of individuals’ congruence with an organisation’s culture on their affective orientations toward the organisation is not uniform across job satisfaction, job involvement and job turnover intention. A similar observation is made by Li (2004) in a study of the effect of organisational culture and leadership behaviours on organisational commitment, job satisfaction and job performance at small and medium-sized firms in Taiwan.
The Role of Perceived Equity in Relationship Quality and Relationship Outcomes: An Investigation of Retail Loyalty Programmes in Malaysia
Nor Asiah Omar, University Tun Abdul Razak, Malaysia
Dr. Rosidah Musa, University Technology Mara, Malaysia
This paper sets out to determine the effects of programme perceived equity and relationship quality on relationship outcomes. Building upon the extant literature, relationship quality is conceptualised as a higher-order construct comprising programme satisfaction and programme trust. This study endeavours to impart insightful explanations of the influence of relationship quality on relationship outcomes based on two abstraction levels of loyalty: loyalty to the programme and loyalty to the store. The data set utilised in this study was collected via a drop-off and collect technique. The consumption behaviour of 400 retail loyalty programme members in Malaysia was analysed. A comprehensive conceptual model was developed and tested by structural equation modelling using the AMOS 6 programme. The findings unveil that programme satisfaction significantly influenced loyalty towards the programme but not store loyalty, whereas programme trust had significant impacts on both programme card loyalty and store loyalty. The discussion of these findings reveals important directions for future research and management practice. Relationship marketing, which specifically emphasises the management of customer relationships in business, is not a new phenomenon (Berry, 1995; Sheth and Parvatiyar, 1995). Evidently, it has had a major impact on marketing activities, including increased customer cooperation, increased purchases and decreased customer defection (Morgan and Hunt, 1994). The growing interest in relationship marketing has led to numerous attempts to measure the quality of a relationship. Fundamentally, a relationship may be seen to exist when both parties mutually perceive that the relationship exists, and the relationship must be characterised by a special status (Barnes, 1997). Relationship quality has been found to have an influence on several important relationship outcomes across business-to-business and business-to-consumer domains. 
For example, a buyer’s perception of relationship quality has a significant and positive effect on relationship outcomes such as purchase intentions (Hewett et al., 2002) and word of mouth and loyalty (Hennig-Thurau et al., 2002). Cronin and Taylor (1992) postulate that the link between relationship quality and profitable outcomes is strongly consistent with the characteristics of attitude measures. In a similar vein, Huntley (2005) demonstrates that when the quality of the relationship is high, customers are more willing to recommend the seller’s offerings to colleagues and are likely to purchase more from the seller. Generally, the features of relationship marketing indicate that loyalty programmes are likely to prove an effective tool within a relationship marketing framework. Retailers have viewed retail loyalty programmes as a vehicle for developing a relationship with their customers (Worthington, 1990). Cohen and Hunt (1994) contended that the dominant attribute of loyalty programmes is their long-lasting effect, which has considerably wider influence than short-term promotional activities. According to Schiffman and Kanuk (2004), many firms have established relationship marketing (or loyalty) programmes to foster customer loyalty towards their products and services. Store loyalty programmes were widely developed during the mid-1990s; this is thought by many to show that retailers have keenly embraced the idea of developing closer relationships in their fight for the customer (Pressey and Mathews, 1998). Although loyalty programmes are widely used, little empirical research has investigated whether such programmes actually contribute to loyalty (Yi and Jeon, 2003; Sharp and Sharp, 1997), and little is known about how loyalty develops (Morais, Dorsch and Backman, 2004). According to Yi and Jeon (2003), there is relatively little empirical research concerning the mechanisms by which loyalty programmes operate. 
Some researchers state that most loyalty programmes are in fact saving programmes in disguise that do not contribute to the attitudinal component of loyalty and thus do not create sustained loyalty (Bellizzi and Bristol, 2004; Uncles, Dowling and Hammond, 2003). Indeed, Bellizzi and Bristol (2004) assert that it has been empirically shown that holding a loyalty card is not associated with store loyalty. Morais et al. (2004) strongly assert that loyalty programmes should focus on investments of intangible and particularistic resources in their most valuable customers. Recently, a few researchers (see Bridson, Evans and Hickman, 2007; Hallberg, 2004) have suggested that companies might benefit most by focusing on non-price-related special treatment benefits. For example, Hallberg (2004) suggested that loyalty programmes that go beyond simple financial rewards and develop personal relevance and emotional response win big in terms of bonding, while those that do not develop personal relevance and an emotional response enjoy a smaller bonding advantage. Nevertheless, other factors such as fairness could also influence the effectiveness of a loyalty programme. The studies of Bolton and Lemon (1999) and Bolton, Kannan and Bramlett (2000) indicate that perceived fairness is important in determining the length and depth of a relationship. Firms that fail to project an image of fairness cannot develop the level of customer confidence needed to establish loyalty (i.e. repatronage) (Seiders and Berry, 1998). A few researchers (see Magi, 2003; Shoemaker and Lewis, 1999) noted that most consumers could not see the added benefits of using a loyalty programme. The majority of cardholders felt that there was simply not enough in the offer to make them want to engage in the transaction, let alone a relationship. 
The most recent study is by Noble and Phillips (2004), who argue that the focus on perceived effort and loss seems to illustrate that consumers who were reluctant to form relationships with retailers saw these relationship marketing programmes as one-sided, that is, involving effort or loss on their part. They further point out that these findings are consistent with equity theory, which postulates that when inputs are perceived as being greater than outputs, the relationship is viewed as inequitable and potentially abandoned. Even though several pieces of empirical evidence suggest that equity is one of the significant factors in satisfaction with social interactions (i.e. buyer-seller) that subsequently assists in building long-term relationships and other favourable behavioural consequences (Olsen and Johnson, 2003; Teo and Lim, 2001), to the best of the authors’ knowledge no prior study has examined the impact of perceived equity on relationship quality, particularly in the context of retail loyalty programmes.
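The equity-theory comparison described above can be sketched in a few lines of code. This is a minimal illustration only: the party names, numeric values and tolerance below are hypothetical and do not come from any of the studies cited.

```python
def equity_ratio(outcomes: float, inputs: float) -> float:
    """Perceived outcome-to-input ratio for one party in the exchange."""
    return outcomes / inputs

def is_equitable(consumer_ratio: float, retailer_ratio: float,
                 tolerance: float = 0.1) -> bool:
    """Equity theory: the relationship is perceived as equitable when the
    two parties' outcome/input ratios are approximately equal."""
    return abs(consumer_ratio - retailer_ratio) <= tolerance

# A programme member who perceives inputs worth 10 (effort, data shared,
# purchases tracked) for rewards worth 4, against a retailer perceived to
# gain 8 for inputs of 5, judges the relationship inequitable.
consumer = equity_ratio(outcomes=4, inputs=10)   # 0.4
retailer = equity_ratio(outcomes=8, inputs=5)    # 1.6
print(is_equitable(consumer, retailer))          # prints False
```

In this framing, inputs perceived as exceeding outputs push the consumer's ratio below the retailer's, which is the condition Noble and Phillips associate with abandoning the relationship.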
Profit-Making Ability Measurement in International Tourist Hotel
Dr. Ling-Feng Hsieh, Institute of Technology Management, Chung Hua University, Taiwan, R.O.C.
Shih-Ming Hsu, Institute of Technology Management, Chung Hua University, Taiwan, R.O.C.
This paper measures the operational performance of the international tourist hotel industry in terms of its profit-making ability. Principles and standards for measuring the profit-making ability of the international tourist hotel industry are proposed so as to establish a measurement model of that ability. To empirically improve the feasibility and accuracy of the measurement model, this study takes the international tourist hotels in the Taipei area as its targets, applying the model to assess the profit-making ability of the 24 international tourist hotels in Taipei. Furthermore, we discuss the relationship between profit-making ability and market share. The results obtained by this study are offered to international tourist hotels as a reference for operational strategy. The results show that market share and profit-making ability in the hotel industry are significantly and positively related, and thus serve as an important reference for the international tourist hotel industry in its future operational strategies. In recent years, the tourism industry has been regarded by many countries in the world as a “smokeless” industry that can facilitate fast economic development. The World Travel & Tourism Council (WTTC) predicts that the economic effects brought by developing the tourism industry will increase markedly over the next 10 years. Therefore, the tourism industry will inevitably play an important role in the future development of the global economy. As the global tourism industry develops rapidly, the number of international tourist hotels continues to increase, and competition in the international tourist hotel industry will become fiercer. 
Measuring the operational performance of international tourist hotels would help the industry set strategies for facing this highly competitive environment. A review of the literature on the operational performance of hotels reveals that most scholars have measured hotel performance in terms of efficiency: the operational performance they emphasized was the maximum output obtained with minimum investment. However, they neglected profit-making ability as a measurement factor for explaining the operational performance of tourist hotels. The profit-making ability of international tourist hotels directly influences their operational performance, and it carries important implications for the hotels’ investment plans and operational strategies as well as for district tourism development. Therefore, this paper evaluates the operational performance of international tourist hotels in terms of profit-making ability. Additionally, we set up principles and standards for measuring the profit-making ability of international tourist hotels based on the characteristics of the services they offer, and then establish a measurement model of that ability. To empirically improve the feasibility and accuracy of the model, this study takes the international tourist hotels in the Taipei area as its targets, applying the evaluation model to measure the profit-making ability of the 24 international tourist hotels in Taipei. Furthermore, we discuss the relationship between profit-making ability and market share. The results obtained by this study are offered to international tourist hotels as a reference for operational strategy. 
A review of related studies on measuring hotel performance shows that many scholars have taken advantage of the DEA model to evaluate the operational performance of hotels. Among this literature, Tsaur (2001) used a DEA model to measure the operational performance of the 53 international tourist hotels in Taiwan based on data collected from 1996 to 1998. Hwang and Chang (2003) used a DEA model to evaluate the management performance of the 45 international tourist hotels in Taiwan in 1998; they also applied the Malmquist productivity index to measure the change in efficiency from 1994 to 1998, divided the hotels into six groups based on their management efficiency and efficiency change, and stipulated operational strategies for each group. Wang et al. (2006) used a four-stage DEA model, treating market pattern, management type and number of guest rooms as the external operating environment influencing hotel efficiency, in order to assess the pure management efficiency of the 54 international tourist hotels in Taiwan. Yang and Lu (2006) examined the 56 international tourist hotels in Taiwan in 2002 and applied the DEA method to evaluate their management performance. They also measured “input congestion” in inefficient international tourist hotels and found that about 40% of the international tourist hotels did not make full use of the floor area of their staff and dining departments; moreover, efficient international hotel chains were more likely to become benchmarks. In the past, scholars mainly focused on measuring the operational performance of hotels, and few tried to explore the profit-making ability possessed by hotels. Pan (2005) discussed the influence of market structure on the income of hotels, but did not measure the profit-making ability of hotels. 
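The efficiency logic behind the DEA studies cited above can be illustrated in a degenerate but instructive special case: with a single input and a single output, the DEA (CCR) efficiency score reduces to each unit's output/input ratio divided by the best ratio in the sample. The hotel names and figures below are invented for illustration; the cited studies used multiple inputs and outputs solved via linear programming.

```python
# Hypothetical single-input (rooms), single-output (revenue) data.
hotels = {
    "Hotel A": {"rooms": 200, "revenue": 9.0},   # revenue in NT$ millions
    "Hotel B": {"rooms": 350, "revenue": 12.0},
    "Hotel C": {"rooms": 150, "revenue": 7.5},
}

# Output/input ratio for each hotel, then normalize by the best ratio:
# the frontier hotel scores 1.0 and all others score relative to it.
ratios = {name: d["revenue"] / d["rooms"] for name, d in hotels.items()}
best = max(ratios.values())
efficiency = {name: r / best for name, r in ratios.items()}

for name, score in sorted(efficiency.items(), key=lambda kv: -kv[1]):
    print(f"{name}: efficiency = {score:.2f}")
```

With multiple inputs and outputs, DEA instead solves one linear program per hotel to find the input/output weights most favourable to that hotel, but the interpretation of the resulting score — distance to the efficient frontier — is the same.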
The measurement of profit-making ability is crucial to the operation and management of international tourist hotels; therefore, this study aims to establish an evaluation model of the profit-making ability of international tourist hotels.
An Application of Hedging Fuel Price Risk in the Canadian Department of National Defence
Dr. Naceur Essaddam, Royal Military College of Canada, Kingston, Ontario, Canada
Derek Miller, Royal Military College of Canada, Kingston, Ontario, Canada
The objective of this paper is to study an application of private-sector commodity hedging techniques in the Canadian Department of National Defence. The existing literature on financial risk management focuses almost exclusively on private-sector motivations and rationale, with little attention to hedging in the public sector. The paper develops a rationale for the potential usefulness of hedging in the public sector based on reducing the volatility of cash flows and thereby improving budgeting and forecasting capabilities. This achieves a better allocation of resources, as the size of the fuel budget surplus or deficit at the end of the year is minimized. The allocation is more efficient because it decreases the likelihood of pursuing low-priority projects in the case of a surplus or taking away from high-priority projects in the case of a fuel budget deficit. The methodology uses futures and call options contracts traded on NYMEX to hedge fifty percent of aviation fuel purchases at one Air Force base, 8 Wing Trenton, over a two-year period. The results show that such a strategy can reduce the standard deviation of the monthly purchase price, indicating the potential benefits of hedging within the Air Force. This paper examines the effects of applying financial risk management strategies in a public-sector environment, more specifically the Canadian Department of National Defence (DND). DND is a government institution mandated to defend Canadian sovereignty, interests, and values and to enhance international peace and security. The Canadian Air Force is one of three elements, along with the Army and the Navy, fulfilling the various tasks associated with this mandate. DND defines its main elements of risk in the publication Integrated Strategic Risk Management in Defence (Treasury Board Secretariat, 2001).
These elements are at the strategic level and are closely aligned with the environment, industry, and firm-specific categories of uncertainty identified for private firms (Miller, 2002). Even though the strategies for mitigating these risks are non-financial in nature and reflect the traditional risk assessment techniques embedded in military planning processes, it is apparent from government and DND publications that the department is making an effort to apply the “best practices” of modern risk management within the military environment (Department of National Defence, 2003; Treasury Board Secretariat of Canada, 2001). The DND plan outlines the business planning and decision-making processes employed within the department as the key methods of identifying and managing risk. To date, the plan does not include the development of a financial risk management strategy that employs derivatives to minimize exposure to fluctuations in volatile commodity prices. The Air Force faces significant exposure to fluctuations in the amount it spends on aviation fuel. These fluctuations are based mainly on changes in the price paid per litre for the commodity. To obtain some measure of this exposure, we examined the year-over-year percentage change in each Department general ledger account for the period 1986 to 2004 and calculated the standard deviation of these changes for each account. These results provided an indication of the volatility of the yearly amount expended within each general ledger account. The standard deviation for aviation fuel was 17.1% for the period (Department of National Defence, 2004), indicating erratic percentage changes from year to year in the total amount spent on aviation fuel; this is depicted in Figure 1 below. This is significant when compared to the average standard deviation across all accounts of 2.8%.
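The volatility measure described above can be reproduced on invented yearly totals (the actual 1986-2004 DND ledger figures are not reproduced here; the dollar amounts below are hypothetical):

```python
import numpy as np

# Hypothetical yearly totals for one general ledger account, in $ millions
# (stand-ins for the confidential DND figures).
yearly_spend = np.array([52.0, 61.0, 55.0, 70.0, 64.0, 80.0])

# Year-over-year percentage change between each pair of consecutive years.
pct_change = np.diff(yearly_spend) / yearly_spend[:-1] * 100

# Account volatility: standard deviation of those yearly percentage changes.
volatility = pct_change.std()
print(f"volatility: {volatility:.1f}%")
```

Running the same calculation account by account and comparing each result to the cross-account average is what flags aviation fuel (17.1% against an average of 2.8%) as an outlier worth hedging.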
The variance in the amount spent on aviation fuel is based mostly on the changing price of aviation fuel from year to year and provides some indication of DND’s prevailing exposure to price risk in this area. At a local level, the total fuel expense at 8 Wing Trenton, one of Canada’s Air Force bases, for fiscal year 2002/2003 was $10,017,635, and the Wing paid an average of $0.4091 per litre. The monthly costs are displayed in Figure 2 below. The standard deviation of monthly prices for this period was $0.0465 per litre, a significant variance from the average price paid. This local volatility in the price paid per litre for aviation fuel corroborates the volatility in the average percentage change in fuel expenditures at the national level, indicating an exposure to the risk of changing aviation fuel prices. Aviation fuel expenses comprise a substantial percentage of total operating costs and leave the Air Force exposed to uncertainty in the price of fuel. As an important expense category for the Air Force, fuel costs are directly proportional to the overall pace of operations of the Canadian Forces. The operational tempo of the Canadian Forces has increased over the past decade. In addition to domestic responsibilities such as search and rescue, exercises, and training, the Air Force is regularly tasked to provide logistical support to operations in locations around the world: Afghanistan, East Timor, the Balkans, and Africa. Fuel purchases increase with the addition of such operations, further increasing the exposure to changing fuel prices and strengthening the argument for a financial risk management program within the Department.
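The mechanics of the fifty-percent futures hedge can be illustrated with a toy simulation. The random-walk spot path and the $0.41/litre futures price below are invented for illustration (roughly centred on the $0.4091 average reported for 8 Wing Trenton), and basis risk on the NYMEX contracts is ignored:

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented random-walk path for the monthly spot price per litre.
spot = 0.41 + np.cumsum(rng.normal(0.0, 0.02, size=24))

# Hedge half of each month's purchases at a fixed futures price; the other
# half is bought at the prevailing spot price.
futures_price = 0.41
effective = 0.5 * spot + 0.5 * futures_price

print(f"unhedged monthly price std: {spot.std():.4f}")
print(f"hedged monthly price std:   {effective.std():.4f}")
```

With a fixed futures leg, the standard deviation of the effective price is exactly half that of the spot price; a call-option hedge would instead cap the effective price near the strike plus premium while retaining the benefit of price declines, at the cost of the premium.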