The Journal of American Academy of Business, Cambridge
Vol. 14 * Num. 1 * March 2008
The Library of Congress, Washington, DC * ISSN: 1540 – 7780
Most Trusted. Most Cited. Most Read.
All submissions are subject to a double blind peer review process.
The primary goal of the journal is to provide business-related academicians and professionals from various fields, in a global realm, the opportunity to publish their work in one source. The Journal of American Academy of Business, Cambridge brings together academicians and professionals from all business-related fields to interact with members inside and outside their own disciplines. The journal provides opportunities for researchers to publish their papers as well as to view the work of others. The Journal of American Academy of Business, Cambridge is a refereed academic journal that publishes scientific research findings in its field under ISSN 1540-7780, issued by the Library of Congress, Washington, DC. The journal meets the quality and integrity requirements of applicable accreditation agencies (AACSB, regional) and journal evaluation organizations to ensure that our publications provide authors with publication venues recognized by their institutions for academic advancement and academically qualified status. No manuscript will be accepted without the required format. All manuscripts should be professionally proofread before submission. You can use www.editavenue.com for professional proofreading/editing.
The Journal of American Academy of Business, Cambridge is published twice a year, in March and September. The e-mail: email@example.com; Journal: JAABC. Requests for subscriptions, back issues, and changes of address, as well as advertising inquiries, can be made via the e-mail address above. Manuscripts and other materials of an editorial nature should be directed to the Journal's e-mail address above. Address advertising inquiries to the Advertising Manager.
Copyright 2000-2017. All Rights Reserved
Consumer Loyalty – A Synthesis, Conceptual Framework, and Research Propositions
Dr. Lance Gentry, Missouri University of Science and Technology, Rolla, MO
Dr. Morris Kalliny, Missouri University of Science and Technology, Rolla, MO
Numerous conceptual and empirical studies utilize the loyalty construct as a core part of their theoretical work. These studies purport to explain if and why loyal consumers are more profitable for firms, mental models of satisfaction and loyalty, and guidelines for marketing strategies. However, an objective view of the literature shows little progress in approximately eighty years of research. In this article, the authors propose a conceptual definition of consumer loyalty and synthesize and discuss the probable factors of loyalty within a framework that is useful to scholars and practitioners. In 1923, Copeland wrote an article describing the theoretical relationship between brands and consumers’ buying habits. Albeit with different terminology, he described a continuum of consumer loyalty that incorporated both behavior and attitude. Throughout the next eight decades, researchers argued for measurements of loyalty that were strictly behaviorally based (e.g., Burford, Enis, and Paul, 1971; Cunningham, 1956; Passingham, 1998; Olsen, 2002; Tucker, 1964) or strictly attitudinally based (e.g., Bennett and Kassarjian, 1972; Guest, 1942; Jain, Pinson, and Malhotra, 1987; Perry, 1969). Many others have echoed Copeland’s original thought and argued for a two-dimensional construct with both behavioral and attitudinal components (e.g., Backman, 1991; Chaudhuri and Holbrook, 2001; Day, 1969; Gahwiler and Havitz, 1998; Newman and Werbel, 1973; Oliver, 1999; Pritchard, Howard, and Havitz, 1992). Tucker (1964) strongly advocated using a purely behavioral measure of loyalty, not because he dismissed the importance of attitudes, but because he predicted scholarly “chaos” would ensue if attitudes were included in the operationalizations of loyalty. In Jacoby and Chestnut's (1978) extensive review of the brand loyalty literature, they found that most, if not all, of it suffered from extensive problems and that the results would probably not stand up to rigorous empirical analysis. 
"Despite more than 300 published studies, BL [Brand Loyalty] research is kept afloat more because of promise than results." They bemoaned the lack of an established conceptual base for operationalizations, which resulted in inconsistent and ambiguous measurements and definitions along with problems with arbitrary cutoff criteria. In addition, Jacoby and Chestnut criticized researchers for their simplistic perspectives on loyalty (e.g., failing to consider multibrand loyalty, ignoring the larger perspective of loyalty and disloyalty, concentrating on static behavioral outcomes vs. dynamic causative factors) as well as noting many basic methodological errors (e.g., using inappropriate or undefined units of measurement, or confounding relationships with other measures of loyalty). Pritchard, Havitz, and Howard (1999) referenced acknowledgements dating back to 1971 that the literature has focused on measuring loyalty, but fails to answer the question “Why are consumers loyal?” and they concluded that this predicament is still true. Our review of the current research indicates that the situation has not changed (see Choi, Kim, Kim and Kim 2006; Chandrashekaran, Rotte, Tax and Grewal 2007; Plamatier, Scheer and Steenkamp 2007). Researchers have determined that little truly known about loyalty and called for investigation into the fundamental meaning of loyalty (Oliver, 1999; Chandrashekaran, et al 2007), determining the long-term consequences of loyalty (Iwaskaki and Havitz, 1998), and investigating additional loyalty antecedents (Pritchard et al, 1999). The threefold objectives of this article are to: 1. Propose a common lexicon of loyalty by synthesizing the existing literature. 2. Discuss the probable factors of loyalty within a framework that is useful to scholars and practitioners. 
The majority of the existing literature on loyalty may be loosely classified as either consumer research or leisure research, with these terms incorporating all the contributing specialties (e.g., marketing, psychology, sociology, etc.). Even those researching voting behavior generally support the notion that people use the same general consumption processes for buying and voting (Crosby and Taylor, 1983). Both leisure and consumer researchers utilize loyalty constructs, but a direct comparison of the respective research is hampered by the different focus of the two groups. While consumer researchers are generally more interested in product and service commitment, leisure researchers tend to focus on activity adherence (Gahwiler and Havitz, 1998). Therefore, a conceptual definition of loyalty should apply equally well to both schools of thought, thereby increasing the reliability of – or, more realistically, enabling – comparisons of research by loyalty scholars from complementary disciplines. Oliver (1999) stated that a good definition of loyalty must tap into the psychological meaning of loyalty, not merely record what the consumer does. This echoes the original vision of Copeland (1923), yet the body of loyalty research cyclically undergoes phases in which behavioral – or, less frequently, attitudinal – measures are thought to be self-sufficient. Then someone points out that the emperor has no clothes and the cycle starts anew. It is with the hope of breaking this cycle that a conceptual definition is proposed. Since the definition contains behavioral and attitudinal components, operational derivatives should also incorporate both types of measures. Loyalty is a dynamic, favorable bias for a construct that is always evoked when a decision-maker faces a relevant selection; a preferred construct will usually be selected over non-preferred alternatives in ceteris paribus situations.
The 3D Transformational Leadership Model
Dr. Eli Konorti, P. Eng., University of British Columbia, Canada
One of the most interesting topics of all time is leadership. Bass (1990) stated, “The study of history has been the study of leaders–what they did and why they did it” (p. 3). The first studies of leadership centered on theory. Researchers and scholars sought to identify leaders’ styles and compare them to the demands or conditions of society. In later years, as leadership became a topic of empirical study, researchers, academics, and scholars alike attempted to understand and define leadership. Definitions such as process, power, initiation of structure, influence, and others began to emerge. Bass (1990) postulated that scholars and researchers have debated and deliberated the definition of leadership for many years. Bass wrote that there are as many definitions of leadership as there are people attempting to define it. However, as one looks at the evolution of the leadership field, a trend emerges. The earlier definitions identified leadership as a movement, one that consisted of individual traits and physical characteristics (Bass, 1990). In later years, scholars used the term inducing compliance to describe the role of the leader. More recently, the view of leadership has become one of influencing relationships, initiating structure, and achieving goals (Friedman & Langbert, 2000). Starting in the early 1930s, theorists used pictorial models to explain their theories. The first few theories of leadership centered on types of leadership such as autocratic, democratic, and laissez-faire (Wren, 1990). Theorists later expanded the field of leadership to include human attributes such as ability and intellect. The leadership continuum started with the study of traits and proceeded to behavioral, situational, and eventually contingency theories. Leadership models then shifted their focus back to leader traits and personality. For example, Wren (1990) wrote, “Charisma returned to leadership theory” (p. 386). 
These leadership models ranged from simple to very complex. Yet a close examination of these models and the leadership domain as a whole suggests converging definitions of leadership that subsequently led to a paradigm referred to as transformational leadership. Notwithstanding the transformational models that currently exist, there seems to be an inherent void in these models concerning a few traits and characteristics of transformational leaders that could be addressed with a new and innovative model. The purpose of this paper is to draw on peer-reviewed literature and emerging trends in transformational leadership with the intention of developing a new leadership model that looks at three leadership traits: courage, wisdom, and vision. The paper will discuss and attempt to reconcile the three traits and shed light on their relevance vis-à-vis transformational leadership. These three traits are incorporated into a three-dimensional model, resulting in a new transformational leadership model coined the 3D Transformational Leadership Model. The paper is organized as follows. First, I provide a literature review of historical and current thinking about transformational leadership. Second, I discuss the method and process I used to develop the new model. The following section discusses the conceptual framework and provides a definition of the three transformational leadership traits used in the model. Then, I report the data I collected and discuss the first phase of the development of the model. The next section presents and discusses the theoretical model. The paper concludes with the implications of the new model and suggestions for future studies. To understand leadership, one first needs to understand how the concept has been defined. As the introduction stated, leadership has many definitions. Weiskittel (1999) wrote, “Leadership is a complex and dynamic process” (¶ 1). 
She further stated, “In many ways [leadership] still remains somewhat of an enigma” (The Essence of Leadership section, ¶ 1). Changes in the competitive arena, as well as globalization, have resulted in the need for a different type of leader. Followers often think of leaders as authoritarian, democratic, direct, or participative, but Black and Porter (as cited in Friedman & Langbert, 2000) suggested that, at a minimum, a leader should encourage followers to set aside self-interest. Furthermore, a good leader should maintain high motivational standards and be able to empower followers. Changes in follower behavior, such as the need for teamwork and collaboration, require this new leader to ensure followers’ job satisfaction, personal growth, and maturity. This new leader must also create an environment that lends itself to and fosters the “well being of others, the organization, and society” (Bass, 1999, p. 11). To become a transformational leader, a person needs to develop and possess skills that go beyond basic management and administrative capabilities such as directing, planning, and delegating.
Supply Chain Expansion Using AHP, ILP and Scenario-Planning
Dr. Ruhul Sarker, University of New South Wales ADFA, Canberra, Australia
Dr. Sajjad Zahir, University of Lethbridge, Lethbridge, Alberta, Canada
A strategic supply chain decision problem is solved and the results are illustrated with an example. First, a mathematical model is formulated for the selection of facility locations by minimizing various costs. Uncertainties in future demand and other parameters are dealt with using a scenario-based planning method. Finally, an AHP method utilizes several non-cost criteria to produce an integrated decision. The results of this research can be considered groundwork for the design of a computer-based decision support system (DSS) that will be able to meet real-world needs effectively. A supply chain is “an integrated process wherein a number of various business entities (i.e., suppliers, manufacturers, distributors, and retailers) work together in an effort to acquire raw materials/ingredients/components, convert these raw materials/ingredients/components into specified final products, and deliver these final products to retailers” (Beamon, 1998). At the end of the chain, the retailers sell the products directly to the customers. A supply chain is usually characterized by a forward flow of materials and a backward flow of information between the business entities. Managing a supply chain requires operational-level decisions. A supply chain operates on its existing business entities, with their facility locations and the network connecting those locations. Designing a supply chain is an important strategic decision in all organizations. Its importance has increased further as more organizations have realized the possibilities of gaining additional value for their customers by restructuring the supply chain. In fact, the growing awareness of the positive impact of supply chain management on organizations’ competitiveness, profitability, and strategic advantage has made the supply chain a truly strategic issue, and thus it has received increased attention everywhere (O’Laughlin and Copacino, 1994; Clinton and Calantone, 1997). 
Strategic decisions require information projected many years into the future. Such information is not usually available with certainty at the time when decisions are made. Decisions about new facility locations are crucial for supply chains operating under an uncertain environment. In this paper, we discuss a facility location problem for a supply chain serving different regional markets with possibilities for future demand, market, and cost changes that cannot be predicted with an acceptable level of accuracy at the time of planning. Where to locate and operate new distribution centres is an important decision to be made. In making such a decision, the trend is to develop a deterministic mathematical model and then solve the model for an optimal solution. The solution is optimal for one particular forecast of future market demand, and it may turn out to be far from optimal and very costly if the projected demand does not materialize. Usually, sensitivity analysis is performed to show the effect of changes in demand and other parameters on the optimal solution. However, sensitivity analysis may not provide strong evidence for making a decision over the long term. Thus, in this paper, we introduce a methodology based on scenario planning and multi-criteria decision making to address the uncertainties of the future model parameters. The methodology is challenging, and new in the literature, as it requires dealing with two different decision spaces, namely the location space and the scenario space. In reality, the scenarios are non-comparable options. However, the methodology helps to incorporate into the decision process the non-cost decision parameters and criteria that are very common in any practical decision environment. As outlined in Figure 1, the facility location problem will be solved using a scenario-based approach. 
To tackle future market-demand uncertainties, planners will first generate a list of possible scenarios qualified by a set of parameters such as expansion models, demand and cost forecasts, and other external factors. Since all scenarios cannot be exhaustively investigated, domain experts will select a set of the most likely scenarios for further detailed analysis. Each of the selected scenarios will be formulated as a mathematical model that will be solved using a suitable optimization technique. Specifically, we use integer linear programming (ILP) techniques to minimize the total cost of expansion. A sensitivity analysis is performed to see the effect of different parameters. In most cases, it is unlikely that management would be able to make a decision with confidence when such deterministic outcomes are obtained by considering cost alone. Therefore, a number of other criteria affecting the location decision will be considered and a multi-criteria decision model will be developed to rank the scenarios. For this we specifically use the Analytic Hierarchy Process (AHP) methodology (Saaty, 1980; Saaty and Vargas, 1994; Saaty, 1995). The AHP is a popular multi-criteria decision-making (MCDM) model that allows both subjective and objective criteria in the same decision hierarchy. Finally, all this information will be combined before the final decision is prescribed.
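As a concrete illustration of the AHP step described above, the sketch below derives priority weights for three hypothetical expansion scenarios from a single pairwise-comparison matrix and checks Saaty's consistency ratio. The scenarios, the criterion, and the judgments are invented for illustration; they are not taken from the paper.

```python
# Minimal AHP sketch (hypothetical data, not the paper's case study).

def ahp_priorities(matrix, iterations=100):
    """Approximate the principal eigenvector of a positive
    pairwise-comparison matrix by power iteration; returns
    priority weights normalized to sum to 1."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iterations):
        w_new = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(w_new)
        w = [v / total for v in w_new]
    return w

def consistency_ratio(matrix, w):
    """Saaty's CR = CI / RI, with CI = (lambda_max - n) / (n - 1)."""
    n = len(matrix)
    col_sums = [sum(matrix[i][j] for i in range(n)) for j in range(n)]
    lambda_max = sum(col_sums[j] * w[j] for j in range(n))
    ci = (lambda_max - n) / (n - 1)
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n]  # random index table
    return ci / ri if ri else 0.0

# Pairwise judgments of three scenarios on one criterion
# (say, market access), on Saaty's 1-9 scale.
A = [[1.0,       3.0,       5.0],
     [1.0 / 3.0, 1.0,       3.0],
     [1.0 / 5.0, 1.0 / 3.0, 1.0]]

weights = ahp_priorities(A)
cr = consistency_ratio(A, weights)
```

In a full application, one such priority vector is computed per criterion and the vectors are aggregated up the decision hierarchy; a CR below 0.1 is conventionally taken as acceptably consistent.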
Determinants of Consumer Trust of Virtual Word-of-Mouth: An Observation Study from a Retail Website
Dr. Shahana Sen, Fairleigh Dickinson University, Teaneck, NJ
Research in communication has found that audiences establish a speaker’s credibility by his or her reputation, experiences, and knowledge, as well as how much he or she can be trusted in a given situation. Extending this research, consumer psychologists have found that the persuasive power of person-to-person word-of-mouth communication is higher than that of marketer-generated communication, such as advertising and promotion. In this paper, we study consumers’ trust in, and consequently their perceptions of the helpfulness of, virtual word-of-mouth in the form of consumer reviews on the Web, which consumers have been increasingly relying upon, and we test our propositions using observational data from an e-retail Website. Enabled by new information technologies, today’s consumers have real-time access to information, insight, and analysis, giving them an unprecedented arsenal to help make purchase decisions (Deloitte, 2007). According to the Deloitte study, to build their knowledge arsenals, consumers are turning to virtual word-of-mouth (or e-WOM) in the form of online consumer reviews in large numbers, and these reviews are having a considerable impact on their purchase decisions. According to the Deloitte Consumer Products Group survey, almost two-thirds (62 percent) of consumers read consumer-written product reviews on the Internet. Of these, more than eight in 10 (82 percent) say their purchase decisions have been directly influenced by the reviews, either influencing them to buy a different product than the one they had originally been thinking about purchasing or confirming the original purchase intention. The impact of word-of-mouth (WOM) on consumer decision-making has long been established by consumer psychologists (Brown and Reingen 1987; Feldman and Spencer 1965; Herr, Kardes and Kim 1991; among others). 
WOM information has been described as the most powerful form of marketing communication, and studies have shown that users find WOM more believable than commercially generated information (Hutton and Mulhern 2002). However, while e-WOM has some characteristics in common with traditional WOM, it is distinctive in that it shares other characteristics with marketer-generated communications, such as advertising, and additionally has unique ones of its own. For example, a characteristic shared with traditional WOM is that e-WOM is also communicated by consumers and not by the marketers of the product, making it more believable to the reader. As with traditional WOM, the audience establishes the speaker’s credibility by inferring his or her reputation, experiences, and knowledge, as well as how much he or she can be trusted in a given situation. In the case of e-WOM, however, the reader is not familiar with the credentials of the reviewer and has to infer them from the cues present within the review and associated with its environment (e.g., the credibility of the Website may be one important surrogate). Moreover, quite often the review is featured on the marketer’s own Website, as in the case of Amazon.com or BarnesandNoble.com, rather than on an independent third-party site such as epinions.com, consumerREVIEW.com, or dooyoo.com. The positive source credibility effect, which aids traditional WOM coming from an impartial source, can be diminished in the case of e-WOM when it is featured on a site that sells the products reviewed on it. In addition, studies show that traditional WOM communications have a strong impact on product judgments because information received in a face-to-face manner is more accessible than information presented in a less vivid manner (Herr, Kardes and Kim 1991). Coming from a virtual source, e-WOM is less vivid, and its impact will not be the same as that of traditional WOM (Sen 2007; Sen and Lerman 2007). 
Considering the above, one may characterize the e-WOM domain as a hybrid of traditional WOM and commercial communications, and this gives rise to interesting questions about consumer behavior. Consequently, there is a growing interest in this area by consumer researchers (Sen 2007; Sen and Lerman 2007; Godes and Mayzlin 2004; Bickart and Schindler 2001; Chatterjee 2000; among others). Studying the phenomenon at the aggregate level, Godes and Mayzlin (2004) note that measuring the e-WOM generated by a firm’s product is important for understanding a product’s past sales level and for predicting its future sales. While at the individual consumer level, Chatterjee (2000) examined the effect of negative online reviews on retailer evaluation and patronage intention, given that the consumer has already made a product/brand decision, and has found that this effect is determined largely by familiarity with the retailer and differs based on whether or not the retailer is a pure-Internet or clicks-and-mortar firm. Bickart and Schindler (2001) found that when consumers were instructed to gather online information by accessing either online discussions (i.e., Internet forums or bulletin boards) or marketer-generated online information (i.e., corporate Web pages), the consumers who gathered information from online discussions reported greater interest in the product topic than did those consumers who acquired information from the marketer-generated sources. Recently, Sen and Lerman (2007) found that the perceived “helpfulness” of consumer reviews varied by Product Type (i.e., utilitarian vs. hedonic) and the Product Rating in the e-WOM review (i.e., positive or negative towards the product); and that when it came to trusting and finding the consumer review useful, a reader was more likely to perceive a negative review for a utilitarian product more useful than a positive one. This was consistent with the negativity effect (viz. 
the greater weighting of negative information compared with equally extreme positive information in the formation of overall evaluations) that consumer researchers have documented in the past (Kanouse and Hanson 1972; Weinberger and Dillon 1980). Interestingly, this was not the case for hedonic products, where a positive review was trusted and found more helpful than a negative one.
Intertemporal Linkages Between Hong Kong and Shanghai Stock Markets Surrounding the Handover of Hong Kong
Dr. Joseph French, University of Northern Colorado, CO
The linkages between the stock markets of Hong Kong and Shanghai are examined in this paper for the periods before, during, and after the 1997 handover of Hong Kong. The return relationships of the two markets are shown to have changed after the handover. Variance decomposition and Granger causality indicate an increasingly important role of the Shanghai stock market relative to that of the Hong Kong stock market. The two markets are shown to be cointegrated, and the results indicate that this cointegration has increased since the handover. The existence of linkages across different national stock markets has important implications for investors who are seeking diversification opportunities internationally. When linkages suggest co-movement between different markets, any one market would be representative of the behavior of the group of markets. This would effectively reduce the scope for portfolio diversification. This implication has increased interest in the topic of market linkages and led many researchers to investigate whether different markets are interrelated. This paper looks at the intertemporal linkages between the Shanghai and Hong Kong stock markets for the periods before, during, and after the handover of Hong Kong. The linkages across markets that will be examined include contemporaneous co-movements, causal relationships, responses to cross-market shocks, and long-run interdependence. The handover of Hong Kong to China was a historic event that has real economic implications for the countries of the Asia-Pacific Rim. That is why this event is not merely symbolic, nor is it solely a question of political ownership. The stated objective of the Chinese government is to develop Shanghai into a leading financial center by the year 2010 (Asia Pacific report). In the three years after the handover of Hong Kong to China, Hong Kong experienced continuing deflation and economic slowdown. 
Hong Kong’s sluggish economy rebounded in 2002 relative to a year earlier. The U.S.-Iraq War and the SARS epidemic in early 2003, however, apparently affected Hong Kong’s economy; for this reason, the period after 2002 was not considered in this paper. In his 1997 policy address, Chief Executive Tung Chee-Hwa emphasized that Hong Kong would increase economic cooperation with the Mainland and facilitate economic integration. Toward this end, Hong Kong has actively worked on a Closer Economic Partnership Arrangement (CEPA) with the Mainland. Much of the research using the methodology this paper applies has used data surrounding the Asian Financial Crisis (Maroney, Naka and Wansi 2004). Moon (2001) investigated the impact of the 1997 Asian financial crisis on stock market integration in East Asia and found that, in both the long and the short run, East Asian stock markets have become increasingly integrated with the US market since the Asian Financial Crisis. This, Moon said, confirmed the view that the Asian crisis brought about US dominance over Asian stock markets and should increase the linkages between the major national stock market indices. Sheng and Tu (2002) examined the changing patterns of linkages among the stock markets of 12 Asian countries for the periods before and during the Asian financial crisis. The daily closing prices for the period from July 1996 to June 1998 were used to provide evidence of cointegration among the stock indices during the crisis but not before. Another study, by Yang and Lim (2002), used daily market returns from January 1990 to October 2000 and tested the extent of contagion effects among nine East Asian equity markets during the pre-crisis and post-crisis periods. They found no evidence of long-term co-movements among the East Asian markets, but only short-term correlations, in both sub-periods. 
They did, however, find a substantial increase in the degree of interdependence, which they said reflected the presence of contagion effects in the region. Jang and Sul (2001) conducted a study examining the changes in co-movement among the stock markets of the countries directly affected by the Asian financial crisis in 1997 and showed that there was a significant increase in cointegration during the crisis period and thereafter. They also found no Granger-causal relationship before the crisis, but a marked increase during and after the crisis. Cheung, Cheung and Ng (2002) applied cointegration techniques to daily equity returns in order to examine the interactions between the US market and the four East Asian markets of Hong Kong, Taiwan, Korea, and Singapore before, during, and after the Asian financial crisis, and confirmed the dominant role of the US market in all three sub-periods. Interestingly, they also found that while the US market leads these East Asian markets before, during, and after the crisis, it Granger-caused them only during the crisis. Applying cointegration techniques to daily data from 1977 to 1999, Fernández-Serrano and Sosvilla-Rivero (2000) found no evidence of long-run relationships among Asia’s top five stock markets of Japan, South Korea, Taiwan, Singapore, and Hong Kong.
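The Granger-causality logic invoked throughout these studies can be sketched numerically. The fragment below is a simplified illustration on simulated data, not the author's actual estimation procedure: it fits restricted (own lags only) and unrestricted (own lags plus the other series' lags) regressions by least squares and forms the standard F-statistic; the lag length, sample size, and data-generating process are assumptions made for the example.

```python
import numpy as np

def granger_f(y, x, lags=2):
    """F-statistic testing whether lags of x help predict y beyond
    y's own lags (the classic bivariate Granger-causality setup)."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    n = len(y)
    Y = y[lags:]
    # Restricted model: intercept plus lags of y only.
    Xr = np.column_stack(
        [np.ones(n - lags)] + [y[lags - k:n - k] for k in range(1, lags + 1)])
    # Unrestricted model: additionally include lags of x.
    Xu = np.column_stack(
        [Xr] + [x[lags - k:n - k] for k in range(1, lags + 1)])

    def rss(X):
        beta = np.linalg.lstsq(X, Y, rcond=None)[0]
        resid = Y - X @ beta
        return float(resid @ resid)

    rss_r, rss_u = rss(Xr), rss(Xu)
    df_num, df_den = lags, len(Y) - Xu.shape[1]
    return ((rss_r - rss_u) / df_num) / (rss_u / df_den)

# Simulated example in which x drives y with a one-period lag.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

f_xy = granger_f(y, x)  # x -> y: large F expected
f_yx = granger_f(x, y)  # y -> x: small F expected
```

In practice one would compare the statistic against F critical values (or use a packaged implementation such as the one in statsmodels) and, for the cointegration results cited above, pair it with unit-root and Engle-Granger or Johansen tests.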
Dividend Policy Decisions
Dr. Gurdeep Chawla, National University, California
A company’s earnings are either distributed to shareholders or reinvested to finance projects. Dividend policy decisions involve the level or amount of earnings that should be distributed to shareholders. Scholars have developed several theories to help managers formulate dividend policies, but the theories do not provide hard-and-fast rules or clear guidelines. In fact, the theories contradict each other, variously stating that dividend policy is irrelevant to a firm’s value, advocating high dividends in some cases, and recommending low dividends in others. Empirical research has been inconclusive and does not validate any one dividend theory. However, it has been helpful in explaining companies’ dividend policies, as well as investors’ reactions to dividend policies and changes in them. Managerial decisions in the real world are usually guided by scholarly research and theories, and by market and industry practices. This paper describes the dividend theories, dividend policies, and the factors managers consider important in formulating dividend policies. The paper begins with a review of the literature and a discussion of dividend theories. This is followed by a description of the dividend policies of companies in the real world. Finally, managers’ views and the factors they consider important with regard to dividend policies are discussed. Merton Miller and Franco Modigliani (MM), winners of Nobel prizes for their work in finance, have argued (1) that a company’s dividend policy is irrelevant to its value. In other words, companies can pay no dividends, low dividends, or high dividends without impacting their stock prices. MM emphasized that a company’s value depends upon the types of investments it makes, the associated business risk, and the earnings generated. They argued that investors can generate their own dividends by selling their stock. Their arguments are valid under restrictive assumptions, which include: 1. No personal or corporate income taxes. 2. No flotation or transaction costs. 3. 
Investors are indifferent between dividends and capital gains. 4. Companies’ dividend policies and capital budgeting decisions are independent. 5. Availability of symmetric (or the same) information to investors and managers. Of course, the MM proposition has been challenged because of its unrealistic assumptions. Litzenberger and Ramaswamy (2) developed the tax preference theory and pointed out that there are taxes in the real world and that tax rates on capital gains have historically been lower than tax rates on ordinary income (in 2001, the top tax rate on long-term capital gains was 20%, while the top rate on ordinary income was 39.1%). In addition, taxes on capital gains can be postponed until the gains are realized (the securities are sold) in future years, whereas ordinary income is taxed in the current year. Therefore, investors prefer that companies pay low or no dividends and postpone the distribution of earnings. For example, if an investor is going to receive a 15% return on investment and has a choice between receiving 10% dividends and 5% capital gains or 5% dividends and 10% capital gains, he/she would prefer the latter option, assuming the top tax rates apply. In contrast to the MM proposition and to Litzenberger and Ramaswamy’s case for low dividends, Gordon and Lintner (3) have advocated high dividends. They developed the “bird-in-the-hand” theory and argued that dividends paid during the current period are more certain than promises of capital gains and higher returns in the future. They further argued that investors are risk averse and require higher returns for taking higher risk. Therefore, investors prefer dividends, and a company’s cost of capital (or investors’ required rate of return) would be higher if it pays low or no dividends. In other words, a company paying low or no dividends would experience a higher cost of capital, which would result in increased overall costs of doing business, decreased earnings, and lower stock prices. 
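The after-tax arithmetic behind this example can be sketched as follows. This is an illustrative calculation rather than one from the cited studies; it uses the 2001 top rates quoted above and assumes all capital gains are realized and taxed immediately (in practice, deferral would make the low-dividend option even more attractive):

```python
# After-tax return comparison under the tax preference theory.
# Rates are the 2001 top rates quoted in the text; dividends are
# assumed to be taxed as ordinary income.

DIVIDEND_TAX = 0.391   # top tax rate on ordinary income (applied to dividends)
CAP_GAIN_TAX = 0.20    # top tax rate on long-term capital gains

def after_tax_return(dividend_yield, capital_gain):
    """After-tax total return, assuming gains are realized and taxed now."""
    return dividend_yield * (1 - DIVIDEND_TAX) + capital_gain * (1 - CAP_GAIN_TAX)

option_a = after_tax_return(0.10, 0.05)  # 10% dividends, 5% capital gains
option_b = after_tax_return(0.05, 0.10)  # 5% dividends, 10% capital gains

print(f"10% dividends / 5% gains:  {option_a:.4%}")  # 10.0900%
print(f"5% dividends / 10% gains:  {option_b:.4%}")  # 11.0450%
```

Both options deliver the same 15% pre-tax return, but the heavier capital-gains mix keeps roughly one percentage point more after tax, which is the tax preference theory's argument for low payouts.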
The MM proposition, the tax preference theory, and the bird-in-the-hand theory can be summarized as follows. MM have called Gordon and Lintner’s theory the “bird-in-the-hand” fallacy. MM have acknowledged that stock prices increase as a result of a larger-than-expected increase in dividends. They have stated that companies are usually reluctant to cut dividends and, therefore, an increase in dividends indicates managers’ expectations of increased earnings, increased cash flows, and better company performance in the future. MM have further stated that a company sends positive information, or a signal, to the market about its future performance by increasing dividends, and that this leads to an increase in stock prices. This has been called the information content, or signaling, hypothesis.
Reaching the Underserved: How a Summer Business Program Influences First Generation Students to Attend College
Dr. Issam Ghazzawi, University of La Verne, CA
Christine Jagannathan, University of La Verne, CA
This paper represents an outcome assessment of a community outreach program that targeted underserved students from three Southern California Unified School Districts. The outreach program was developed with a mission to overcome issues that usually restrict the college ambitions of the targeted population. Fifty junior high school students (27 women and 23 men) participated in a three-week business camp that introduced them to topics such as management and organization, marketing, finance and accounting, economics, and creating an organization website. All classes were delivered by volunteer professors of the College of Business and Public Management at the University of La Verne in conjunction with participating business and community leaders. Before the program, 78% of participants indicated their willingness to go to college, while in the post-assessment survey 96% indicated their desire to attend college; only 2% indicated they were “not sure” and 2% abstained due to illness. The widening of college enrollment gaps based on race is striking when compared to the narrowing racial gaps in high school and test performance over the same period (Kane, 2001). Additionally, large gaps in college-going exist by family income. According to Ellwood and Kane (2000): Eighty percent (80%) of the students from the top income quartile attended some type of postsecondary institution within 20 months of their high school graduation, as compared with fifty-seven percent (57%) of those from the lowest income quartile. The gaps by family income were particularly large in four-year college entrance, with 55 percent of the highest income youth attending a four-year college at some point and only 29% of the lowest income youth (p. 3-4). The authors also noted that while college enrollment rates for all income groups grew between the high school classes of 1980/82 and 1992, the increases were larger for students who came from middle- and higher-income families. 
Thus, they concluded that college enrollment gaps based on family income have been widening over time (Ellwood & Kane, 2000). To address this problem, many universities are developing innovative programs to reach out to students from low-income backgrounds. One such program, the University of La Verne’s REACH Summer Business Camp, sponsored and supported by the College of Business and Public Management, is now in its second year. It has gained a reputation as being among the best programs at motivating high school students to pursue a college education, as measured by the program’s graduating students’ inclination to attend college (96% in 2007), and by the demand to add more students from the existing, participating districts and from other school districts that want to be part of it in the future. The objective of the program is to put college within the reach of any student, no matter how unattainable a goal it may seem. In essence, REACH provides participants with a taste of various aspects of college life in order to create and sustain their motivation to aim for college after graduating from high school. High school students from three Southern California Unified School Districts were nominated by one or more of their school career counselors, teachers, assistant principals, or principals to participate in the program. These students were interviewed by the program director to make sure they fit the program’s criteria, which included: (1) students having shown an interest in business education but being at risk of not pursuing that interest at the university level; and (2) students having the aptitude and discipline to pursue a university education (indicated by a grade point average of 2.5 or higher, and involvement in some extracurricular activities including service to the community, the school, or the family business), but being discouraged because of (a) financial issues, (b) family commitments, or (c) not having considered attending university. 
In an effort to help students overcome their family and income issues, the REACH Business Program gave fifty junior high school students three weeks to delve into the business world by providing them instruction in the areas of management and organization, marketing, economics, accounting and finance, business ethics, creating a business website, success skills, and entrepreneurship. All classes were delivered by University of La Verne College of Business and Public Management professors who volunteered to work with the students. Additionally, a few business people from various organizations, including Southern California Edison, City National Bank, The Olson Company, and others, volunteered as guest speakers. The program also featured motivational speakers, including the Mayor of Ontario, Mr. Paul Leon, and the Superintendent of the Rialto Unified School District, Ms. Edna Davis-Herring. As a culminating activity, program management staged a business plan competition among the participating students. The fifty students were divided into 10 groups of 5 students each. Their task was to win over a panel of judges comprised of professors and business people, to whom the students presented their finished business plans at the end of the three-week program. The winning group was awarded a $1,000 prize and the runners-up received a $500 prize. Each business plan had to include everything from the cost of the premises (lease) to the required licenses and permits, and the cost of equipping, staffing, marketing, and operating the business for profit.
A Multi-Criteria Decision Support System for Selecting Cell Phone Services
Andre Yang, University of Lethbridge, Lethbridge, Canada
Dr. Sajjad Zahir, University of Lethbridge, Lethbridge, Canada
Dr. Brian Dobing, University of Lethbridge, Lethbridge, Canada
An increasing number of companies now provide cell phones for their employees. However, these organizations find selecting cell phone services to be complex, with literally hundreds of rate plans, coverage areas, and other factors to consider. A cell phone service vendor selection decision support system is designed and developed to determine the most cost-effective vendor and plans. Current business plans, including pooling plans designed for business users in the local market, are incorporated into this system. A Search Decision Rule (SDR)-based algorithm, written in VB.NET, determines the most cost-effective vendor and plans. Critical non-cost factors which affect the selection process are determined from a survey conducted in the local community. Finally, an Analytic Hierarchy Process (AHP)-based decision model is adopted to facilitate this decision-making process. Cell phones have achieved high levels of market penetration in a relatively short time. According to the Canadian Wireless Telecommunications Association (CWTA, 2006), more than half of all Canadians are cell phone customers and 47% of all phone connections in Canada are wireless. For many organizations, equipping their employees with cell phones is an accepted operational cost. The industry-analyst firm Yankee Group estimates that businesses now spend a quarter of their telecommunications budgets on cell phone expenses (Allianceone, 2006). In Massachusetts, over 10% of cell phone bills are paid by employers (Cummings & Smith, 2005). Most areas have multiple cell phone service providers, and each typically provides a wide variety of plans with different cost structures. One Canadian company, which is admittedly in the business of helping organizations reduce their cell phone costs, claims that most companies are actually spending 20-50% more than they need to (Allianceone, 2006). There are several reasons for this. 
First, finding the most cost-effective plan from among so many choices is complex and time-consuming. Second, each employee can have a different calling pattern in terms of total minutes, where the calls are originating from or going to, and when the calls are placed. Often there is no single plan that is best for everyone. Third, cost is not the only factor to consider; service quality varies as well. Moreover, plan costs, calling patterns and service quality are constantly changing. While larger organizations can use specialized consulting companies and have the volume to get special discounts, smaller businesses are often very much on their own to determine which plan(s) is best for them. The goal of this research is to develop a Multi-Criteria Decision Support System (MCDSS) to help organizations, particularly small businesses, determine the best cell phone plans for their employees. Finding the lowest cost plan is relatively straightforward using a computer-based system; each calling pattern can be compared over all plan cost structures. To incorporate non-cost factors, a survey was conducted among small businesses to determine which they considered to be most important. These factors were then integrated into the MCDSS using the Analytic Hierarchy Process (AHP). The system allows decision makers to have different preferences for the importance of non-cost factors, different rankings for how each service provider performs on these factors, and different weightings between cost and non-cost factors overall. This research was conducted in a small city (population under 100,000) in Canada in 2006. At that time, there were four major cell phone providers, two owned by the same company, offering a total of ninety business plans with different rates. The rates have a similar structure across different plans. The main elements are a fixed monthly cost, per minute rates and additional options. 
The fixed monthly cost covers service fees (including system access and 911 emergency services) and often includes an allotment of “free” minutes. Once these have been exhausted, per minute rates come into effect. Per minute rates are based on when the call is placed and its origin and destination. Canadian cell phone service providers usually divide location into three categories: local, long distance within Canada and from Canada to the U.S., and long distance from the U.S. to Canada. Within each category, the minutes can be classified by time as weekday, evening, and weekend. Each plan can provide different free minute allotments and different per minute rates for each of the nine combinations of location and time. However, each provider currently has identical weekend and evening rates so there are actually only seven call types to consider. Some plans offer additional options that customers can select according to their needs and usage patterns (e.g., caller ID). Some affect costs, such as special rates when calling other cell phones from the same provider and business pooling plans that allow a group to share unused free minutes.
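The per-plan cost evaluation at the core of this search can be sketched as follows. The original system is written in VB.NET; this Python sketch mirrors the structure described above (fixed monthly fee, free-minute allotments, and per-minute rates by call type), but the plan data, call-type names, and rates are hypothetical, and the seven call types are simplified to three for illustration:

```python
# Sketch of the monthly-cost comparison the SDR-based search performs:
# for each call type, usage beyond the plan's free-minute allotment is
# billed at that plan's per-minute rate. All plan data are hypothetical.

def monthly_cost(plan, usage):
    """Cost of one employee's calling pattern (minutes per call type) on a plan."""
    cost = plan["fixed_fee"]
    for call_type, minutes in usage.items():
        free = plan["free_minutes"].get(call_type, 0)
        rate = plan["rates"].get(call_type, 0.0)
        cost += max(0, minutes - free) * rate
    return cost

def best_plan(plans, usage):
    """Exhaustively compare all plans for one calling pattern; lowest cost wins."""
    return min(plans, key=lambda p: monthly_cost(p, usage))

plans = [
    {"name": "Plan A", "fixed_fee": 25.0,
     "free_minutes": {"local_weekday": 200, "local_evening_weekend": 1000},
     "rates": {"local_weekday": 0.30, "local_evening_weekend": 0.20,
               "long_distance_canada": 0.35}},
    {"name": "Plan B", "fixed_fee": 40.0,
     "free_minutes": {"local_weekday": 450, "local_evening_weekend": 1000},
     "rates": {"local_weekday": 0.25, "local_evening_weekend": 0.15,
               "long_distance_canada": 0.30}},
]

usage = {"local_weekday": 400, "local_evening_weekend": 300,
         "long_distance_canada": 60}

print(best_plan(plans, usage)["name"])  # Plan B: its higher fee is offset
                                        # by a larger free-minute allotment
```

Because each employee's calling pattern is evaluated against every plan independently, this kind of exhaustive comparison also illustrates why no single plan is best for everyone: a heavy weekday caller and a heavy long-distance caller can land on different winners.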
The Restructuring of the Banking Sector in Turkey After the Last Financial Crisis and Its Cost
Dr. İlhan Uludag, Professor, Kadir Has University, Istanbul, Turkey
The purpose of this paper is to analyze the costs of the Banking Sector Restructuring Program. The results show that the state-owned banks, in particular, experienced the most radical change during this process. The process of restructuring the state-owned banks has four components: i) financial restructuring, ii) operational and technological restructuring, iii) organizational and human resources systems, and iv) market structure. The most remarkable point in this whole process was the cost resulting from the high interest paid by the state-owned banks in order to fund their short-term debts and daily liquidity requirements. Consequently, it is obvious that banking crises create additional taxes and lead to a decline in spending, in addition to the burden they create in terms of lost national income. This burden is shouldered by the citizens of the country. Conditions prevailing in the banking sector were one of the main reasons behind the last financial crises, witnessed in November 2000 and February 2001 in Turkey. Although necessary regulations were enacted in the sector in terms of regulation, supervision, and risk management as part of the stability program launched before the crisis, they had not yet been fully put into force. In addition, the failure to take the steps necessary to find lasting solutions to the problems facing the sector made the banking crisis unavoidable. The inefficient operation of the banking sector, its failure to perform its intermediation functions due to the funding of public debts, the high risks taken due to the exchange rate policy in force, and finally the negative effects of the state-owned banks on the market mechanism may be listed as the main reasons for the banking crisis. The restructuring of the sector has been based on four fundamental factors aimed at achieving the following goals: the restructuring of the state-owned banks financially and operationally, and finding solutions to the problems faced by the banks handed over to the SDIF. 
Ensuring that private banks which were affected by the crisis acquire a healthy structure. Increasing the effectiveness of the regulation and supervision systems in order to help the banking sector become more efficient and competitive (Ersoy, 2007). The "Program for Transition to a Strong Economy" was put into action on May 15, 2001. The primary goal of the program is to restore stability in the financial markets by strengthening the financial structure of banks, as a result of efforts to eliminate the problems which could not be resolved within the system and the risks which arose under the previous program, and to restructure the banking sector financially and operationally. According to the program, restructuring efforts in the Turkish banking sector were aimed at achieving the financial and operational restructuring of the state-owned banks, finding solutions to the problems faced by the banks controlled by the SDIF as soon as possible, and ensuring that private banks which were operating with a fragile structure become healthier. In addition, the program includes legal and institutional measures aimed at increasing the efficiency of supervision and regulation of the banking sector and enabling the sector to have a more efficient and competitive structure. The table in the Sixth Progress Report, released on April 21, 2003 with regard to the "Banking Sector Restructuring Program," shows that restructuring the state-owned banks was among the four main components of the program, which BRSA put into action in May 2001. Over the past five years, different reform packages proposed by the IMF and the World Bank and aimed at restructuring the national economy were put into action in Turkey, one of the countries maintaining relations with international finance institutions. The restructuring of the banking sector is described as one of the major components of the reform packages in the letters of intent signed, particularly with the IMF. 
The financial structures of the state-owned banks, which played an important role in the exacerbation of the financial crises witnessed in 2000 and 2001, worsened during and after the crises and posed a threat to the healthy functioning of the banking system. Daily cash deficits, which reached high levels, made those banks more vulnerable to liquidity and interest rate shocks. Receivables arising from "duty losses" represented the most important reason behind the deterioration of the financial structures of the state-owned banks. Duty losses resulting from subsidy policies were usually not paid punctually by the state in cash.
The Persuasive Effect of Popularity Claims in Advertising: An Informational Social Influence Perspective
Yu-Yeh Chiu, National Taiwan University, Taiwan
This study explores the persuasiveness of popularity claims embedded in advertising and determines that popularity claims delivered by highly expert sources lead to higher advertising believability and more favorable brand attitudes than those from inexpert sources. In addition, consumers’ ad skepticism moderates the effects of popularity claims. Results of the experiment suggest that when consumers’ ad skepticism is low, popularity claims with highly credible sources yield higher advertising believability and more favorable brand attitudes than popularity claims from less credible sources. However, when consumers’ ad skepticism is high, there is no difference between popularity claims, regardless of the credibility of the sources. Marketing literature demonstrates that others’ opinions can influence a consumer’s evaluation of a product (Burnkrant and Cousineau, 1975; Cohen and Golden, 1972; Wooten and Reed II, 1998). The potency of this social influence on a person’s attitude and behavior can be great, especially if exerted by a majority (Areni, Ferrell, and Wilcox 2000; Darke et al., 1998; Mackie, 1987; Maheswaran and Chaiken, 1991). Processes of social influence also likely occur in an advertising context. For example, advertisers and marketers often use popularity claims in an attempt to persuade consumers by asserting that a majority of consumers prefer, are satisfied by, or use the advertised product. Actual advertisements featuring popularity claims use a variety of sources, including Crest’s famous claim that “eight out of ten dentists recommend Crest.” Although popularity claims are common in advertisements, few empirical studies examine their persuasiveness. Popularity claims may be persuasive because advertisers provide information indicating that most consumers have a certain opinion that other consumers accept as evidence of the product’s true nature. 
People assume that the majority opinion correctly reflects reality and infer that the product that a majority of consumers prefer or use must be a good product. When people do not know how to behave, they often look to others’ opinions as cues. This process becomes informational social influence when consumers accept majority opinion as evidence about a reality. Informational social influence occurs most often when others are experts or perceived as knowledgeable. Thus, we investigate how the source credibility of popularity claims may influence the persuasiveness of popularity claims. It is reasonable to assume that the opinions derived from experts or knowledgeable sources should be perceived as more appropriate and useful information and make the claims more persuasive. Some studies note that consumers’ skepticism toward advertising (i.e., ad skepticism) influences their evaluations of specific advertisements (Hardesty, Carlson, and Bearden, 2002; Obermiller, Spangenberg, and MacLachlan, 2005). Skepticism toward advertising, defined as a tendency toward disbelief of advertising claims, has become a widespread marketplace belief through consumer socialization and marketplace experiences (Obermiller and Spangenberg, 1998). For example, Hardesty, Carlson, and Bearden (2002) find that ad skepticism moderates the effects of advertised price claims, such that less skeptical consumers believe the advertised price claims for a less familiar brand and offer higher evaluations when the advertisement features high invoice prices compared with low invoice prices. In contrast, more skeptical consumers tend to disbelieve and disregard the advertised price claims for less familiar brands, so advertised invoice price claims do not yield a positive effect. 
According to these findings—namely, that consumers’ ad skepticism influences the believability of advertising claims and their evaluations of the advertised product—ad skepticism also may moderate the effects of popularity claims on advertising believability and brand attitude. Therefore, this study explores the effectiveness of popularity claims delivered by high- or low-credibility sources embedded in advertising and considers the moderating effects of consumers’ ad skepticism. We posit that popularity claims delivered by highly credible sources will lead to higher advertising believability and brand attitudes than popularity claims delivered by low-credibility sources. When subjects’ ad skepticism is low, brand attitude and advertising believability should be greater in the highly credible popularity claim condition than in the less credible popularity claim condition. However, when subjects’ ad skepticism is high, they likely disbelieve the advertising claims regardless of credibility, so their evaluations should not differ across the two conditions. Little research has applied the social influence perspective to an advertising context, but we draw on notions of social influence to explain the mechanisms by which advertisements containing popularity claims may work best. The results of this study can help consumer researchers, advertisers, and marketing practitioners better manage the persuasiveness of advertisements that contain popularity claims and provide a stronger understanding of popularity claims.
A Study on Process Capability Index Applied to Banking Service
Chui-Yen Chen, Yuan-Ze University and Lecturer, Chin-Min Institute of Technology, Taiwan
With Taiwan's entry into the World Trade Organization (WTO), local banks face not only local rivals but also foreign competitors. Operating in a fiercely competitive business climate, banks’ ability to overcome obstacles and survive will depend on whether their business management is efficient and their quality of service can meet customers’ expectations and demands. Generally, available banking services can be classified into local and international service affairs. Banks focus on both local and international customers by opening dedicated service windows. To allow for highly efficient and flexible service, banks oftentimes establish windows for common services, able to handle both local and international banking business. The number of common service windows should take into consideration the workload during both peak hours and off-peak hours. This research focuses on experimental subjects from general commercial banks. The process capability index (PCI) is applied to explore the efficiency of banking service during both peak hours and off-peak hours. The aim of this study is to give banking service high efficiency and improved service quality; these advantages will attract more customers for bank management. For the past 20 years, Taiwan's economic conditions have continued to develop. The government recognizes that further industrial improvement must be based on banking services marked by high efficiency and quality. Thus, the Finance Ministry, abiding by the bank codes stipulated and amended in July 1989, permitted the opening of 15 new banks in June 1991. Banks that had long been at ease started to face new competition. People are free to select any bank they wish, and this situation places banks in a business climate full of competitors. With Taiwan's entry into the WTO, however, local banks face not only local rivals but also foreign competitors. 
Consequently, under such fierce competition, a bank’s ability to survive will depend on whether its business management is efficient and its quality of service can meet customers’ expectations and demands. For banks, an era emphasizing service quality is coming. Customers are the very parties for whom banks provide their service; customers’ assessments of service quality, therefore, carry great weight. Determining the optimal number of service windows available for banking service is of critical concern, based on the customer average arrival rate, average wait-for-service time, and average receiving-service time. These measures are closely connected with service quality. Generally, banking service can be classified into two types, namely local affairs and international affairs. Banks open dedicated service windows to focus on these two service items. However, taking the higher efficiency and flexibility of banking service into consideration, banks often open windows for common service to take care of both local and international banking affairs. The number of service windows opened will directly affect the process by which customers receive service, including the measures of wait-for-service time and customer receiving-service time. Furthermore, the number of service windows, either dedicated or common to the said items of banking affairs, should also take peak hours and off-peak hours into consideration. Thus, this research is undertaken with general commercial banks as our experimental subjects. The process capability index (PCI) is applied to explore the performance of banking service. By means of this research, service efficiency and quality can be further improved so that banks can obtain a competitive edge and more customers. This research aims to explore the wait-for-service and service-completing times for customers visiting banks. 
It is hoped that the service time can be shortened to allow banks to provide higher service efficiency and shorter wait-for-service times for the customer. The expected experimental results will include: 1. Knowledge about customer average service time during both peak hours and off-peak hours. 2. Analysis of customer average service time at both local and international service windows. 3. Expectations that by applying PCI, a service time table can be created to improve banking service quality, which will positively enhance competitiveness. In summarizing the results of the aforesaid research, we can offer our suggestions to banks located in various areas so that they can improve service quality and win more customers. In this article, we review two streams of literature, on PCI and on evaluating service quality. In recent years, there have been many studies related to process capability indices. PCI analysis information can be used to set the boundaries of process control; therefore, process capability indices are an effective tool for evaluating process quality. Many scholars have completed investigations in this field during the past few years, including Kane (1986), Chan et al. (1988), Boyles (1991), Pearn et al. (1992), Chen (1995), and Cheng (1994).
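To illustrate how the standard indices from this literature might be computed for service times, the following sketch uses the textbook definitions Cp = (USL − LSL)/6σ and Cpk = min((USL − μ)/3σ, (μ − LSL)/3σ). The specification limits and sample service times below are hypothetical, not data from this study:

```python
# Illustrative computation of the standard process capability indices
# (Cp, Cpk) for per-customer service times. Limits and data are hypothetical.
import statistics

def cp(times, lsl, usl):
    """Cp = (USL - LSL) / (6 * sigma): potential capability, ignoring centering."""
    sigma = statistics.stdev(times)
    return (usl - lsl) / (6 * sigma)

def cpk(times, lsl, usl):
    """Cpk = min((USL - mu)/(3 sigma), (mu - LSL)/(3 sigma)): penalizes off-center processes."""
    mu = statistics.mean(times)
    sigma = statistics.stdev(times)
    return min((usl - mu) / (3 * sigma), (mu - lsl) / (3 * sigma))

# Hypothetical service times (minutes) sampled during peak hours,
# against a hypothetical service-time window of 1 to 9 minutes.
times = [4.2, 5.1, 3.8, 6.0, 4.9, 5.5, 4.4, 5.2]
print(f"Cp  = {cp(times, 1, 9):.2f}")
print(f"Cpk = {cpk(times, 1, 9):.2f}")
```

Comparing such indices across peak and off-peak hours, or across local and international windows, is one way the service-time tables this study proposes could flag where capability falls short.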
A Resource Allocation Process Model of a Firm’s New Technology in China: Based on a Mechanical Industry Case
Dr. Zhilong Tian, Huazhong University of Science & Technology (HUST), Wuhan City, China
Wenchuan Wei, Huazhong University of Science & Technology (HUST), Wuhan City, China
Through a case study based on a firm in the mechanical industry, we find that investment decision-making on a firm’s new product technology is a process of selection, bargaining, and learning over time, spread over multiple levels of management. It is influenced by the strategic context, the organizational context, the organization's distinctive capabilities, customers, and capital providers. New product technologies that are in alignment with a firm's strategy and its organizational measurement and reward systems are more likely to win the favor of strategic capital. This paper provides a new insight into analyzing a firm's behavioral patterns and their evolution over time. In the process of transforming to a market economy, enterprises in China have encountered several problems: backward technology, a lack of technological creativity, dependence on imported technology, and a shortage of international competitiveness. These problems have drawn the attention of the government, scholars, and entrepreneurs. How to make decisions on technology investment for new products has become a valuable research field. Most of the research by scholars has focused on the meaning of technological innovation, the process of technological innovation, the diffusion of innovation, the assessment of risks, and management (Amabile, 1988; Rusell & Russel, 1992; Fujiaji, 1996; Luowei, Lianyanhua & Fangxin, 1995). However, in the past 20 years, as the industrial environment and technological trends have changed, most research concerning investment decisions on new product technology in China has adopted quantitative methods such as project optimization decision models and financial management theories to help enterprises make decisions, while ignoring the selection behaviors of managers at all ranks who compete for scarce internal resources, as well as the organizational context, motivation systems, information systems, strategic context, investors, customers’ demands, etc. 
This paper uses a case from a mechanical enterprise, with the framework and research method from the resource allocation process (RAP) models of Bower (1970) and Burgelman (1983), combined with resource dependence theory and resource-based view (RBV) theory, to explore a technological investment decision-making process model for new products in Chinese enterprises and its relationship with enterprise strategy and the RBV, so as to improve the efficiency of enterprise resource allocation. Das (1994) defined new technology as new knowledge, a new process, or a combination of both that can meet the functional demands served by current technologies. He divided new technology into two categories: evolved technology and novel technology. The former refers to improvements, such as in structure, references, and function, built on the basis of an original technology, e.g., improved circulating fluidized bed combustion technology vs. chain-grate boiler combustion technology; the latter refers to technology for which there is initially only limited market demand and which has not yet been recognized by the main markets, e.g., oil-gas combustion technology vs. chain-grate boiler combustion technology. In the market, these two kinds of technology compete with each other and satisfy the same functional demands. This parallels Christensen and Bower's (1996) division of new technology into sustaining technology and disruptive technology. This paper adopts Das’ definition of new technology. Among the theories of enterprise resource allocation are the RAP model of Bower, resource dependence theory (Pfeffer & Salancik, 1978), and the resource-based theory (Wernerfelt, 1984). 
RAP theory explores the relationship between the competitive process for resources and strategy formation in enterprises; RBV theory attends to the relationship among resources, capabilities and strategic decision-making in enterprises, paying particular attention to how enterprise resources influence the external competitive process and its outcome; resource dependence theory addresses the connection between enterprises and the external environment. All three theories focus on the influence of an enterprise's resources on its strategy, and their overlap provides a complementary view of how an enterprise's strategy is shaped. Therefore, combining these theories to observe the same enterprise activities can not only test the viewpoints of the original theories but also generate new ideas that aid understanding of them (Bower, 1970). Bower (1970), in his research on RAP in multi-business firms, found that RAP is a process involving multiple organizational levels, i.e. a bottom-up process formed by definition and impetus processes, known as the B model (Table 1). Resource allocation is a process of selection among many proposals competing for scarce internal resources; it is a strategic action that determines the strategic track of an organization (Bower & Doz, 1979). Strategy researchers consider the B model an effective model for studying strategy processes. For example, Burgelman (1983, 1984) used the B model to study internal corporate entrepreneurship and strategic business exit.
Weather, Investor Sentiment and Stock Market Returns:
Evidence from Taiwan
Hui-Chu Shu, National Taiwan University
This study examines the influence of weather on stock market returns and investor sentiment in the Taiwan Stock Exchange. The weather variables examined consist of temperature, humidity, and barometric pressure. The empirical results show that stock market returns and investor sentiment are significantly correlated with weather: the better the weather, the higher the returns and investor sentiment. Notably, this weather effect is more pronounced for individuals than for institutions. The findings of this study support the psychological argument that weather influences investor mood, which in turn alters investing behavior, and hence stock prices. Moreover, individual investors are shown to be more likely to diverge from rationality in their investments than are institutional investors. Building on the foundations of rational behavior and the efficient market, traditional financial theory assumes that people always assess the accuracy and probability of possible outcomes with the aim of maximizing expected utility, and that asset prices correctly and immediately reflect all correlated information. However, a growing number of empirical studies have found anomalies in stock markets that cannot be fully explained in traditional ways. Financial economists have therefore attempted to interpret real phenomena in financial markets from the perspective of cognitive psychology; hence, the field of behavioral finance emerged and opened a new area of economic and financial research. Weather has long played an important role in human life, and its impact on mood and behavior has been well explored by psychologists (see, e.g., the reviews of Howarth and Hoffman, 1984, and Keller et al., 2005). As behavioral finance has become one of the mainstream theories, the relationship between weather and stock returns has attracted considerable attention.
Drawing on the psychological literature, some studies have argued that the mood fluctuations induced by weather actually influence investors' evaluations of assets, even though weather is irrelevant to asset fundamentals. Research on the relationship among weather, investor mood and stock returns is of interest because, if stock prices are influenced by economically neutral yet mood-related factors, this influence casts a shadow on the efficient market hypothesis. However, previous research on the effect of weather on stock returns has reached no consensus. Most importantly, whether weather actually alters investor sentiment and behavior has not been demonstrated. The assumption that weather influences stock prices via investor mood fluctuation makes sense only if there is a clear association among weather, investor sentiment and stock market returns. This study aims to fill the gap by investigating the relationship between weather and stock returns, as well as between weather and investor sentiment. This issue is important for three reasons. First, if investor behavior is driven by weather-induced mood rather than reason, this indicates that psychological state can significantly affect investment decision-making and, therefore, that investor behavior may not be fully rational, which in turn results in market inefficiencies. Second, if stock price fluctuation can be attributed to weather variations, it implies that the valuation by irrational investors cannot be compensated for by the rationality of others; in addition, the factors or events that influence investor psychology have a systematic effect on asset prices regardless of their effects on asset fundamentals. Third, identifying the influence of weather on investor sentiment not only links mood and investment behavior, but also bridges the gap between environmental psychology and financial theory.
This study focuses on the Taiwan stock market, which possesses three features that make it especially suitable for this analysis. First, a potential criticism of prior research is that such studies link market returns to the weather in the city where the stock exchange is located, even though market participants are geographically dispersed. In large countries, the weather in different regions may be quite distinct, so stock exchange weather will not be representative of the weather all investors experience. Since Taiwan is a small country, the weather in different parts of the country tends to be quite similar, making studies of the Taiwan Stock Exchange (TSE) largely immune to this concern. Second, the Taiwan market is dominated by individual traders and is highly volatile. Given that previous literature has suggested that individuals are more likely to make irrational decisions than institutional investors, and that higher complexity and uncertainty increase the influence of mood on decision-making, if mood plays an important role in asset pricing, the weather effect should be pronounced in such a market. Third, prior studies have proposed that floor traders (Saunders, 1993) or market makers (Goetzmann and Zhu, 2005) are responsible for the relationship between weather and stock returns. As neither kind of trader exists in the TSE, whether weather still influences stock prices there is a meaningful subject worth exploring. If weather matters to Taiwan stock market returns, it provides evidence that the mechanism by which weather influences asset prices works through the mood of most investors, rather than through that of a few individuals.
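The central association the paper tests can be illustrated with a toy regression of daily returns on a single weather variable. This is a minimal sketch, not the study's actual estimation: the figures below are hypothetical, and a serious test would use all three weather variables and control for calendar effects.

```python
# Illustrative sketch (hypothetical data): regress daily market returns
# on a weather variable with ordinary least squares, in pure Python.

def ols(x, y):
    """Simple OLS of y on x with an intercept; returns (alpha, beta)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    beta = cov / var
    alpha = my - beta * mx
    return alpha, beta

# Hypothetical daily observations: temperature deviation (deg C) and return.
temp = [-2.0, -1.0, 0.0, 1.0, 2.0]
ret = [-0.004, -0.001, 0.000, 0.002, 0.003]

alpha, beta = ols(temp, ret)
# A positive beta would be consistent with "better weather, higher returns".
print(round(beta, 5))
```

In practice the slope's statistical significance, not just its sign, would be assessed, and investor-sentiment proxies would be regressed on the same weather variables.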
Bankruptcy Costs and Bond Valuation
Dr. Yan Alice Xie, University of Michigan-Dearborn, Dearborn, MI
Dr. Howard Qi, Michigan Technological University, Houghton, MI
Dr. Hongming Huang, National Central University, Taoyuan, Taiwan
Bankruptcy costs are an important factor in valuing a firm and its debt. The Leland-Toft (1996) model is one of the most important firm valuation models that consider bankruptcy costs and corporate debt tax shields. However, it treats bankruptcy costs as a fraction of the (unobserved) endogenously determined bankruptcy threshold, making the model difficult to test empirically and to implement in practice. This paper provides an approach to resolve these difficulties. Our model is easier to implement and test while preserving the attractive features of the Leland-Toft model, such as endogenously determined leverage. Bankruptcy costs are an important factor in valuing a firm and its debt. While many firm valuation models built on the tradeoff theory agree that default likelihood and bankruptcy costs are the two major factors in pricing a risky bond, how to measure bankruptcy costs and account for them in a model still poses great challenges for empirical tests and model application. For example, the models of Leland (1994) and Leland and Toft (1996) (hereafter the LT model) show how to endogenously optimize capital structure and thereby predict the bond value. However, in these models bankruptcy costs are measured as a fraction of the endogenously determined bankruptcy threshold. Despite the elegance of the model, one difficulty is that the threshold is unobserved, so the bankruptcy costs may be difficult to account for accurately. Empirically, this makes advanced methods such as Maximum Likelihood and the Kalman Filter less reliable. This paper develops a model by modifying the LT model to address this issue. In particular, we measure bankruptcy costs based on the bond's face value. This allows us to incorporate bankruptcy costs more accurately in the model because of the greater availability of the data. At the same time, we preserve the desirable features of the original LT model, such as the endogeneity in optimizing the capital structure.
Next we review these issues in greater detail. Leland and Toft (1996) propose a tradeoff model that endogenously determines the bankruptcy boundary by maximizing equity value and firm value at the expense of bondholders. The LT model, investigating the tradeoff between corporate tax advantages and bankruptcy costs, shows that issuing longer-term debt better exploits the corporate tax advantages because the bankruptcy boundary can be endogenously set at lower asset values. However, the LT model is difficult to test empirically and to apply in practice, because the endogenously determined bankruptcy boundary is unobservable and thus the bankruptcy costs cannot be accurately incorporated into the model. Due to this technical difficulty, the model's performance can be questionable in some situations; for example, it may noticeably overpredict the credit spread (see, e.g., Johnson and Qi (2006)). From a practical point of view, a reason to modify the LT model is data availability. The PACER (Public Access to Court Electronic Records) service is the most comprehensive bankruptcy database, providing the full-text source for bankruptcy documents, including the debt/assets ratio, where assets are recorded immediately pre-bankruptcy (see, e.g., Bris, Welch and Zhu (2006)). This allows us to infer bankruptcy costs directly as a simple function of debt face value; in particular, bankruptcy costs may be deduced as a fraction of the face value using the information available in the PACER system. This provides a much more reliable measurement of bankruptcy costs, facilitating empirical tests of the model and setting a benchmark for predicting bond prices of similar firms. Moreover, a recent paper by Davydenko (2007) finds that for an “average firm”, the default threshold is about 72 percent of the face value of debt.
Therefore, it would be desirable to modify the LT model so that the bankruptcy costs can be expressed as a fraction of the face value of debt rather than as a fraction of some unobserved, endogenously chosen default boundary. From a theoretical point of view, there is also a reason for our modification of the LT model. The LT model assumes that as long as the firm can raise additional capital from equity holders to service the debt interest payments, the bondholders have no power to force a bankruptcy, even when the firm is operating with negative net worth. The rationale that a firm can operate with negative net worth is that, with higher asset volatility, equity holders find it more attractive to contribute new equity capital to make the bond coupon payments and keep the firm alive: with higher asset volatility, the firm has a better chance of making larger profits, while bondholders bear the brunt of the hit if the business runs into trouble. This can be better understood if we view equity as a call option on the firm's assets and the bond as a short put option with a strike price equal to the debt face value. The volatility of the underlying asset (the firm's assets) has a positive relationship with the corresponding option value. Therefore, with higher asset volatility, the firm finds it easier to raise new equity capital to service the debt interest payments, and declares bankruptcy only once the new equity capital is not enough to meet the debt service. This can result in a bankruptcy boundary lower than the face value of the firm's debt. (1) That the firm can choose such a low bankruptcy boundary implies that the bondholders would recover very little once the firm declares bankruptcy. The lower the bankruptcy boundary, ceteris paribus, the smaller the bond value and hence the higher the credit spread. Perhaps this is why the LT model sometimes overpredicts the credit spreads on risky bonds.
However, this may not be realistic, since it implies that a firm can essentially drag on (until the bond's maturity) just by paying the coupons, and the coupons can be far below the face value. This is unlikely to be true, since bondholders would not wait passively until the bond's maturity; they could force a bankruptcy before the firm value drifts too far below the face value of debt, which essentially sets a lower bound for the bankruptcy boundary. Conversely, suppose that in an ideal case the bondholders can take immediate action once the firm value drops to the face value of the bonds. Then the reasonable choice of bankruptcy threshold would be the face value. However, this is unlikely to hold in reality either: there is evidence that the absolute priority rule (APR) can be violated (e.g., Weiss and Capkun (2007), Eberhart, Moore and Rosenfeldt (1990), Betker (1995)). Bris, Welch and Zhu (2006) find that APR is always followed in Chapter 7 liquidation, while about 12% of the firms in their Chapter 11 reorganization sample had APR violated. Taken together, a fraction of the bond's face value (not 100 percent, due to the possibility of APR violations) appears to be a convenient way of measuring the bankruptcy loss.
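The equity-as-call-option view invoked above can be sketched numerically. This is a minimal Merton-style illustration, not the Leland-Toft model itself; the asset value, face value, rate and volatilities below are hypothetical.

```python
# Sketch of the option framing: equity as a European call on firm assets
# with strike equal to the debt face value. Inputs are hypothetical.
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def equity_as_call(V, F, r, sigma, T):
    """Black-Scholes call value: asset value V, debt face value F,
    risk-free rate r, asset volatility sigma, horizon T (years)."""
    d1 = (log(V / F) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return V * norm_cdf(d1) - F * exp(-r * T) * norm_cdf(d2)

V, F, r, T = 100.0, 80.0, 0.05, 5.0
low_vol = equity_as_call(V, F, r, 0.2, T)
high_vol = equity_as_call(V, F, r, 0.4, T)
# Higher asset volatility raises equity (call) value, which is why equity
# holders of volatile firms are more willing to inject capital, as argued.
assert high_vol > low_vol
```

The same comparative static drives the text's argument: equity value rises with asset volatility, so equity holders of riskier firms tolerate lower bankruptcy boundaries.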
Health, Human Capital and Economic Growth: An International Comparison Study
Mei-Luan Huang, Nan-Jeon Institute of Technology, Tainan, Taiwan
Jen-Te Hwang, National Chengchi University, Taipei, Taiwan
Mei-Rong Chen, Bureau of National Health Insurance, Taiwan
In recent years, studies of the relationship between health human capital and economic growth have gradually increased. This study adopts a Cobb-Douglas production function and uses panel data from 1993 to 2003 for OECD countries and Taiwan to conduct an empirical study. The aim is to explore the influence on economic growth, the contribution rate, and the external effect of health human capital, education human capital, and physical capital respectively. The results show that human capital contributes most to the economic growth of the OECD countries and Taiwan; within human capital, health human capital is the largest factor, education human capital is second, and the external effect of human capital is next. Although the contribution rate of health human capital in the high-income country group is lower than in the low-income country group, human capital in the high-income group exhibits an external effect: the high-income group leverages education human capital and physical capital relatively more through the external economy effect. In the low-income group, however, the contribution of health human capital is reflected directly in health human capital itself, so human capital there shows no external effect. Economic growth has been the goal of every country's endeavors and is an important indicator of the quality of life of a country's people. Hence, economic growth has long been a center of gravity of macroeconomic study. From the viewpoint of macroeconomics, the two main production factors in economic growth are labor and capital, and the aggregate production function of a country may be simplified as Q = F(L, K), where Q represents output, L the labor production factor and K the capital production factor.
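The aggregate production function Q = F(L, K) is often specialized to the Cobb-Douglas form Q = A·L^α·K^β. A minimal sketch, on hypothetical noise-free data rather than the study's panel, of how its parameters can be recovered via the log-linearized per-worker form under constant returns to scale:

```python
# Illustrative Cobb-Douglas sketch (hypothetical numbers): with
# Q = A * L**alpha * K**beta and alpha + beta = 1, the per-worker log form
#   log(Q/L) = log(A) + beta * log(K/L)
# is linear, so beta can be recovered by simple OLS.
from math import log

A, beta = 2.0, 0.3
alpha = 1.0 - beta

def output(L, K):
    return A * L**alpha * K**beta

# Hypothetical (L, K) observations; generate Q from the known parameters.
data = [(10.0, 20.0), (15.0, 50.0), (20.0, 90.0), (25.0, 160.0)]
x = [log(K / L) for L, K in data]
y = [log(output(L, K) / L) for L, K in data]

n = len(x)
mx, my = sum(x) / n, sum(y) / n
b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
    sum((xi - mx) ** 2 for xi in x)
print(round(b, 6))  # recovers beta = 0.3 exactly (no noise was added)
```

Empirical studies of this kind add human-capital terms (education, health) as further inputs and estimate the corresponding elasticities from panel data.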
Before the industrial revolution, the dynamic behind a country's economic growth was growth in labor; after the industrial revolution, it was the accumulation of capital. Traditional economic growth theories therefore stress mostly labor and capital. Nurkse (1953) believed that in developing countries with an ample supply of labor, the economy would naturally take off as long as there was sufficient capital. However, the facts showed that labor and capital alone do not warrant rapid economic development. Since the 1960s, many economists, such as Schultz (1961) and Becker (1964), have advanced the concept of human capital. Labor is heterogeneous and, like the capital production factor, its productivity may be increased through investment that promotes the quality of labor. Grossman (1972) even regarded health specifically as a form of human capital. Health capital differs from other human capital in that ordinary human capital affects productivity in market and non-market activities, whereas health capital also affects the total hours available for earning income or work. In other words, other investments in human capital (such as school education or on-the-job training) are rewarded with increased wages, but the return on investment in health capital comes in both increased working hours and wages. In contrast to investment in physical capital, investment in human capital takes the form of education, training and improvement of health. Both physical capital and human capital are stock concepts: capital increases through accumulation, the process of accumulation is investment, and investment is thus the increment of capital. According to Schultz (1961) and Becker (1964), the accumulation of human capital includes school education and informal education, such as on-the-job training, extended education, and medical and health conditions. Among these, the major factors affecting human capital are education and health.
Under endogenous growth theory, health is regarded as an important component of human capital with a pronounced effect on social and economic development. Emphasizing investment in health human capital is an important path to sustaining economic growth and narrowing the gap between rich and poor. Research on investment in human capital still concentrates on the effect of investment in education on economic growth, and rarely examines the effect of health human capital on economic growth. In fact, health is a merit good of a country and a necessary condition for quality citizens. Healthy citizens are an important part of promoting a country's competitive capacity and are the prime mover of sustainable development. This research therefore studies and verifies the effect and contribution of health human capital to economic growth through the rapid-growth experience of Taiwan and the OECD states. The main purposes of this research are: (1) to explore the importance, effect and contribution of health human capital to economic growth; (2) based on the economic growth experience of Taiwan and the OECD, to verify and compare the influence and contribution rates of health human capital, education human capital and physical capital separately; (3) to study the effects of health human capital in high- and low-income countries; and (4) to study the external effects of human capital based on the economic growth experience of Taiwan and the OECD states.
Measuring Customer Satisfaction and Service Quality: The Case of Croatia
Jelena Legcevic, J. J. Strossmayer University of Osijek, Croatia
The conceptualization of service quality and the development of measurement tools and techniques aimed at assessing service quality and customer satisfaction levels have been a central theme of recent years. This research examines customer expectations and perceptions of service quality in the health sector, using the original version of the SERVQUAL instrument. The research hypothesis is that, between the expected and obtained service, there is a gap regarding the dimensions of reliability, trust, tangibility, complaisance and identification between the service provider and the service user. The goal of this paper is to measure customer satisfaction and service quality in Croatia, specifically in the Croatian health sector. The research was carried out in the city of Osijek and its wider surroundings, and 434 patients were surveyed regarding the service quality of general practice doctors. Data were collected using questionnaires in two parts: the first part concerns patients' perceptions of health care doctors in general, while the second part concerns the patient's own doctor in particular. The results reveal a negative gap between perception and expectation of the given type of service. The biggest negative gaps are noted in the dimensions of reliability and identification, which shows that patients are least satisfied with the promptness of doctors' services and trust in the medical staff. The application of quality-management practices by manufacturers and service providers has become increasingly widespread. Quality is considered one of management's crucial competitive priorities and a prerequisite for the sustenance and growth of firms. The quest for quality improvement has become a highly desired objective in today's intensely competitive markets, and the issue of quality has increasingly emerged in the literature on organizational culture.
The concept of quality has been used to describe the extent to which quality is important and valued in an organization, i.e. how much the organizational culture supports and values quality (Goodale et al., 1997; Kelly and Moor, 2000; Jebston, 2001; Sureshchandar et al., 2002). Firms that are clearly interested in providing outstanding customer value would be expected to have a culture that reinforces high quality. A culture that supports quality is particularly important in service organizations, where the simultaneous production and consumption of the service makes quality control rather difficult. The measurement and management of service quality is therefore a fundamental issue for the survival and growth of service companies. Knowledge about the content and formation of perceptions of service quality enables organizations to address the areas that directly influence their competitive advantage and not to waste resources on unimportant areas. If service quality is to become the cornerstone of marketing strategy, one must have the means to measure it. The most popular measure of service quality is SERVQUAL, an instrument developed by Parasuraman, Zeithaml and Berry (1985). Their research on this instrument has been cited often in the marketing literature and has been widely used in industry (Brown et al., 1993). SERVQUAL is designed to measure service quality as perceived by the customer. Relying on information from focus group interviews, Parasuraman and colleagues identified basic dimensions that reflect service attributes used by consumers in evaluating the quality of service provided by service businesses. The dimensions, for example, included reliability and responsiveness, while the service businesses included hospital services, banking and credit card companies.
Consumers in focus groups discussed service quality in terms of the extent to which service performance on the dimensions matched the level of performance that consumers thought a service should provide. A high-quality service would perform at the level that consumers felt it should provide, and that level of performance was conditioned by customer expectations. If performance fell below expectations, consumers judged quality to be low. To illustrate, if a firm's responsiveness was below consumer expectations of the responsiveness that a high-quality service organization should have, the organization would be evaluated as low in quality of responsiveness. The focus of this paper is customer satisfaction and service quality in Croatia. Croatia, as a typical Central and Eastern European country, has undergone a complex transition from a command economy and socialist regime to a market-oriented economy and democracy. The current structure of the Croatian economy, where 70% of GDP is created in the service sector, confirms that services play an important role in (future) economic development, while the competitive pressures stemming from globalization, information and communication technology (ICT) and other integration processes, particularly potential membership in the European Union (EU), require that services increase in quality. The goal of this paper is to measure customer satisfaction and service quality in Croatia, specifically in the Croatian health sector. The research was carried out in the city of Osijek and its wider surroundings, and 434 patients were surveyed regarding the service quality of general practice doctors. The measuring instrument in this research was SERVQUAL. The results reveal a negative gap between perception and expectation of the given type of service.
The biggest negative gaps are noted in dimensions of reliability and identification, which shows that patients are the least satisfied with promptness of doctor’s services and trust in the medical staff.
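The gap analysis described above can be sketched in a few lines: for each SERVQUAL dimension, the gap is the mean perception score minus the mean expectation score, and the most negative gap marks the least satisfied dimension. The dimension names and Likert scores below are hypothetical, not the study's data.

```python
# Illustrative SERVQUAL-style gap computation on hypothetical 1-7 Likert
# scores; gap = mean perception - mean expectation per dimension.

def gap_scores(expectations, perceptions):
    """Per-dimension gaps from dicts of respondent score lists."""
    gaps = {}
    for dim in expectations:
        e = sum(expectations[dim]) / len(expectations[dim])
        p = sum(perceptions[dim]) / len(perceptions[dim])
        gaps[dim] = p - e
    return gaps

expectations = {"reliability": [7, 6, 7], "tangibility": [5, 5, 6]}
perceptions = {"reliability": [4, 5, 4], "tangibility": [5, 5, 5]}

gaps = gap_scores(expectations, perceptions)
# The most negative gap marks the dimension with the least satisfied users.
worst = min(gaps, key=gaps.get)
print(worst)
```

The full instrument aggregates 22 paired items into its dimensions and often weights dimensions by stated importance; this sketch shows only the core perception-minus-expectation logic.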
Marketing a European Experience to the Less Traveled
Patricia P. Carver, Bellarmine University, Kentucky
Dr. John T. Byrd, Bellarmine University, Kentucky
This article describes an innovative approach to implementing international experience programs in business schools where resources have limited the number of interested students, particularly where working part time to pay tuition is a necessity. The article discusses the need for international experiences driven by globalization, the reasons students do not select courses that enable them to travel abroad, and an approach to overcoming these obstacles. Finally, a description is provided of the content and approach of a small liberal arts school with an AACSB business program. The program is based on the rationale that global awareness is critical to the business student's success in today's global society, but must be realistic given the nature of the student body and the resources available to them. In the past several years, numerous business schools have attempted to internationalize their curricula, students, and faculty. Accreditation bodies insist on expanding assessment activities at the program and course level. There have also been efforts to develop a model for international business courses that connects learning objectives to delivery methods (Kashlak et al.). It has also been suggested that the globalization of business education involves more than additional courses; this view focuses on building alliances with schools internationally (Green et al.). We have also seen attempts to rank and assess the quality of programs as well as the criteria; in this instance, faculty quality, research, and the number and range of international business courses were the most mentioned criteria (Ball et al.). There is no doubt that globalization is becoming an increasingly important issue in business education, and that experiencing different cultures is a value-added dimension of any program. Business schools in the U.S. have historically ignored the cultural dimension of education, and the business community has become increasingly cognizant of this fact.
With the growth that has taken place in international business, student demand for knowledge about foreign business has increased (Adler et al.). Increasingly, students are seeking international work and immersing themselves in other cultures and languages; there are even tips available on navigating a foreign job (Lacey, p. 2). This article describes an international experience in a program whose students have not been accustomed to traveling abroad. It recognizes that many business programs have simply added courses or additional material and considered this sufficient to call the program international. Our approach, and the description that follows, is based on the long-standing position that, to be effective, any approach should focus on the goals, interests, and resources of the school, and that it requires the views and contributions of faculty, students, administrators, and the business community (Korth, pp. 325-327). With this in mind, a program was developed at Bellarmine University that recognized resource limitations, student perceptions, and faculty involvement. This is particularly true for the local business student who must work part time and faces limitations of time and resources. It is often assumed that students do not have the desire to travel abroad. This may be partly true when their only option is a semester or more of study. However, in our initial experiment of encouraging all business students to incorporate an international experience into their studies, responses included those that have been documented before (Anderson, 1996): fear of leaving the safety of familiar surroundings, academic pressures in their current area of study, and, most commonly, lack of finances.
However, the program at Bellarmine University, a liberal arts university with an AACSB-accredited program, recognized the need to give every business student the opportunity to view the global business world firsthand. Reflecting the significance of globalization, travel abroad by business and engineering students has increased in recent years from 12,000 to over 42,000 (Timiraos, 2006). The approach of the Bellarmine business program is designed to overcome time and resource obstacles. Scheduling a trip for the business students (who often have jobs) works best at the end of the spring semester but before the summer session; this timing allows students and faculty to return to the university in time to participate in the summer session. This framework allows the program to design the international experience as a “short-term comparative culture trip” that is used to “introduce aspects of another country's system” and provide an opportunity to expand students' views beyond the U.S. (Gray, Murdock & Stebbins, 2002). The shorter 21-day trip includes activities in Kufstein, Milan and the Tuscany region. The focal point of the experience is the Tuscany region of Italy, because Robert Bellarmino, the University's patron saint, was born in the hillside town of Montepulciano, Italy. The experience is intricately linked to the school's campus development and the vision of the final state of the campus. In this respect, it gives students a chance to discuss how businesses develop visions, as well as how the linkages among Bellarmine University, Montepulciano, and Robert Bellarmino connect to the reality of globalization. Students visit universities and businesses, as well as many historical sites, and accompanying university faculty provide a “safety net” for students who feel a need for additional guidance and/or guardianship.
The Impact of Data Mining on the Managerial Decision-Making Process: A Strategic Approach
Dr. Adem Ogut, University of Selcuk, Konya, Turkey
Ayşe Kocabacak, University of Selcuk, Konya, Turkey
M. Tahir Demirsel, University of Selcuk, Konya, Turkey
The amount of data used by business enterprises seems to increase without end in sight. From these data clusters, organizations must extract the relevant data in order to make appropriate decisions. Organizations need effective tools for making correct, appropriate and informed business decisions. In this context, data mining is a powerful technology supporting companies on several decision-making issues such as customer attrition, customer retention, customer segmentation, cross-selling, fraud detection, risk management, targeted ads, sales forecasting, payment or default analysis, and internal control. In the light of these considerations, the goal of this paper is to capture the state of data mining utilization for strategic decision-making processes in business organizations. Furthermore, future trends for data mining as one of the commonly used tools for the managerial decision-making process are elaborated. Companies collect and store huge volumes of data about their operations and their customers as part of their daily business in order to make effective managerial decisions (Klosgen and Zytkow, 2002). Data exist both within an organization and outside its boundaries. By combining elements of these resources and analyzing data variables with apposite methodologies, firms can increase their understanding of how effectively their strategic initiatives perform in the competitive environment (Kudyba, 2004). In this context, data mining is a collection of techniques providing an efficient way of analyzing these voluminous databases in order to succeed in business. Data mining emerged in the late 1980s and developed through the 1990s. It has its foundations in the fields of statistics and a specialized area within artificial intelligence (AI) known as machine learning. Each field has its own rules and techniques for problem solving.
A basic difference between a statistical technique and a machine learning technique lies in the assumptions made, in other words, in the nature of the data to be processed (Roiger and Geatz, 2003). Weiss and Indurkhya (1998) state that “over many years, statisticians have developed the methods that are used daily to evaluate hypotheses and to determine whether differences can be ascribed to random chance and at the heart of statistics are models of data and models of prediction that are supported by formal statistical theory” (p.12). However, classical statistical models, dominated by linear models, are now seen as models for modest, not big, data. Artificial intelligence (AI) is all about making machines think and behave like humans. AI is an area that circumscribes other areas such as logic and philosophy. The developments in AI have had a significant impact on various technology areas and intelligent systems such as operating systems, database systems and networks. AI has now evolved into various areas such as machine learning, knowledge management, expert systems, intelligent systems, data mining and knowledge discovery (Thuraisingham, 1999). As a broad subfield of artificial intelligence, machine learning is concerned with the development of algorithms and techniques that allow computers to learn. Machine learning can be more accurately described as the union of statistics and AI. In contrast to classical statistics, machine learning is well suited to big data and more powerful computing. This innovative prediction method often produces strong empirical results (Weiss and Indurkhya, 1998). Machine learning is all about learning rules from data. Essentially, machine learning techniques such as neural networks, genetic algorithms, decision trees and inductive logic programming are the ones that are used for data mining today (Thuraisingham, 1999). Data mining is best described as the union of historical and recent developments in statistics, AI, and machine learning.
These techniques are then used together to process data and find hidden trends or patterns within it. For a long time we have been accustomed to the fact that there are tremendous volumes of data filling our computers, networks and lives. Government agencies and businesses have been allocating enormous resources to collect and store these data. Nonetheless, only a small fraction of these data will ever be used, since in many cases the volumes are simply too large to manage effectively (Westphal and Blaxton, 1998). Data mining has been developed to overcome the obstacles mentioned above. In this context, data mining is the process of finding trends and patterns in data. This process aims to sort through large volumes of data and discover new information. The advantage of data mining is gathering actionable results from these data, such as increasing a customer’s likelihood to buy (Groth, 2000). According to Westphal and Blaxton (1998), “data mining is an iterative process within which progress is defined by discovery through either automatic or manual methods” (p.6).
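The pattern-finding process described above, applied for example to cross-selling, can be sketched as a toy market-basket analysis. The following is a minimal illustration, not the authors' method: the transaction data, product names, and support threshold are invented assumptions.

```python
from itertools import combinations
from collections import Counter

# Hypothetical customer transactions (illustrative only).
transactions = [
    {"checking", "savings", "credit_card"},
    {"checking", "credit_card"},
    {"savings", "mortgage"},
    {"checking", "savings", "credit_card", "mortgage"},
    {"checking", "credit_card", "mortgage"},
]

def frequent_pairs(transactions, min_support=0.4):
    """Return product pairs co-occurring in at least min_support of transactions."""
    n = len(transactions)
    counts = Counter()
    for basket in transactions:
        for pair in combinations(sorted(basket), 2):
            counts[pair] += 1
    return {pair: c / n for pair, c in counts.items() if c / n >= min_support}

pairs = frequent_pairs(transactions)
for pair, support in sorted(pairs.items(), key=lambda kv: -kv[1]):
    print(pair, round(support, 2))
```

A pair with high support (here, checking accounts together with credit cards) would flag a cross-selling opportunity; real data-mining tools automate exactly this kind of discovery at much larger scale.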
A Study on E-Resource Needs for the Dept. of Hospitality Management Students
Mu-Chen Wu, Hungkuang University Library and Chung Hua University, Taiwan
Professor Ling-Feng Hsieh, Chung Hua University, Hsinchu, Taiwan
University libraries have undergone major changes in terms of information collection, production, conveyance, and use following information and communication technology development in recent years. As stated in much of the literature, E-resources are the most frequently used and preferred type of information for teachers and students from different departments. E-resources cover online full-text and retrieval databases, E-journals, E-newsletters, E-reference resources, internet resources, etc. Hospitality Management is one of the most popular courses at the moment. It is also the core service industry in the country. In recent years, Departments of Hospitality Management have been established in colleges and universities across the nation. It has also become one of the development focuses for many schools. On the part of these schools, considerable amounts of resources have been put in to enhance the teaching environment and maintain an advantage when competing against other schools. The author has devoted himself to university library education for over 10 years. He has worked as the supervisor of the university library for the past 10 years. He not only taught students of the Dept. of Hospitality Management library resource applications, he also played the role of a consultant during resource collection. Therefore, based on the E-resource needs of students of the Dept. of Hospitality Management, analysis and studies are conducted in this research through questionnaire surveys and statistical methods. Hopefully, the students’ actual E-resource needs will be found. E-resource related data are expected to be collected for students of the Dept. of Hospitality Management in the future, which shall serve as a reference during decision-making.
As stated in much of the literature, E-resources are the most frequently used and preferred type of information for teachers and students from different departments. E-resources cover online full-text and retrieval databases, E-journals, E-newsletters, E-reference resources, internet resources, etc. Hospitality Management is one of the most popular courses at the moment. It is also the core service industry in the country. Since the establishment of the Food and Beverage Management Department at the vocational high school level in 1986, many specialists in food and beverage management have been trained. In 1995, National Kaohsiung Hospitality College was established. In recent years, colleges and universities across the nation have set up Hospitality Management courses one after another, and have actively engaged in cultivating specialists for the hospitality industry. In order to enhance service quality in the industry, many graduate institutes similar in nature have been established. It has become a development focus for schools. Considerable amounts of resources have also been put in to improve the teaching environment and promote special department features. In the face of fierce competition from other schools, it is the only way to gain competitive advantage. The author has devoted himself to university library education for over 10 years. He has worked as the supervisor of the university library for the past 10 years. He not only taught students of the Dept. of Hospitality Management library resource applications, he also played the role of a consultant during resource collection. He firmly believes that information for teachers and students alike, meeting their E-resource needs, and retrieval competence are important issues of concern. In this research, we aim to study students’ E-resource use in Hospitality Management areas in order to find out the students’ actual E-resource needs in the Dept. of Hospitality Management.
In the future, we hope to provide students and teachers of the Dept. of Hospitality Management with the best value-added information services. Specifically, the main study purposes are as follows: 1. Conduct a survey on Dept. of Hospitality Management students’ E-resource use and future needs. 2. Enhance Dept. of Hospitality Management students’ level of E-resource use. 3. Collect E-resource related data for students of the Dept. of Hospitality Management in the future, which shall serve as a reference during decision-making. 4. Provide libraries with directions for E-resource service improvement. The study scope covers the various E-resources available inside an Institute of Technology. The study subjects include first-year, second-year, and fourth-year students of the day class and night class who agreed to take questionnaire surveys. However, third-year students are not part of the study scope as they were on internship training at the time. Therefore, they are not listed among the survey samples in this research.
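The kind of questionnaire tabulation the study describes can be sketched as follows. This is a minimal illustration only: the resource categories, Likert-scale scores, and ranking criterion are invented assumptions, not the study's actual data or method.

```python
from statistics import mean

# Hypothetical five-point Likert responses (1 = never use, 5 = use daily)
# for each E-resource type; all values are illustrative only.
responses = {
    "e-journal":         [5, 4, 4, 3, 5, 4],
    "full-text db":      [4, 4, 3, 3, 4, 5],
    "e-newsletter":      [2, 1, 2, 3, 2, 1],
    "internet resource": [5, 5, 4, 5, 4, 5],
}

# Rank resource types by mean usage score as a simple needs indicator.
ranking = sorted(responses, key=lambda r: mean(responses[r]), reverse=True)
for r in ranking:
    print(f"{r}: mean={mean(responses[r]):.2f}")
```

A ranking like this could feed directly into the decision-making reference the study aims to provide, pointing acquisition budgets toward the most-demanded resource types.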
Workers’ Job Satisfaction and Organizational Commitment: Mediator Variable Relationships of Organizational Commitment Factors
Dr. Haluk Tanrıverdi, Sakarya University, Turkey
The objectives of this study are: 1) to examine the relationships between workers’ job satisfaction and the affective and continuance commitment dimensions of organizational commitment; 2) to determine whether a mediator variable exists between job satisfaction and organizational commitment factors; and 3) to understand the influence of demographic factors such as the workers’ age, educational background, job position in the organization, and gender on job satisfaction and organizational commitment. To examine the relationships between job satisfaction and organizational commitment, this research surveyed 595 people working at small, medium, and large organizations operating in the Marmara region of Turkey. The data collected are analyzed using SPSS 15.0 and are evaluated through factor analysis, reliability analysis (Cronbach’s alpha), regression, and correlation analysis. The more a worker takes pleasure in his/her work, the more satisfaction he/she will get from it. On the other hand, dissatisfaction indicates that the worker is not content with the organization’s reward policies and level of organizational development. The main difference between the concepts of job satisfaction and organizational commitment can be summarized as: “I love my job” versus “I love the organization I work for.” While job satisfaction relates to the worker’s attitudes toward his/her job, organizational commitment relates to the worker’s attitudes toward the organization for which he/she works. Unlike job satisfaction, organizational commitment is thought to develop slowly and not to be affected by the daily flow of the job, whereas job satisfaction is more easily influenced by workflow. Organizational commitment expresses how bonded the worker feels toward the organization.
It is believed that organizational commitment affects organizational performance positively; in this framework, organizational commitment decreases undesired behaviors such as late arrivals, absenteeism, and turnover. Important factors that inspire people to work are the expectations they hope to fulfill as a result of their exertion, and they will be happy in proportion to the extent that these expectations are fulfilled. Generally, longings and needs are closely related to the individual’s own self (Eren, 2001:501). Job satisfaction is “the emotional reaction a worker has toward his/her job after a comparison of the outputs he/she expects or desires with real outputs” (Cranny et al., 1992, p.1). Job satisfaction can also be defined as an attitude that affects an individual’s behaviors (Miner, 1992, p.116). Job satisfaction is the pleasure the worker takes from the job or job experience and the positive emotional state that occurs as a consequence. Job satisfaction can be obtained only when the characteristics of the job comply with the worker’s expectations, because it compares the worker’s expectations of the job with the prospects (rewards) the job provides. It is closely related to equity theory and the psychological contract. Regardless of his/her rank, each worker has a range of experiences concerning his/her job, his/her organization, and his/her work atmosphere by the end of his/her career. During the working years, individuals have joys and sorrows that they see, experience, and obtain. From this very accumulation of thoughts and feelings, their attitudes toward their job and their organization are formed. Because a person’s attitude toward his/her job can be either positive or negative, it would be right to define job satisfaction as “the positive psychological state occurring consequent to the person’s work experiences,” while calling the worker’s negative attitude toward his/her job work dissatisfaction (Erdoğan, 1996, p.231).
The essence of job satisfaction lies in physiological, psychological, and social needs. Personal needs make up the most important aspect of workers’ satisfaction. Nevertheless, work choice, the work itself, its place, physical conditions, its kind, the level of knowledge it calls for, its goals, its wage, relationships among workers, and safety are counted among the most important variables that affect satisfaction. Motivation theories have highly influential impacts on job satisfaction. Some of these are Maslow’s hierarchy of needs, Herzberg’s two-factor theory, Vroom’s expectancy theory, and Lawler and Porter’s distributive justice theory (Eren, 2001:495-536). Job satisfaction is a subjective evaluation of the job conditions (the job itself, the supervisor) or the outcomes obtained from the job (wage, job security). Job satisfaction is composed of internal responses an individual develops as a reaction to their understanding of the job as well as job conditions, formed by passing through their norm, value, and expectation systems (Schneider & Snyder, 1975:31). In this respect, job satisfaction is the workers’ understanding of the job and the benefits of work, and the emotional response they show to this understanding (Luthans, 1995). Though there are different approaches to job satisfaction, all these approaches hold that the concept of job satisfaction should be handled multi-dimensionally (Bell & Weaver, 1987). Oshagbemi (2000) defines job satisfaction as an emotional response that occurs as a result of the interaction between the worker’s values concerning his/her job and the gains obtained from the job. According to another opinion, job satisfaction is depicted as the positive, reinforcing emotional state that grows out of one’s job. In Schermerhorn et al.’s description, job satisfaction is the extent of the workers’ positive or negative feelings about their job.
A person’s emotional response to his/her job and to its physical and social conditions indicates to what extent his/her expectations of the job are satisfied (Schermerhorn, Hunt, & Osborn, 1994:144). In many research studies, job satisfaction is described as a complex phenomenon defined by multiple variables, and these variables are classified as those emerging from the person (personal variables) and those from the job atmosphere (Scarpello & Vandenberg, 1992:125; Simon, 1996:38). The first of these factors are those that relate to the job atmosphere and the job itself: how the individual is treated, the characteristics of the duties given, relationships with other colleagues, and rewards can all be mentioned here. Second are the characteristics of individuals and their former lives. Grouped thus into two categories, these variables affect job satisfaction by interacting with each other.
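The reliability analysis (Cronbach's alpha) named in the study's methodology can be illustrated with a minimal sketch. The formula is the standard one, alpha = (k/(k-1))(1 - sum of item variances / variance of total scores), but the three satisfaction items and their five-point responses below are invented for illustration, not the study's data.

```python
from statistics import pvariance  # population variance, a common convention here

def cronbach_alpha(items):
    """items: one response list per scale item, each with one entry per respondent."""
    k = len(items)
    item_vars = sum(pvariance(item) for item in items)
    totals = [sum(resp) for resp in zip(*items)]  # each respondent's total score
    total_var = pvariance(totals)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical five-point scale responses for three satisfaction items.
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 4, 2, 4, 3],
]
print(round(cronbach_alpha(items), 3))
```

Values of alpha around 0.7 or above are conventionally taken to indicate acceptable internal consistency of a survey scale before it is carried into factor, regression, and correlation analysis.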
Large Firms & Small Firms: Job Quality, Innovation and Economic Development
Dr. Richard Judd, University of Illinois at Springfield, IL
Dr. Ronald D. McNeil, University of Illinois at Springfield, IL
Economic development strategies and methods must change. Why? Competition for new plants or companies to locate in communities no longer comes from other communities, counties or states. Competition for plants and companies has become global in today’s flat economic landscape. Globalization of services and production along with markets for goods, capital, services and currencies impacts decision-making for all companies. However, within the United States, most federal programs for economic development are written for the economy of the 20th century, not that of the 21st century. In order to successfully compete in the global environment, some experts are abandoning traditional approaches to economic development. Rather than relying solely on recruiting large firms with tax breaks, financial incentives and other inducements, more progressive economic development experts are beginning to extend efforts to support the growth of existing enterprises and to promote the practice of building businesses from the ground up. The 21st Century Economic Development Model has three complementary features which were not part of the 20th century approach to economic development. The three features of the 21st Century Economic Development Model are: (1) development and support of entrepreneurs and small businesses; (2) expansion and improvement of the infrastructure; and (3) development or recruitment of a skilled and educated workforce. These new approaches are founded upon improved education from kindergarten through higher education; infrastructure development by the community, region, state, and country; creation and maintenance of an attractive business climate; and improvement in the quality of life within a community. The over-riding reason for the change in approach to economic development is clear: experience demonstrates that economic development strategies for attracting large firms are unlikely to be fruitful and, even if successful, may come at a great cost. 
The new “vision” is to support the innovative prowess of entrepreneurs and small businesses so that these developing ventures can produce new jobs for the community. Historically, entrepreneurs began small companies with one or two employees; however, when successful, these tiny companies grew into Ford Motor Company, Boeing Aircraft, Hewlett Packard and the like. Over time, this strategy changed into one of attempting to attract subsidiaries or plants of large firms to locate in a community but this is the strategy that is no longer working. What is occurring is a return to and refinement of the approach of the late 19th and early 20th century in the United States. The overarching question for today’s economic development experts is: Are they willing to return to a strategy that once allowed small businesses to flourish and some to become large employers? This paper addresses this issue and provides evidence in support of this 21st Century Economic Development Model. The paper will also offer recommendations for further research on the 21st Century Model, discuss whether or not public engagement in economic development itself is cost-effective, and demonstrate that economic development is an effective socio-economic pursuit. On the surface, the direct economic effects on a local economy from a large firm entering a community appear as significant gains in employment and personal income. However, the impact the large, new employer has on other firms in the area (indirect effects) may not positively affect the greater net economic impact. For example, studies show that new firm locations and existing firm expansion in Georgia, as well as location of new large firms (300+ employees), actually retarded the growth of existing firms and/or discouraged other enterprises from entering into that local community (Edmiston 2004). 
Further, a broad study of plant location (Fox and Murray 2004) indicates that locating a new plant with 1,000 workers adds, on average, a net of only 285 workers to the community over a five-year period. The new plant would add 1,000 jobs but drive away 715 other jobs that could have been generated or retained. The issue is that the economic impact of the new large firm is not as great as would be anticipated, and the cost to attract and retain the large employer can be considerable. It is important to note that another analysis suggests that the net employment effect of large-firm location within a particular community may actually be closer to zero (Fox and Murray 2004). Under the traditional approach to economic development, the Midwest of the past experienced the development and growth of large companies. As Richard Longworth wrote in his book Caught in the Middle, “The Midwest reigned as the Silicon Valley of the industrial era.” Because of transportation links, natural resources, capital and human resources, many communities sought and became home to large factories. Akron, as an example, reigned as the tire capital of the United States as Goodyear, Firestone, and Goodrich were established in that community. The success of Akron, Detroit, the Chicago Region, to name only a few, employing the traditional economic development model is legendary; however, that model has now yielded the “rust belt.” Longworth cites George Erickcek, a Michigan economist, who says that many of the cities and towns of the Midwest accepted sixty years of prosperity from the 1920s through the 1980s. The traditional model was so successful for so long that it did not adapt or change with globalization (Longworth 2008).
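The displacement arithmetic behind the Fox and Murray (2004) figures can be made explicit with a minimal sketch; the function name is illustrative, and only the 1,000-job and 285-job figures come from the cited study.

```python
def net_employment_effect(direct_jobs, displaced_jobs):
    """Net jobs added to a community once displacement of other local jobs is counted."""
    return direct_jobs - displaced_jobs

# Figures reported from the Fox and Murray (2004) plant-location study:
# a 1,000-worker plant yields a net of about 285 jobs over five years.
direct = 1000
net = 285
displaced = direct - net  # jobs displaced or never created elsewhere in the community
print(displaced)
print(net_employment_effect(direct, displaced))
```

The point of the arithmetic is that incentive packages priced against 1,000 gross jobs are, on these figures, really buying only about 285 net jobs.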
Total Quality Management Underpins Information Quality Management
Mary Levis, Dublin City University, Ireland
Dr. Malcolm Brady, Dublin City University, Ireland
Dr. Markus Helfert, Dublin City University, Ireland
The importance of quality is widely acknowledged throughout the world, not only for avoiding failure and reducing costs but also for gaining competitive advantage. The focus of this article is to reflect on two approaches to quality management that have gained popularity during the last decades: Total Quality Management (TQM) and Information Quality Management (IQM). The goal of this study is to trace the roots of Information Quality to the Total Quality philosophy of the quality gurus who gained popularity in the 1960's, such as Deming, Crosby, Juran, Feigenbaum and Ishikawa, and to illustrate how TQM underpins IQM. Professionals rely on data to successfully carry out their work, and the quality of their information sources impacts their decisions. By some estimates, poor data quality costs the typical company from 10% to 20% of revenue. The goal of Information Quality Management (IQM) is to increase the value of high quality information assets. Poor information quality is a barrier to effective decision making. What signifies useful information for a manager may not be deemed useful information by the worker on the ground. On a daily basis the media report on the impact of poor quality in the healthcare sector. The traditional approach to quality predominantly focuses on the technical aspects of quality, paying little attention to the soft systems (human side) of quality. This article reflects on two quality approaches that have gained popularity during the last decades: Total Quality Management (TQM) and Information Quality Management (IQM). We will attempt to trace the roots of IQM to the TQM philosophy instigated by the quality gurus, which will show how TQM underpins IQM. The rest of the paper is organized as follows: Section 2 traces the evolution of quality and its many definitions. Section 3 defines TQM. Section 4 outlines IQM. Section 5 shows how TQM underpins IQM. Section 6 gives a summary and conclusions.
The roots of quality can be traced to the pre-Industrial Revolution era, when inspection committees enforced rules for marking goods with a special quality mark as proof of quality for customers. Late in the 19th century the United States adopted a new management approach developed by Frederick W. Taylor. Taylor's goal was to increase productivity by assigning inspectors to keep defective products from reaching customers. From the literature reviewed, a universal definition of quality is difficult to achieve, but some commonly accepted definitions of the quality pioneers and their emphases are outlined in figure 1. Juran defined quality as 'fitness for use'. Similarly, Crosby identified quality as meaning 'conformance to requirements'. Deming advocated 'meeting the customer's needs'. Feigenbaum stated that quality is determined by customer satisfaction. Deming introduced Total Quality Management (TQM) in the 1980's with the help of other quality leaders, Juran and Crosby. TQM can be thought of as a management philosophy, a corporate culture and an organisation-wide activity fundamentally based on the participation of all members of an organization in improving processes, products and services; transforming organisational culture in order to meet or exceed customer needs and expectations, by means of consistent leadership and continuous improvement. In essence, the three basic principles of TQM are: focus on customer satisfaction; seek continuous and long-term improvement in all the organization's processes and outputs; and ensure full involvement of the entire workforce in improving quality. TQM is always people-driven, and its results are high-performance teamwork, enhanced employee morale and a harmonious organizational climate.
Every employee has valuable and valid knowledge of how their particular job could be done better, and when these ideas are appreciated in a supportive environment then, and only then, can a working structure exist through which changes can be made as a result of ideas and suggestions, thereby improving the quality of the final product or service. However, the entire total quality effort must be planned and managed by the company's management team. Most quality management leaders agree that the biggest ingredient and most critical issue in quality is management commitment to support employees, who in turn will support the customer. After an extensive review of the IQ literature, we found that the definition of information quality is also the subject of much debate. Data quality has many attributes, and Wang and Strong (1996) outline various attributes of data quality from the perspectives of those who use the data. Data are of high quality if they are fit for their intended use. However, the same database could have poor data quality for one use and be considered of high data quality for another. Data are deemed of high quality if they 'correctly represent the real-world construct to which they refer so that products or decisions can be made'. Wang and Strong proposed a data quality framework that includes the categories of intrinsic data quality, accessibility data quality, contextual data quality and representational data quality. The 15 dimensions are detailed in Table 1.
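The "fitness for use" idea can be illustrated with a minimal sketch of two simple checks in the spirit of the Wang and Strong dimensions, completeness and validity; the healthcare-style records, field names, and domain rules below are invented for illustration, not taken from the paper.

```python
# Hypothetical records; "blood_type" and the age range are illustrative rules only.
records = [
    {"patient_id": "P1", "age": 34,   "blood_type": "A+"},
    {"patient_id": "P2", "age": None, "blood_type": "O-"},
    {"patient_id": "P3", "age": 130,  "blood_type": "Z?"},
]

VALID_BLOOD_TYPES = {"A+", "A-", "B+", "B-", "AB+", "AB-", "O+", "O-"}

def completeness(records, field):
    """Share of records where the field is present and non-null."""
    return sum(r.get(field) is not None for r in records) / len(records)

def validity(records):
    """Share of records passing simple domain rules (plausible age, known blood type)."""
    def ok(r):
        return (r.get("age") is not None and 0 <= r["age"] <= 120
                and r.get("blood_type") in VALID_BLOOD_TYPES)
    return sum(ok(r) for r in records) / len(records)

print(completeness(records, "age"))
print(validity(records))
```

Note how the same dataset can score differently on different dimensions; whether either score is "good enough" depends entirely on the intended use, which is the core of the fitness-for-use definition.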
Leadership Competencies in Job Advertisements
Dr. Müberra Yüksel, Kadir Has University, Istanbul, Turkey
In an era where old patterns no longer function, leadership as a future-oriented directing role has gained even more significance in the 21st century. Lack of transparency, accountability and credibility on both financial and ethical issues, as a consequence of inadequate leadership, has lately led to numerous scandals and eroded the reputation and legitimacy of many leaders as well as institutions. In response to these challenges, never before has the need for leadership in organizations been so great. These issues have demanded a further look into leadership competencies as the key intangibles that leverage strategic competitive advantage; consequently, recruitment of the right leaders has recently become a significant challenge for all organizations. Prior research has mostly focused on leadership styles and compared these styles against each other. The significance of the competencies of leaders, particularly in executive search and their advertising, has been mostly overlooked (Jenn, 2005). Although competency models of leadership and assessments are often largely employed for recruitment along with training and development, in the latter the competencies are designed through an overhaul of the conventional contextual framework and used more effectively (Naquin & Holton, 2006). While the core behavioral characteristics of an effective leader have been examined extensively in US studies (e.g., Bernthal, Paul R. & Wellins, Richard, 2006), the common occupational norms among the EU member countries (e.g., Becking, Koen & Hopman, Nikol, 2005) are also being explored based on task- versus relationship-oriented leadership. What leadership competencies are used in job advertisements for attracting and selecting today’s desired leaders is the main research question.
Probing the differences between the competencies chosen for advertisements at different levels of hierarchy, and whether the key leadership competencies are in line with global or regional competence norms, are other aims of this study. After a literature review of transactional and transformational models (e.g., Bass, B.M. and Avolio, B.J. 1989), a content analysis of advertisements in two major Turkish newspapers’ web-based job placement services was carried out for about a year to determine to what extent the leadership terminology stemming from influential leadership theories is used in the marketing communication of leadership positions. In this empirical study, it was found that conventional task-oriented and/or transactional competencies are still preferred over people-oriented and/or transformational competencies in Turkey; the underlying reasons for this preference are also probed. Ever since the formation of the first human communities, leadership has been omnipresent in every aspect of life, from politics to economics and from small to complex organizations. In the developing countries, the notion acquired increased importance and became once again the focus of attention from the mid-90s onward, with increasing globalization through the opportunities presented by a new pioneering medium, the Internet. The need for people with the necessary competencies to lead new business ventures became paramount. Amid much uncertainty about what this new business environment entailed, people with the capacity to see ahead with vision became urgently needed. Over ten years have gone by since, a decade marked by the collapse of the Internet bubble in 2000–2001 coupled with a domestic economic crisis, which led to an almost forced and unplanned consolidation of the on-line industry, including the firms focused on recruitment and placement services.
Within this context, the discussion about leadership, especially in the field of online human resource management activities, acquires a refreshed impetus and at the same time provides a fresh opportunity to reflect on past failures and highlight best practices for the future. Although the significance of leadership for business success is emphasized in the academic literature, the track record of business practices in selecting leaders suggests a gap between theory and practice. Following Den Hartog, Caley & Dewe, I argue that investigating the overlapping terminology concerning style, behavior, people, change, process, and values (the soft side) versus structure, cost management, and strategy formulation and implementation (the hard side) in executive position advertisements, and framing these as generic leadership competencies, might help explain leadership in particular cultural contexts. Since the corporate governance crisis epitomized by the Enron case, the overall environment for the professional services industry has been changing. Both in the US and in Europe, executive search firms have been forced into the open by the media, while data privacy legislation and codes of conduct are spreading widely and becoming binding in most countries. With globalization, digital technology is acting as a catalyzing medium, opening up increasing opportunities for all with access to multimedia information and communication technologies. In the 1960s, McLuhan coined the term "global village" and highlighted the ways in which the medium and the message act synergistically. Indeed, the "global embrace" predicted by McLuhan has abolished the linear conception of time and space, for the Internet has transformed our way of communicating along with our way of thinking and learning. The Internet can also be an efficient vehicle, particularly for finding junior and mid-level candidates for managerial positions (Yüksel, 2007).
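The content-analysis step described in the abstract can be sketched as keyword coding of advertisement text. The following is a minimal illustration only: the term lists and the sample advertisement are hypothetical stand-ins, not the study's actual coding scheme.

```python
# Illustrative coding of a job advertisement: counting task-oriented/
# transactional versus people-oriented/transformational competency terms.
# Both term sets below are hypothetical examples, not the study's categories.
TRANSACTIONAL = {"planning", "budgeting", "controlling", "cost management", "reporting"}
TRANSFORMATIONAL = {"vision", "inspiring", "coaching", "empowerment", "change"}

def code_advertisement(text):
    """Return counts of transactional and transformational terms in an ad."""
    lowered = text.lower()
    return {
        "transactional": sum(lowered.count(t) for t in TRANSACTIONAL),
        "transformational": sum(lowered.count(t) for t in TRANSFORMATIONAL),
    }

ad = ("Seeking a general manager with strong planning, budgeting and "
      "cost management skills; experience in reporting to the board.")
print(code_advertisement(ad))  # transactional terms dominate in this sample ad
```

Aggregating such counts over roughly a year of advertisements would yield the kind of task- versus people-oriented comparison the study reports.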
Objects Discovery in Database Systems
Dr. Qiyang Chen, NJ
The paper presents a framework of database reverse engineering processes that recover semantic objects and relational patterns from existing relational database tables. The framework extracts hidden structures in order to construct a new model that benefits from object orientation and its perspectives. Some major issues and strategies associated with existing reverse engineering approaches are discussed. One problem concerns the extensive amount of information that must be gathered either automatically or from users (designers). Another concerns the actual state of legacy databases, which may lack the original design blueprint due to various changes. The main idea is to form an intermediate schema that involves both relation and object structures. Database systems are essential sources of competitive advantage for organizations. These systems play a central role, since they have to incorporate the fast changes that characterize today's business world. Many existing database systems are referred to as legacy systems that have undergone numerous updates by generations of analysts, database administrators, users, and database developers. They generally suffer from poor documentation of either the original design or subsequent updates. Furthermore, uncontrolled modifications introduce inconsistencies in structures and data. They become troublesome because they are no longer able to effectively support the new and frequent changes organizations need in order to gain competitive advantage.
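One step of such an intermediate schema can be sketched concretely: mapping each relation to a candidate class and each foreign key to an object reference. This is a simplified illustration under assumed inputs; the table metadata below is hand-written, whereas a real tool would read it from the system catalog, and the paper's full framework involves much more.

```python
# Hypothetical table metadata (a stand-in for what a reverse-engineering
# tool would extract from a relational catalog).
tables = {
    "customer": {"columns": ["id", "name"], "foreign_keys": {}},
    "order":    {"columns": ["id", "date", "customer_id"],
                 "foreign_keys": {"customer_id": "customer"}},
}

def to_intermediate_schema(tables):
    """Map each relation to a class; each foreign key becomes an object reference."""
    schema = {}
    for name, meta in tables.items():
        # Non-FK columns stay as plain attributes; FK columns become references.
        attrs = [c for c in meta["columns"] if c not in meta["foreign_keys"]]
        schema[name] = {"attributes": attrs,
                        "references": dict(meta["foreign_keys"])}
    return schema

print(to_intermediate_schema(tables))
```

The resulting structure keeps both the relational origin (attribute lists) and the object view (typed references), which is the spirit of the intermediate schema the abstract describes.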
Asset Allocation and the Solo Practitioner
Ricardo M. Ulivi, Ph.D. and Lidia Luminita Pop, CA
This paper recommends an easy-to-implement, disciplined approach to asset allocation: following the asset allocation policies of CalPERS, the largest public pension fund in the United States, which manages close to $250 billion on behalf of nearly 1,500,000 individuals and has a research staff of nearly 180 professionals. CalPERS mostly manages money for retirees, so its investment objectives are similar to those of a solo practitioner's clients. The authors examine whether a solo practitioner can match the performance results achieved by CalPERS by following its asset allocation policies and using iShares ETFs to implement them. The authors obtained CalPERS asset allocations since 1984 and replicated them for the periods ending June 30, from 1996 to 2006. They then chose iShares ETFs as the vehicles with which to implement the asset allocation policies. For the chosen time period, the average return for the simulated CalPERS portfolio was 9.41% gross of fees and transaction costs, nearly identical to the actual CalPERS performance of 9.51% for the same period. In summary, following CalPERS' asset allocation policies gives solo practitioners a disciplined and proven approach developed by a major money manager, at no cost, since the asset allocation data is available on the CalPERS website.
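The replication arithmetic behind such a simulation reduces to an allocation-weighted sum of asset-class returns for each period. A minimal sketch follows; the weights and ETF returns are made-up illustrations, not CalPERS' actual policy figures or the paper's data.

```python
# Simulated portfolio return for one period: the allocation-weighted
# sum of the asset-class (ETF) returns. All figures below are hypothetical.

def portfolio_return(weights, returns):
    """Weighted sum of asset-class returns; weights must sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[asset] * returns[asset] for asset in weights)

weights = {"us_equity": 0.40, "intl_equity": 0.20,
           "fixed_income": 0.30, "real_estate": 0.10}
etf_returns = {"us_equity": 0.12, "intl_equity": 0.10,
               "fixed_income": 0.04, "real_estate": 0.08}

print(f"{portfolio_return(weights, etf_returns):.4f}")  # 0.0880, i.e., 8.80%
```

Repeating this for each fiscal year's policy weights and averaging the results gives the kind of multi-year simulated return the abstract compares with CalPERS' actual performance.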
A Fuzzy MCDM Application for Evaluation of Factoring from the Purchaser’s Perspective
Prof. Chih-Young Hung and Yi-Hui Chiang, Ph.D. Candidate, Taiwan
Factoring is a financial service that enables companies to sell their accounts receivable to a factor in exchange for cash. The market for factoring in Taiwan has been growing at substantial rates, and most banking institutions now actively offer the service. In this paper, we present a fuzzy multiple criteria decision-making (FMCDM) approach to factoring evaluation from the perspective of the purchaser. By evaluating the client, compliance with firm policy, and the customer, the FMCDM approach is applied to investigate accounts receivable purchases in the Taiwanese factoring industry. Three example alternatives illustrate the process of choosing the best alternative, and we identify the differences between factoring operations and traditional credit policy in the decision-making process. International factoring in the modern, unpredictable global market can be difficult unless firms have appropriate evaluation strategies. We believe that our study provides a scientific framework for making critical decisions on factoring proposals.
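A common FMCDM scoring scheme, which the following sketch assumes rather than reproduces from the paper, rates each alternative on each criterion with a triangular fuzzy number (l, m, u), aggregates the ratings by criterion weights, and defuzzifies by the centroid to rank alternatives. The criteria, weights, and ratings below are hypothetical.

```python
# Illustrative fuzzy MCDM ranking with triangular fuzzy numbers (TFNs).
# All inputs are invented for the example, not the paper's data.

def weighted_fuzzy_score(weights, ratings):
    """Aggregate triangular fuzzy ratings (l, m, u) by crisp criterion weights."""
    return tuple(sum(w * r[i] for w, r in zip(weights, ratings)) for i in range(3))

def defuzzify(tfn):
    """Centroid defuzzification of a triangular fuzzy number: (l + m + u) / 3."""
    return sum(tfn) / 3.0

# Criteria (hypothetical): client standing, policy compliance, customer credit.
weights = [0.5, 0.3, 0.2]
alternatives = {
    "A": [(3, 5, 7), (5, 7, 9), (1, 3, 5)],
    "B": [(5, 7, 9), (3, 5, 7), (3, 5, 7)],
}
scores = {name: defuzzify(weighted_fuzzy_score(weights, r))
          for name, r in alternatives.items()}
best = max(scores, key=scores.get)
print(scores, "best:", best)  # B ranks higher in this invented example
```

Defuzzified scores make otherwise incomparable fuzzy assessments directly comparable, which is what lets the purchaser rank competing A/R proposals.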
The Impact of Innovation and Competitive Intensity on Positional Advantage and Firm Performance
Dr. Weijun He and Dr. Ming Nie, China
Innovation has long been considered a pivotal factor in establishing a competitive edge. However, there has been little empirical work on the conversion of innovation into positional advantage, which in turn facilitates firm performance, or on its contextual dependence. The paper investigates the impact of innovation on positional advantage and firm performance, and examines the moderating effect of competitive intensity. A conceptual framework is tested with structural equation models using data from a survey of 238 optoelectronic firms in the Wuhan East Lake High-Tech Development Zone in P.R. China. The results indicate that innovation plays an important role, directly and indirectly through the creation of positional advantage, in enhancing firm performance. The findings also show that the effects of innovation on positional advantage and firm performance are contingent on competitive intensity. The authors discuss the theoretical and managerial implications in light of the empirical findings.
The Role of Knowledge Management in Achieving World-Class Manufacturing
Mohammad R. Hamidizadeh, Ph.D. and Hassan Farsijani, Ph.D.
The range of techniques associated with competitive manufacturing has expanded rapidly since the inception of MRP in the 1960s. Schonberger (1986) integrated these techniques under the generic term world-class manufacturing (WCM). The range and sophistication of these techniques place WCM status beyond the aspirations and competence of many enterprises. Because knowledge can be managed like other production assets, knowledge management (KM) has emerged in this area as a means of reaching a sufficient level of WCM. Knowledge management is an open and dynamic system that uses different feedback loops and functions to update and promote organizational knowledge. For the most part, knowledge management efforts have focused on developing new applications of information technology to support the capture, retrieval, and distribution of explicit knowledge. The paper explores how the concept of knowledge management can be made relevant to a WCM culture, through three case studies of small to large manufacturing companies experiencing both rapid growth and increasing international competition. The results of this pilot study indicate that the major obstacles to implementing a WCM culture are a lack of expertise or resources and a lack of employee understanding, education, and training in carrying out the process.
A Study on Parallel Blended Learning: A Case of a Beauty Course in the Beauty Science Department of Chienkuo Technology University
Pei-Ling Wu, Chienkuo Technology University
Jaw-Sin Su, Chinese Culture University
Ching-San Chiang, Chienkuo Technology University
To extend the concept of blended learning, a combination of horizontally and vertically extended learning can be used. In this study, we developed a parallel blended learning model, in which different resources are blended with traditional learning and used at the same time. The beauty course was developed using three teaching methods: sample demonstration; theory, exercises, and demonstration; and a combination of all three. The curriculum plan, the execution of that plan, and the evaluation of its effect and achievement are presented. To show that the parallel blended learning model can be used in a university course, a second-year course in the Beauty Science Department of Chienkuo Technology University in central Taiwan was used as the case study. The title of the course was Beauty Performance. It was determined that the most efficient teaching model was the course taught by a team of teachers with an all-in-one beauty center.