The Journal of American Academy of Business, Cambridge
Vol. 15 * Num. 1 * September 2009
The Library of Congress, Washington, DC * ISSN: 1540-7780
Most Trusted. Most Cited. Most Read.
All submissions are subject to a double blind peer review process.
The primary goal of the journal is to give academicians and professionals from business-related fields around the world a single venue in which to publish their work. The Journal of American Academy of Business, Cambridge brings together academicians and professionals from all business-related fields to interact with members inside and outside their own disciplines. The journal provides opportunities for researchers to publish their papers as well as to view the work of others. The Journal of American Academy of Business, Cambridge is a refereed academic journal that publishes scientific research findings in its field under ISSN 1540-7780, issued by the Library of Congress, Washington, DC. The journal meets the quality and integrity requirements of applicable accreditation agencies (AACSB, regional) and journal evaluation organizations to ensure our publications provide authors with venues that are recognized by their institutions for academic advancement and academically qualified status. No manuscript will be accepted without the required format. All manuscripts should be professionally proofread before submission. You can use www.editavenue.com for professional proofreading and editing.
The Journal of American Academy of Business, Cambridge is published twice a year, in March and September. E-mail: firstname.lastname@example.org; Journal: JAABC. Requests for subscriptions, back issues, and changes of address, as well as advertising inquiries, can be made via the e-mail address above. Manuscripts and other materials of an editorial nature should be directed to the Journal's e-mail address above. Address advertising inquiries to the Advertising Manager.
Copyright 2000-2017. All Rights Reserved
Ethanol Demand Growth and Related Impact on Corn and Poultry Markets
Dr. Ellene Kebede, Tuskegee University, Tuskegee, AL
Dr. Curtis Jolly, Auburn University, Auburn, AL
Giap V. Nguyen, Auburn University, Auburn, AL
This paper used the Muth model to exemplify the market linkages among the corn, ethanol and poultry industries. Ethanol demand is price inelastic, but the blenders’ tax credit has a positive and significant effect in lowering the price of ethanol, increasing ethanol demand and increasing corn demand. Information from derived ethanol supply functions was used to compute the marginal effect on poultry prices of changes in ethanol production. Four different elasticities of substitution between corn and non-corn inputs in ethanol and poultry production were used in the estimation. The results showed that an increase in the elasticity of substitution between corn and non-corn inputs in both industries will reduce the effect on poultry prices. This research indicates a long-run need for non-corn inputs in both industries in order to reduce the effect on the three markets and the rest of the economy. Ethanol, blended with gasoline as gasohol, has been in the U.S. market since the early 1940s, but economic, environmental and political factors led to the current surge in its use. Contributing to the surge were an increase in oil prices, local air pollution, concern over the global climate, dependency on foreign sources, and an interest in enhancing farm incomes (Doornbosch and Steenblik 2007; de Gorter and Just 2007). Between 1980 and 2006, energy consumption in the U.S. increased by 28 percent; petroleum accounted for 17 percent of the increase, and, by 2006, the transportation sector accounted for 67 percent of the petroleum consumed in the U.S. Crude oil imports increased from 34 percent to 65 percent between 1980 and 2007, and world crude prices increased from $13 per barrel in 1980 to $90 per barrel by the end of 2007 (Energy Information Agency 2008 and 2008a). Ethanol and biodiesel, the two major biofuels that can blend with and be substituted for petroleum, were nearly non-existent as transportation fuels in the 1980s, but their use had increased by 375 percent by 2004 (Energy Information Agency, 2008).
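The claim that a higher elasticity of substitution between corn and non-corn inputs dampens the pass-through of corn price shocks into output prices can be illustrated with a simple CES unit-cost calculation. The sketch below is not the authors' Muth model; the cost shares and the size of the corn price shock are hypothetical, chosen only to show the direction of the effect.

```python
def ces_unit_cost(w_corn, w_other, sigma, a_corn=0.5, a_other=0.5):
    """Unit cost of a two-input CES technology with substitution elasticity sigma.

    a_corn and a_other are hypothetical distribution parameters,
    not estimates from the paper.
    """
    if abs(sigma - 1.0) < 1e-9:
        # Cobb-Douglas limit of the CES cost function
        return w_corn ** a_corn * w_other ** a_other
    r = 1.0 - sigma
    return (a_corn * w_corn ** r + a_other * w_other ** r) ** (1.0 / r)

def pass_through(sigma, corn_price_shock=1.5):
    """Percent rise in unit cost when the corn price rises 50 percent."""
    base = ces_unit_cost(1.0, 1.0, sigma)
    shocked = ces_unit_cost(corn_price_shock, 1.0, sigma)
    return 100.0 * (shocked / base - 1.0)

for sigma in (0.1, 0.5, 1.0, 2.0):
    print(f"sigma={sigma}: unit cost rises {pass_through(sigma):.1f}%")
```

With equal hypothetical cost shares, a 50 percent corn price increase raises unit cost by roughly 25 percent when substitution is nearly impossible (sigma = 0.1) but by only 20 percent when substitution is easy (sigma = 2), consistent with the abstract's finding that greater substitutability reduces the effect on poultry prices.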
With some government assistance, ethanol had been in the U.S. market for a long time, but it could not compete with the low price of gasoline (Tyner, 2007). The two main factors contributing to the increase in the demand for ethanol are the banning of Methyl Tertiary-butyl Ether (MTBE) as an oxygenate and the Renewable Fuel Standard (RFS) of the Energy Policy Act of 2005. The 1990 Clean Air Act Amendment (CAA) mandated the use of cleaner-burning fuels to reduce emissions in large, congested cities. To meet the EPA emission standards, oxygenated gasoline was introduced to boost the octane level, because it burns more completely than gasoline and with fewer emissions. Reformulated gasoline (RFG) requires a minimum of 2 percent oxygen content by weight, which requires 5 to 7 percent ethanol by volume. The various tax incentives certainly helped the ethanol industry in the US to get off the ground (Library of Congress). The Energy Policy Act of 2005 amended the Clean Air Act and established a National Renewable Fuel Standard (NRFS) program to ensure that, beginning in 2007, gasoline sold in the US contains a minimum volume of renewable fuel (USEPA 2007). The NRFS program sets forth a seven-year phase-in of renewable fuel volumes, beginning with 4 billion gallons in 2006 and reaching 7.5 billion gallons in 2012 (Renewable Fuel Association 2007). The policy provides a subsidy in the form of a volumetric tax credit of $0.51 per gallon for ethanol blenders, a $0.10 per gallon producer’s credit for ethanol plants producing less than 60 million gallons per year, and an income tax deduction for fuel-flexible vehicles. In addition, most states have some form of incentive to encourage ethanol producers and blenders, which benefits ethanol and corn producers both directly and indirectly.
A sweeping renewable-fuels standard was proposed as part of the Bio-fuels Security Act of 2007, which required the production of 10 billion gallons of renewable fuels by 2010, 30 billion gallons by 2020, and 60 billion gallons by 2030 (Ugarte et al. 2007). Ethanol can be produced from grain or cellulose-based feedstock. Current commercial ethanol production in the US is mostly grain-based (mainly corn), and cellulose-based ethanol is targeted to reach the commercial stage by 2010 (Aden et al. 2002). The bulk of ethanol production and consumption is located in the major corn-producing areas of the Midwest and California, where the number of plants grew from 50 to 110 between 2000 and 2007 and the volume increased from 1.6 to 6.5 billion gallons per year (Renewable Fuels Association 2008). Meeting the NRFS is likely to shift the use of corn and create a structural change in the crop production mix and land use in the agricultural sector. Increased corn-planting means using pasture as cropland, reduced fallow, acreage returning to production from expiring commodity reserve program contracts, and shifts from other crops, such as cotton (Westcott 2007). Corn will compete for land with other crops as well, including soybeans and sorghum, and the use of corn for fuel will compete with its use for feed in the livestock and poultry industry. The resulting increase in the price of animal feed will likely have an impact on retail prices.
Determinants of Perceived Customer-Centrism in Managing Information About Customers
Dr. Joseph S. Mollick, Texas A&M University-Corpus Christi, Corpus Christi, TX
The Determinants of Movie Video Sales Revenue
Dr. Neil Terry, West Texas A&M University, Canyon, TX
Dr. Anne Macy, West Texas A&M University, Canyon, TX
This paper examines the determinants of movie video sales in the United States. The sample consists of 165 films released during 2006. Regression results indicate that the primary determinants of video sales are domestic box office, rental revenue, time from box office to video, sequels, children’s movies, restricted rating, and Academy Award nominations. Specific results include the observation that domestic box office revenue serves as a complement to movie video sales, while rental revenue is a substitute. Children’s movies are worth twenty to twenty-five million dollars more in video sales than other releases, because parents appear to prefer buying over renting children’s movies. Time to video is inversely related to video sales because motion picture companies take advantage of marketing economies of scale by quickly moving films from the box office to the video market. Domestic box office release exposure, foreign box office performance, and budget do not appear to have a statistically significant impact on movie sales revenue. The average budget of making a motion picture for release in the United States has risen to almost fifty million dollars per movie. This rising cost has resulted in motion picture studios seeking multiple sources of revenue including domestic box office, foreign box office, product placement, merchandising, video sales, and video rental revenue. A single movie can be the difference between millions of dollars of profits or losses for a studio in a given year (Simonoff & Sparrow, 2000). The purpose of this research is to analyze the motion picture industry with a focus on the determinants of movie video sales revenue. This manuscript is divided into four sections. First, a survey of the related literature is discussed. The second section provides the model specification. The third section puts forth an empirical evaluation of the determinants of video sales revenue for 165 films released during the year 2006. 
The final section offers concluding remarks. The literature on the determinants of video sales revenue is in its infancy but is expected to be highly correlated with the determinants of box office revenue. Many researchers have developed models that explore the potential determinants of motion picture box office performance. Litman (1983) was the first to develop a multiple regression model in an attempt to predict the financial success of films. The original independent variables in the landmark work include movie genre (science fiction, drama, action-adventure, comedy, and musical), Motion Picture Association of America rating (G, PG, R and X), superstar in the cast, production costs, release company (major or independent), Academy Awards (nominations and winning in a major category), and release date (Christmas, Memorial Day, summer). Litman’s model provides evidence that the independent variables of production costs, critics’ ratings, science fiction genre, major distributor, Christmas release, Academy Award nomination, and winning an Academy Award are all significant determinants of the success of a theatrical movie. Litman and Kohl (1989), Litman and Ahn (1998), and Terry, Butler, and De’Armond (2004) have replicated and expanded the initial work of Litman. None of the extensions of Litman’s work has focused on the video sales revenue market. One area of interest has been the role of the critic (Weiman, 1991). The majority of studies find that critics play a significant role in the success or failure of a film. Eliashberg and Shugan (1997) divide the critic into two roles, the influencer and the predictor. In the influencer role, the critic influences the box office results of a movie through his or her review of the movie. Eliashberg and Shugan’s results suggest that critics do have the ability to manipulate box office revenues based on their review of a movie.
The predictor is a role where the critic, based on the review, predicts the success of a movie, but the review will not necessarily have an impact on how well the movie performs at the box office. Eliashberg and Shugan show that the predictor role is possible but does not have the same level of statistical evidence as the influencer role. King (2007) explores the theoretical power and weakness of critics on the box office performance of movies. The substantial market power of critics is derived from the following: (1) Film reviews are widely available in newspapers, magazines, and websites. The ubiquitous availability of critical reviews in advance of a movie release creates positive or negative energy in the critical opening weeks; (2) Film critics regard themselves as advisors to their readers. They are often as explicit in their recommendations as Consumer Reports is about other consumer purchases; and (3) Film critics are likely to be considered objective. There are too many critics and too many films for serious critical bias to develop. Those who are skeptical about the influence of film critics point to the following counterarguments: (1) It is possible that the effects of aggressive marketing at the time of a film’s release might dominate critical evaluations in determining opening attendance; (2) Critics may raise issues that do not concern most audiences. They are more likely to notice and comment on technical issues, like cinematographic technique, than the average member of the audience; and (3) Critics may write for a readership that has different tastes from the average cinemagoer. The most obvious potential reason for this is demographic. Cinema audiences are younger than the general population and less likely to pay attention to print reviews. Critics might therefore be expected to aim their reviews at the older demographic and give relatively negative reviews to certain film genres.
The empirical results put forth by King (2007) are mixed with respect to the impact of critics on box office earnings for the U.S. box office in 2003. He finds zero correlation between critical ratings for films and gross box office earnings when all releases are considered because of the affinity critics have for foreign movies and documentaries relative to the general public. For movies released on more than 1,000 screens, critical ratings have a positive impact on gross earnings.
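A regression of the kind used in this study can be sketched in a few lines. The sketch below is not the authors' estimated model: the regressors, coefficients, and data are synthetic, and it merely shows the mechanics of regressing video sales on domestic box office revenue, rental revenue, and a children's-movie dummy by ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 165  # same sample size as the study, but the data here are synthetic

# Hypothetical regressors: domestic box office ($M), rental revenue ($M),
# and a dummy variable for children's movies.
box_office = rng.uniform(5, 300, n)
rental = rng.uniform(1, 80, n)
children = rng.integers(0, 2, n).astype(float)

# Hypothetical "true" relationship mirroring the abstract's signs:
# box office complements video sales, rentals substitute for them,
# and children's films carry a sales premium.
video_sales = 10 + 0.4 * box_office - 0.3 * rental + 22 * children

# Ordinary least squares via the normal equations (lstsq).
X = np.column_stack([np.ones(n), box_office, rental, children])
beta, *_ = np.linalg.lstsq(X, video_sales, rcond=None)
print("estimated coefficients:", np.round(beta, 2))
```

Because the synthetic data are noiseless, the estimated coefficients recover the assumed values exactly; with real data, significance tests on each coefficient would determine which determinants matter, as in the study's results.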
Changes in the Disclosure Regulatory Environment and Managers’ Timing of Earnings Announcements
Dr. Kirk L. Philipich, University of Michigan – Dearborn, Dearborn, MI
Managers’ reaction to changes in GAAP has been of interest to accounting researchers for decades. Prior accounting research has documented that managers react to changes in GAAP by changing their investment strategies (e.g., economic consequences) and by changing their disclosure strategies (e.g., early adoption). This paper investigates whether changes in the legal environment may also impact managers’ disclosure strategies. One particular disclosure strategy of interest to accounting researchers is the perceived relationship between earnings news, good or bad, and how quickly this information is released to investors. Timing earnings announcements based upon the type of earnings news, while appealing, may not always be in the best interest of managers, much less investors. This paper examines this managerial behavior over an extended time period during which a landmark legal case set standards for the timing of the release of corporate information (SEC vs. Texas Gulf Sulphur) and a lesser known case spoke specifically to the release of bad earnings news (Financial Industrial Fund vs. McDonnell Douglas). The results show that before the implications of these cases were known, a strong relationship existed between the timing of the release of earnings information and the type of earnings news being released. Through time, as the ramifications of these legal cases on the disclosure legal environment became well documented and known by corporate managers, this relationship weakened and eventually completely dissipated. The implication of this result is that accounting research may need to be more cognizant of changes in the legal environment when examining when and how managers release information to investors. Givoly and Palmon, Chambers and Penman, and Kross and Schroeder are three of the earliest investigations into managers’ discretionary timing of earnings announcements.
While the evidence from this research is mixed, a belief that managers release “good news” earnings reports quickly while delaying the reporting of “bad news” earnings reports became widely accepted (this managerial behavior is referred to as the announcement timing effect). This largely descriptive research pays little attention to identifying and/or testing rationales for this behavior. The current study reexamines the announcement timing effect within the context of two landmark legal actions that impacted managers’ latitude in releasing information to the public. These two precedent-setting legal actions, SEC versus Texas Gulf Sulphur and Financial Industrial Fund versus McDonnell Douglas, served as warnings to managers that failing to report information as they become aware of it is ill-advised at best, and potentially illegal and/or very costly. Therefore, given that the implications arising from these lawsuits were well known throughout the legal and corporate communities, it follows that managers should have limited or discontinued their reporting of earnings in a fashion consistent with the announcement timing effect or face potentially severe consequences. In other words, subsequent to these legal actions, the number of managers willing to follow this earnings reporting behavior should decrease, making it less prevalent. The announcement timing effect is predicated on the belief that managers, realizing that they have good (bad) earnings news to report before observing the market’s reaction, will release this news earlier (later) than expected. The empirical tests employ a seasonal random walk model to measure the unexpected earnings news that managers are about to report. This measure is selected because it is available to managers before their decision to release earnings information. This measure of unexpected earnings is associated with the number of days between the quarter-end and the earnings announcement date.
Dummy variables are used to detect any change in this relationship across time as the ramifications of the SEC versus Texas Gulf Sulphur and the Financial Industrial Fund versus McDonnell Douglas legal actions redefined managers’ corporate responsibility to report material information to the public in a more timely fashion. The empirical tests reveal a very significant relationship between the reporting lag and the sign and magnitude of unexpected earnings in the time period preceding these legal actions. This relationship weakens for earnings announcements in the time periods during which these lengthy legal actions were occurring. The relationship between the reporting lag and unexpected earnings becomes insignificant when earnings announcements subsequent to the conclusion of the legal actions are examined. With respect to the announcement timing effect, the evidence strongly supports its existence before the ramifications of these legal cases were known. However, it appears that a gradual decline may have taken place in the likelihood that managers report earnings in a fashion consistent with the announcement timing effect. These results suggest that as managers became aware of changes in the judicial/legal environment, they appear to have changed their earnings disclosure behavior. The primary studies that find support for the announcement timing effect yield three results of particular interest to the current study (Givoly and Palmon, Chambers and Penman, Kross and Schroeder, and Penman): (1) Managers manipulate the timing of earnings announcements, leading to a pattern of good news earnings reports being released very quickly while earnings reports containing bad news are delayed; (2) The stock market tends to react negatively when earnings reports are later than usual; and (3) The average number of days between the end of a quarter and the earnings announcement date, the reporting lag, decreased over time.
Several concerns still exist with respect to the announcement timing effect (result 1). First, very little formal theory exists concerning the manipulation of the reporting of earnings by managers. The informal theory that does exist concerning the timing of the release of earnings information argues both for and against the announcement timing effect. Thus, this remains a somewhat open question. Second, two of the results from previous research (results 1 and 2) are not complementary. If the market reacts as if earnings are bad when the earnings announcement is delayed (result 2), it would seem that managers are not fooling market participants by delaying a bad earnings report. If the earnings announcement is late, the market anticipates the worst and reacts negatively. In either case the inevitable occurs: bad earnings news results in a negative market reaction. Thus, managers gain very little by attempting to delay the inevitable: if a firm’s earnings are below the market’s expectation, its stock price will fall whether earnings are reported immediately or not.
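The seasonal random walk measure described above is simple to state: the forecast for quarter t is the EPS from the same quarter one year earlier, so unexpected earnings are UE_t = EPS_t - EPS_{t-4}. A minimal sketch, using hypothetical quarterly figures:

```python
def seasonal_random_walk_ue(quarterly_eps):
    """Unexpected earnings under a seasonal random walk:
    UE_t = EPS_t - EPS_{t-4}.
    Returns one value per quarter from the fifth quarter onward."""
    return [eps - quarterly_eps[t - 4]
            for t, eps in enumerate(quarterly_eps) if t >= 4]

# Eight hypothetical quarters of EPS for one firm (two fiscal years).
eps = [1.00, 2.00, 3.00, 4.00, 2.00, 3.00, 5.00, 3.00]
print(seasonal_random_walk_ue(eps))  # → [1.0, 1.0, 2.0, -1.0]
```

Positive values represent "good news" and negative values "bad news" quarters; the study then relates the sign and magnitude of these values to the reporting lag, the number of days between quarter-end and the announcement date.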
The Utilization of a Succession Plan to Effectively Change Leadership and Ownership in a Small Business Enterprise
Kevin McNamara, Chairman, Millington Lockwood, Inc.
Dr. John G. Watson, St. Bonaventure University
Dr. Carol B. Wittmeyer, St. Bonaventure University
The paper traces the development of a succession plan between the owner of a small business enterprise and the potential buyer of the organization, who would succeed the original owner in running the enterprise. The buyer was a former employee of the organization with other experience in the industry. This unusual agreement culminated with an exchange of all the outstanding shares of stock from the original owner to the new owner after a specified period of time. The cultural change that took place during this time period is presented. The major benefits and lessons of the succession plan are discussed. Finally, the authors present a succession planning model for a small business enterprise. Millington Lockwood, Inc., is a certified Herman Miller Company furniture distributor located in Western New York and Western Pennsylvania. Millington Lockwood, Inc., was founded in Buffalo, New York, in 1884 by Mr. Millington Lockwood. It is one of the oldest companies in the region. In the fall of 1970, Mr. Kevin McNamara went to work for Millington Lockwood, and in 1971 purchased the company from the then-current owner, Mr. Russell Koen, who retired from the company. Mr. McNamara’s initial investment was $12,500: he financed $10,000 and put up $500 in cash and $2,000 from the proceeds of the sale of his automobile. Under the tutelage of Mr. McNamara, the company grew significantly over the years, and sales increased from $600,000 to a level of $7.5 million in 1992. That year Mr. McNamara decided to sell Millington Lockwood, Inc. He queried his two daughters as to whether or not they wanted to be involved in the company. Both daughters informed him they were not interested in succeeding their father in the company. After interviewing several other candidates, he then turned to a former trusted employee, Mr. Michael Bonitatibus, who was returning to Western New York after being employed by Herman Miller, Inc., for eight years as a territory and regional manager in Boston.
Prior to his tenure at Herman Miller, Inc., Mr. Bonitatibus was employed for three years at Millington Lockwood, Inc., as a salesman. Mr. McNamara felt very comfortable with Mr. Bonitatibus and determined that Mr. Bonitatibus possessed core values consistent with his own for continuing the succession of Millington Lockwood, Inc., into the future. Mr. Bonitatibus was skilled at building relationships with people. He possessed excellent interpersonal skills, was a hard worker, and was eager to learn the total operation of Millington Lockwood. Herman Miller, Inc., is the major manufacturer that Millington Lockwood, Inc., represents. Millington Lockwood’s distribution area was considered to be one of the smaller territories in the Herman Miller family. Mr. McNamara had the opportunity to interact with Herman Miller executives, and a unique opportunity to better understand the direction of this global corporation. Herman Miller, Inc., has been a leader in the design and manufacture of innovative products in the contract furniture industry for over thirty years. The company has traditionally been known for a participative management style and has been considered one of the best companies to work for in the United States. In addition, it has been recognized throughout the world for its environmental concerns. In 2002, Herman Miller's corporate office facility renovation received Gold LEED (Leadership in Energy and Environmental Design) Green Building certification, only the 10th Gold standard awarded nationwide. The American Institute of Architects (AIA) included Herman Miller's headquarters among its Top Ten Green Building Projects. That same year, Herman Miller opened the MarketPlace, a leased, built-to-suit office facility near its Main Site headquarters in Zeeland, Michigan.
Designed as an open, airy, and people-focused structure that also incorporates a number of environmentally sound principles, the MarketPlace soon became a "must-see" destination for customers, dealers, and others visiting Herman Miller in Western Michigan. In 2003, the Herman Miller MarketPlace received Gold LEED Certification; at the time, it was one of fewer than a dozen buildings nationwide to achieve that distinction (REF: http://www.hermanmiller.com/CDA/SSA/Timeline/0,1589,a7-c1153-tl11,00.html). As the company built new facilities in Europe and Asia, it was also recognized for its commitment to building environmentally friendly facilities. Not only did it expand operations internationally, it also became a recognized leader in other industries, including health care. As Mr. McNamara was not ready to retire in 1992, he decided to develop an agreement with Mr. Bonitatibus to purchase Millington Lockwood, Inc., effective June 30, 2002. This was a highly unusual agreement: it would culminate with an exchange of all the outstanding shares of stock from Mr. McNamara to Mr. Bonitatibus after ten years. A contract was developed between Mr. McNamara and Mr. Bonitatibus stipulating that Mr. McNamara would remain as Chairman of the company for an additional six-year period. The written contract represented a succession plan for the enterprise. Mr. Bonitatibus was well aware of, and respected, the relationship that was critical to achieving mutual performance objectives between Millington Lockwood and Herman Miller. The intent of the ten-year succession plan was to strengthen the financial position of the company over that period so as not to weaken, but to continue to grow, the net worth of the company.
Aside from the obvious solvency responsibility, the ten-year transition plan would assure the long-range success of the company, as the agreement provided for all debts to be paid and for $2,000,000 to be available at the end of the ten-year period. This would allow Millington Lockwood, Inc., to adjust to an ever-changing marketplace. Mr. McNamara and Mr. Bonitatibus were aware that, with a ten-year plan, a holistic view would provide a better initial approach for understanding and responding effectively to the dynamic considerations in the workplace. Both recognized that to meet the short- and long-term objectives for the company, the most challenging and controllable needs over the next ten years would involve changing a number of factors, including: the culture of the organization; fostering teamwork and participation among employees; integrating technology into the organization; providing quality service to customers; continuing the profitability of the company; and growing the relationship with Herman Miller, Inc.
International Off-shoring: The Changes in the World Economy
Dr. Fred Maidment, Western Connecticut State University, Danbury, CT
This paper examines public policy responses by the major industrialized economies of the United States, the United Kingdom, and continental Europe to the off-shoring of jobs to developing countries. The differences among these policies and their consequences are discussed. The importance of these responses to the developed world and to its workers, citizens and, from the perspective of elected politicians, voters, is emphasized for the future wellbeing of these societies. How the leaders of developed countries choose to deal with the rapidly changing developments in a hyper-competitive, technological, global economy will have great impact on the future success of those societies and the people they represent. For decades the manufacturing sector of the US economy has been sending jobs to less costly locations. Jobs left the Northeast and the upper Midwest for the South in the textile, auto and other industries. Now the textile industry has almost completely left the US, and the auto industry has become so globalized that it is hard to tell where a car was made. Jobs that can be learned in a few minutes on an assembly line will always be in danger of going to the lowest bidder. It does not take very long for a relatively unskilled worker in a third-world country to equal the proficiency of a much more expensive employee in a developed country at jobs requiring little or no skill. Technology in the workplace has changed many things. It has allowed workers to become far more productive, but it has also enabled much of the work to be done outside the developed world, in locations in the third world. Jobs that were once reserved for workers with high levels of training and education in the developed world may now be performed by workers with those same levels of education and training in the developing world. In the service sector and high-technology industries, back-office functions are being done by workers in the developing world.
This is especially true in the information technology sector of the economy, as well as the financial and other service sectors (Flannery, 2004). There are other forces in play. Developing countries, for the first time, have critical masses of highly skilled, well-educated workers who are capable of performing complicated and advanced tasks at very high levels (D’Costa, 2003). Until very recently, these types of tasks could only be performed by individuals in advanced societies, because only in advanced societies could workers with the necessary skills be found. Today, that is no longer the case. To a large degree, this is the result of over fifty years of student exchanges between the US and Europe and the developing world, especially China and India. The US and Europe welcomed the first exchange students to their universities at the close of World War II, and others followed. A few stayed; however, many of them went back to their homelands, where they became the leaders and the teachers of their societies. Several generations have gone by, and these countries have established universities that produce engineers, accountants, computer programmers and other well-trained and educated potential employees, many of whom are the equal of those graduated by institutions in the United States and Europe (Promfret, 2003). These universities in the developing world have educated a significant number of highly qualified workers, and, as is well known, costs in developing countries are very much lower than those in the United States or other developed countries. Potential employees are paid significantly less than their American, Japanese, or European counterparts. Multinational corporations are knowledgeable about these differences in labor costs and the abilities of their employees, and they are employing those abilities on a wide variety of tasks. Smaller organizations may also utilize this option through a “third-party provider.”
This practice is especially common in information technology (Sullivan, 2004). The United States has faced the prospect of losing jobs to an economically underdeveloped society with highly skilled and educated workers before. Both Germany and Japan were completely devastated at the end of World War II, but they still had large numbers of highly skilled and trained workers. While their traditional forms of capital had been virtually destroyed by the bombing and other aspects of the war, their human capital, though damaged, had more or less survived. With the assistance of the United States, these two countries were back on their feet in less than a decade, producing goods and services that were often competitive, and within twenty-five years they were giving American industry a real challenge. The continental European approach to off-shoring has been more protectionist in the area of labor, because the unions, the government and the corporations in Europe have had a much closer historic relationship (Wallace, 2003). Unions in Europe have long been far more politically involved than their counterparts in the United States. There are countries in Europe where political parties are known as “Union Parties” or “Labor Parties,” and these parties tend to be aligned with traditional liberal/working-class issues. The relationship between the unions and management is often not nearly as adversarial as it is in the United States, and it is not unheard of for a representative of the union to sit on the board of directors of a major corporation in continental Europe. Continental European corporations find it difficult to remove employees, who have many protections against termination. As a result, corporations in continental Europe hire new employees only when absolutely necessary, and employees do not leave once they have a job. There are some unintended consequences that have resulted from this:
Lake Area Tourism: Making the Most of Your Day and Overnight Visitor
Dr. Annette Ryerson, Black Hills State University, Spearfish, SD
Small tourism destinations are feeling the crunch of tightening pocketbooks, and it is necessary for businesses to be aware of viable market segments in order to serve them. This paper explores the travel habits and expenditures of 973 survey respondents from a Midwestern lake area in South Dakota. Segmentation techniques and core competencies were utilized to determine the most marketable segments for small businesses. Because the study site is much like other lake area communities, it is believed that this research can be applied to other similar tourist destinations. Unlike package and resort vacations, this research takes place in a recreational lake area where it is possible to spend quality time with friends and family while enjoying camping, boating, fishing and many other activities without necessarily spending a lot of money. This study seeks to identify the different segments of visitors to the recreational lake area. Nine hundred seventy-three individuals were willing to share their opinions, behaviors and expenditures with regard to the group they traveled with. This compilation of data has provided insight into the minds of the lake area traveler. Recent research has indicated that service marketing is more complicated to manage than product marketing, due to the intangibility, variability, inseparability and perishability of the product (Appiah-Adu, Fyall, and Singh 2000). This study will focus upon the local influences of travel and how proper market segmentation can assist the community and local businesses in properly targeting these segments. It is anticipated that a study of this nature will assist other small communities in their quest for enhanced tourism marketing. In a flat economy currently hampered by rising interest rates and rising gas prices, it is important to strategically segment the tourist population.
“Comprehending visitors’ behaviors is of immense importance in developing actionable service and destination-specific marketing strategies. Destination promoters and tourism researchers commonly exploit segmentation techniques to gain rich insights into the eccentricity of sub-segments” (Chen and Uysal 2003). Research from 2006 indicates the urgent need for a consistent marketing plan for small destinations. As stated, “Recent states of crisis have clearly shown that tourism is a fractured but highly interdependent industry. The pooling of resources has become especially important for small tourism bureaus, which experience enormous pressure on their already limited funds” (Gretzel, Fesenmaier, Formica and O’Leary 2006). Recent statistics show a decrease in tourism in states like North Dakota and Montana. Other similar Midwestern and western states need to maintain a clear focus on their competitive advantage in order to push ahead (Senge, Scharmer, Jaworski and Flowers 2004). Rather than relying on the traditional method of “gut feel,” destination organizations are becoming more interested in basing their beliefs on methodology. This further reinforces the purpose of this and other similar research. There are two distinct methods of segmentation in use. A priori market segmentation occurs when researchers decide at the outset the segments they will profile, whereas a posteriori market segmentation utilizes clustering techniques to determine underlying segments. Dolnicar (2004) believes that a combination of a priori and post hoc segmentation will provide more original approaches to segmenting travel markets, producing solutions that have not been explored by competing destinations. Dolnicar (2004) stresses that an analysis of the market, in combination with a thorough market segmentation strategy, leads to a competitive advantage.
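To make the a posteriori approach concrete, the following is a minimal sketch of clustering-based segmentation, in which visitor segments emerge from observed behaviour rather than being chosen in advance. The k-means routine, the variables (nights stayed, daily spend), and all figures are hypothetical illustrations, not the paper's actual survey data or method.

```python
# Minimal k-means for a posteriori segmentation (hypothetical data).
import random

def kmeans(points, k, iters=20, seed=1):
    """Cluster (nights_stayed, daily_spend) tuples into k segments."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each visitor to the nearest center (squared distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # Recompute each center as the mean of its cluster.
        centers = [
            tuple(sum(v) / len(c) for v in zip(*c)) if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return centers, clusters

# Hypothetical visitor records: (nights stayed, daily spend in dollars).
visitors = [(0, 40), (1, 55), (0, 35), (5, 120), (6, 140), (4, 110), (2, 70)]
centers, clusters = kmeans(visitors, k=2)
day_trippers = min(centers)   # segment with fewer nights, lower spend
overnighters = max(centers)   # segment with more nights, higher spend
```

With well-separated behaviour like this, the two recovered centers correspond to a day-visitor segment and an overnight segment, the kind of distinction a destination could then target separately.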
Yankton, which is home to the Lewis and Clark Recreation Area, is located in “Tourism Region 1” of the state of South Dakota, in the southeastern portion of the state (South Dakota Travel Monitoring System 2006). Yankton borders the states of Iowa and Nebraska, and the majority of travelers come from the bordering states. Bordering one of the largest bodies of water available to visitors from southeastern South Dakota, southwestern Minnesota, northwest Iowa and northeast Nebraska, the recreational areas along the Missouri River near Yankton hold virtually endless possibilities for year-round travel, recreation and tourism. Annual visitation and campground occupancy rates at the Lewis and Clark Recreation Area (LCRA) clearly document that, individually and collectively, these areas have the ability to draw over one million visitors year after year. This is a remarkable accomplishment in an ever-changing tourism environment in which many state and national parks are experiencing declining visitation rates. The LCRA is located four miles west of downtown Yankton, SD. Key features include a 400-unit campground with cabins, paved bike trails, equestrian trails, the state’s largest full-service marina, a marina restaurant with fine dining, a resort with a heated outdoor pool and playground, plus swimming, boating, hunting, fishing and more.
Selected Technological Advances and Organizations: Applications and Implications
Dr. Vivek Shah, Texas State University-San Marcos, San Marcos, TX
Dr. Kamlesh Mehta, Peace College, Raleigh, NC
When we think of the technological advances of the 21st century, we wonder how they affect the world around us. From an organization’s point of view, the increased use of technology raises a fundamental question: what are the impacts of technology on the organization? This study examines the financial impact of the Internet, the impact of electronic monitoring, the impact of virtual teams, and the impact of Internet security on organizations. Organizations must tailor the use of technology to suit specific needs and purposes. The advances in the Internet and the World Wide Web will continue to change the world around us and the ways organizations conduct their operations in the future. When we think of the technological advances of the 21st century, we often wonder how they affect the world around us. The advances in the Internet and the World Wide Web will continue to change the world around us and the ways organizations conduct their operations. The growth of the World Wide Web has changed the stream of actions and flow of processes in an organization. From an organization’s point of view, the increased use of technology raises the fundamental question: what are the impacts of technology on the organization? The impact technology has on organizations is both positive and negative. Of special interest to an organization are the applications of current technological developments. Therefore, by examining the financial impact of the Internet, the impact of electronic monitoring, the dynamics of virtual teams, and the impact of Internet and data security, this study illustrates the benefits and drawbacks that technology provides to modern-day firms. The Internet is a powerful, influential resource and a robust tool that, if properly used and leveraged, can allow organizations to flourish financially.
Three positive financial aspects the Internet provides for organizations are increased revenue from an additional sales channel, a cost-efficient and innovative marketing stream, and savings on overhead expenses. While the Internet brings increased financial hope to many companies, it can also affect the bottom line and decrease a company’s overall profit. Some of the negative financial effects of the web include increased costs for fraud prevention, start-up and ongoing technology costs, and the expensive need to remain flexible in an ever-changing Internet world. Organizations need to ask, “Does the Internet affect my business?” According to Elliott (2003), the answer is a resounding yes. The Internet provides companies with the opportunity to sell their goods and services worldwide, allowing for a global market rather than merely a domestic audience. The Internet is a place where organizations can provide detailed information about their products and services and allow customers to purchase those products twenty-four hours a day, seven days a week. Instant access and convenient shopping positively affect an organization’s finances. The web allows businesses all over the world to profit from online customers. An estimated 13.9 million households (57%) in Great Britain could access the Internet from home between January and April 2006, according to the National Statistics Omnibus Survey. This is an increase of 2.9 million households (26%) since 2002, and 0.6 million (5%) over the previous year (National Statistics, 2006). Internet penetration in the United States, meanwhile, had grown to 64% by 2006 (Internet Retailer, 2006). These figures continue to rise and make it increasingly important for businesses to maintain an Internet presence. Recently, the fashion company Christian Dior set up its online store. In terms of sales, the online site is already the “size of a free-standing store” (Passariello, 2006).
The Internet has the ability to offer organizations a new source of revenue via sales and a higher profit margin. The opportunities for business-to-business (B2B) transactions exceed those in the business-to-consumer (B2C) e-commerce arena. According to the U.S. Census Bureau E-Stats report, B2B e-commerce totaled $2,716 billion, representing 93% of online sales in 2006; most of the remaining 7% was B2C e-commerce (Carey, 2008). Cisco Systems is the worldwide leader in networking for the Internet. By using networked applications over the Internet and its own internal network, Cisco is seeing financial benefits of nearly $1.4 billion a year, while improving customer and partner satisfaction and gaining a competitive advantage in areas such as customer support, product ordering and delivery times. Cisco today operates the world’s largest Internet commerce site, with 90% of orders transacted over the web (Torrance, 2001). A second influential factor of the Internet for organizations is that it provides a means to market to their audience by building product awareness and brand recognition. Using the Internet can be a cost-effective way of marketing: large numbers of people are reached at a much lower cost than with conventional marketing methods (Hanson & Kalyanam, 2007). Using the Internet for marketing purposes saves organizations money, reaching a global or targeted audience through low-cost web advertisements such as banners, pop-up ads, and mass e-mail blasts. A well-designed website will save organizations money, especially in the area of marketing.
Exploring Drivers in the Adoption of Mobile Commerce in China
Jun Zhang, Huazhong University of Science and Technology, China
China, though ranking first in the world in the number of mobile phone users, has only a small number of Mobile Commerce (MC) consumers. This study explores how Chinese consumers are influenced to adopt MC. We employ a revised Technology Acceptance Model (rTAM) to examine factors affecting Chinese consumers’ attitudes toward this emerging mobile technology and its applications. The proposed model was empirically tested using data collected from a survey of mobile phone consumers. Our survey indicates that consumers’ perceived ease of use (PEOU) influences their attitude toward using (ATU) MC. It is also found that consumers’ past adoption behaviour, educational level, age, gender, and occupation affect their adoption behaviour. The majority of the positive relationships between PU, PEOU, ATU, past adoption behaviour, and demographics are supported by the empirical data. Our research also supports the applicability of TAM to examining MC adoption by Chinese consumers and validates the robustness of TAM for studying new technologies outside the U.S. context. The rapid development of modern wireless communication technology, coupled with the increasingly high penetration rate of the Internet, is promoting mobile commerce (MC) as a significant application for both enterprises and consumers (Lucas and Spitler, 2000). MC refers to commercial transactions conducted through a variety of mobile equipment over a wireless telecommunication network in a wireless environment (Gunsaekaran and Ngai, 2003). Currently, these wireless devices include two-way pagers/SMS (short message systems), wireless application protocol (WAP)-equipped cellular phones, personal digital assistants (PDAs), Internet-enabled laptop computers with wireless access capability, and consumer-premise IEEE 802.11(a/b) wireless network devices (Leung and Antypas, 2001).
MC applications can be broadly divided into two categories: content delivery (i.e., reporting, notification, consultation) and transactions (i.e., data entry, purchasing, promotions) (Balasubramanian et al., 2002). With the explosive growth of the mobile telephone population, combined with the development of wireless technologies, MC is becoming increasingly important to many businesses (Hung et al., 2003). Although many analysts have predicted that MC will become another mainstream business application, following electronic commerce, others have expressed reservations about its return-on-investment (ROI) potential. For example, a survey of 1,205 U.K. companies across 15 sectors found that 65% of firms do not plan any MC strategy in the near future (Thomas, 2003). Smith (2001) finds that many factors may hinder consumer usage of MC, including cost of access (35%), credit card security (33%), difficult navigation (11%), and low access speed (9%). China now has the largest mobile communication network in the world. In June 2007, the number of mobile phone users reached 600 million, about one-fifth of the world total. At the same time, however, the number of active WAP users was only about 3.9 million, meaning that on average only about one out of every 150 mobile phone users regularly browses the wireless Internet by phone for shopping, entertainment, sports news and movies, or financial information (CNNIC, 2007). Mobile commerce is still a “new” technology for most Chinese users. Although many recent publications discuss various marketing issues related to MC technologies and applications (Balasubramanian et al., 2002), only a few scholars have attempted to explain the factors influencing the adoption of MC. As such, this study aims to investigate why consumers decide to adopt MC. The study will examine the nature of MC and assess its prospects and potential.
Moreover, the study will also seek to bring MC into the Chinese context and focus on the possible development of MC in this country. As the MC industry in China is still very much at an early stage of development and is rapidly evolving, we hope that such findings will be constructive to its development and growth. Businesses venturing into the MC marketplace may also find this study useful in gaining insights into consumer adoption behaviour. Various disciplines have considered the adoption process of new information and communications technologies, including communication (Rogers, 2003), consumer behaviour (Gatignon and Robertson, 1985), economics (Kraemer et al., 1992), and information systems research (Knol and Stroeken, 2001). Nevertheless, the Technology Acceptance Model (TAM), first proposed by Davis (1989) and later validated by many other researchers in a variety of academic disciplines, provides one of the most parsimonious, yet robust, models for explaining consumer adoption of new information technologies. Our research is based on a revised version of the original TAM model (Pijpers et al., 2001). The revised model postulates that consumers actively evaluate the usefulness and ease of use of information technology in their decision-making process. Two variables (BI and AU) are removed from the revised model, while demographic characteristics more pertinent to the emerging MC technology are added. The model modification and variable selection reflect the circumstance that MC application is still at an early stage in China and actual consumer system usage is limited; therefore, actual system use (AU) would not be a valid measure for the present study. Similarly, behavioural intention to use (BI) is often overestimated in self-report studies because no perceived risk or financial consequences are involved.
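The core TAM relationships described above (PEOU influences PU, and both influence ATU) can be illustrated with a small correlational sketch. The Pearson routine is standard; the eight 5-point Likert responses below are invented for illustration and are not the study's actual survey data or analysis method.

```python
# Pearson correlations among the revised-TAM constructs (hypothetical data).
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical 5-point Likert responses from eight respondents.
peou = [4, 2, 5, 3, 1, 4, 5, 2]   # perceived ease of use
pu   = [4, 3, 5, 3, 2, 4, 5, 2]   # perceived usefulness
atu  = [5, 2, 5, 3, 1, 4, 4, 2]   # attitude toward using MC

# TAM posits PEOU -> PU and (PEOU, PU) -> ATU; positive correlations
# are consistent with those paths.
r_peou_pu  = pearson(peou, pu)
r_pu_atu   = pearson(pu, atu)
r_peou_atu = pearson(peou, atu)
```

In a full analysis these bivariate checks would be replaced by regression or structural equation modeling, but the sketch shows the pattern of positive associations the revised model predicts.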
Business vs. Leisure Travelers: Their Responses to Negative Word-of-Mouth
Dr. Young “Sally” Kim, Shenandoah University, Winchester, VA
Despite the importance of word-of-mouth, very little is known about how existing customers react to negative word-of-mouth (NWOM). This study examines whether and how existing customers respond to NWOM. Using concepts developed in the existing literature, the study examines the roles of attribution, trust, and purpose of travel in customers’ responses to NWOM. The study tests its hypotheses using a self-administered mail survey of customers on their lodging experiences. Customers’ attribution (i.e., attributing the negativity of the information to the company vs. the communicator) is found to be significant. Trust is also found to have a significant influence on customers’ intentions to spread positive WOM following a negative WOM incident. Finally, the study finds that business and leisure travelers respond to NWOM differently. Word-of-mouth marketing, buzz marketing, viral marketing, referral marketing: all these terms appear frequently in newspapers and business magazines, as well as academic journals, pointing to the trend of companies actively promoting customer-to-customer communication about their products and services. Effective communication with customers has become a daunting task over the years because of media fragmentation (e.g., numerous cable TV options, the Internet), the large volume of messages presented to customers daily, the number of choices (brands) available to customers, and consumers’ busy lifestyles (e.g., little time spent in front of the TV). Faced with this challenging communication environment, many companies have begun to realize the need to leverage the power of word-of-mouth (WOM). Information disseminated through WOM is usually perceived as more credible and powerful than the company’s own advertising message (Traylor and Mathias 1983).
Thus, some companies (e.g., Sony-Ericsson) have employed creative marketing communication programs in which company-hired actors pose as real consumers of the product and lure potential customers (passersby on a busy street) in an attempt to influence their perceptions of the product (e.g., a Sony-Ericsson cellular phone with camera, Vespa) (Business Week 2001). These companies engage in such tactics because they believe WOM is effective at acquiring new customers and expediting product diffusion. Recognizing the importance of WOM, researchers have examined issues related to positive WOM ranging from its antecedents (e.g., altruism, satisfaction, mavenism) and consequences (e.g., product adoption), the relationship between satisfaction and WOM behavior, and the relationship between advertising and WOM, to the identification of profiles of opinion leaders who are likely to spread WOM (Anderson 1998; Bone 1995; Brown, Barry, Dacin and Gunst 2005; Hogan, Lemon and Libai 2004; Laczniak, DeCarlo, and Ramaswami 2001; Traylor and Mathias 1983). These studies underscore the fact that positive WOM has a significant influence on potential customers’ perceptions, attitudes, and behavioral tendencies (e.g., purchase). Studies on positive WOM are extensive in scope. Studies on negative WOM (NWOM), however, are somewhat limited, especially with respect to its influence on existing customers. Whether existing customers’ attitudes and behavioral tendencies change after exposure to NWOM has not been well researched. This issue is an interesting and important one given that customer-generated messages (e.g., blogging) are on the rise and many companies have shifted their strategic focus from customer acquisition to customer retention. This study is one of very few (e.g., Ahluwalia, Burnkrant and Unnava 2000) to examine the effect of NWOM on existing customers and their evaluations of the company.
More specifically, the study attempts to answer the questions of how existing customers respond to NWOM and whether their responses differ depending on purpose of travel, a commonly used market segmentation variable in the travel industry. To address these issues, we examine the roles of attribution, trust, and purpose of travel in customers’ behavioral intentions to spread positive WOM about the company following exposure to NWOM. Hypotheses were developed based on the existing literature and tested using data collected via a self-administered mail survey. The study makes several contributions. First, unlike previous studies that examined the effect of NWOM on potential customers, this study examines its influence on existing customers. Few companies have a good understanding of how their current customers respond to NWOM, although this information is critical to customer retention. The results offer important implications as to how companies should allocate resources to monitor and manage customers’ NWOM activities. Second, whereas most previous studies ignored the attributional process in which a receiver (i.e., a customer who is exposed to WOM) engages, the current study investigates the role of attribution in customers’ perceptions of the message. Understanding customers’ attribution processes will help companies identify the types of NWOM messages that are more destructive and design effective communication strategies to minimize the damage. Finally, by examining the role of purpose of travel and understanding how it influences customers’ behaviors following a negative WOM incident, this study suggests how companies can develop effective communication strategies.
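The kind of segment comparison described above, testing whether business and leisure travelers differ in their intention to spread positive WOM after an NWOM exposure, can be sketched with Welch's t statistic. The 7-point intention scores below are invented for illustration; they are not the paper's actual survey data, and the study's own analysis may use a different test.

```python
# Welch's t statistic for comparing two traveler segments (hypothetical data).
from math import sqrt

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal variances."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)  # sample variance of a
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)  # sample variance of b
    return (ma - mb) / sqrt(va / len(a) + vb / len(b))

# Hypothetical 7-point intention-to-spread-positive-WOM scores after NWOM.
business = [5, 6, 4, 5, 6, 5, 4, 6]
leisure  = [3, 4, 2, 3, 4, 3, 2, 4]

t = welch_t(business, leisure)  # large |t| suggests the segments differ
```

A t statistic well above conventional critical values (roughly 2 at typical sample sizes) would support treating purpose of travel as a meaningful segmentation variable in responses to NWOM.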
Roadmap of Co-branding Positions and Strategies
Wei-Lun Chang, Tamkang University, Taiwan
Co-branding is a marketing arrangement that utilizes multiple brand names on a single product or service. Basically, the constituent brands can assist each other in achieving their objectives. Co-branding is an increasingly popular technique for transferring the positive associations of one company’s product or brand to another. In the absence of a clearly defined strategy, co-brand mergers are frequently driven by short-term goals, leading to mistrust and failure. In this paper, we identify the critical factors of a successful co-branding strategy, a co-branding position matrix, and co-branding strategies, respectively. We also draw on real-world cases to demonstrate our notions. Finally, this research aims to provide clues and a roadmap for future research on co-branding issues. Co-branding is a marketing arrangement that utilizes multiple brand names on a single product or service; it can also be seen as a type of strategic alliance between two parties. Basically, the constituent brands can assist each other in achieving their objectives. Creating strategic alliances by engaging in co-branding has become increasingly popular across many industries. A successful co-branding strategy has the potential to achieve excellent synergy that capitalizes on the unique strengths of each contributing brand. Co-branding is an increasingly popular technique for transferring the positive associations of one company’s product or brand to another. In other words, creating synergy with existing brands creates substantial potential benefits of various kinds.
As Gaurav Doshi notes in a 2007 article, such synergy: (1) expands the customer base (more customers), (2) increases profitability, (3) responds to the expressed and latent needs of customers through extended product lines, (4) strengthens the competitive position through a higher market share, (5) enhances product introductions by enhancing the brand image, (6) creates new customer-perceived value, and (7) yields operational benefits through reduced costs. The philosophy behind co-branding is to attain greater market share, increase revenue streams, and improve competitive advantage through customer awareness. A great deal of attention has been focused on selecting a co-branding partner: not only the essentials of the potential parties but also the series of steps in the selection process. Correspondingly little attention, however, has been paid to co-branding position and successful strategies. Brand managers’ co-branding strategy decisions have, by and large, tended to be reactive, following surface factors rather than strategy. Not surprisingly, such a reactive approach can be deeply damaging. At best, it results in confusion and conflict among brand managers who hold differing views regarding the co-brand mandate. At worst, it results in turf wars between rival managers over who owns the top post-merger brands. Alternatively, brand managers can find themselves caught up in choosing among types and strategies of co-brands to reach ambitious revenue targets in order to allay market fears. The result, all too often, is a single-winner strategy that seriously compromises customer expectations, employee morale, and long-term competitiveness. The most damaging response, however, is to do nothing at all, allowing the pre-merger brands to go their separate ways. The expected synergies of a co-brand can easily turn into a nightmare if the co-brand is placed in an inappropriate position with the wrong strategy.
In the absence of a clearly defined strategy, co-brand mergers are frequently driven by short-term goals, leading to mistrust and failure. A recent report claims only one in five brand mergers succeeds (Riiber et al., 1997). Clearly, something more strategic than an adaptive response is needed to harness the market potential of a merger. A clear co-branding strategy is critical in both directions: positioning the new brand so that both firms align their efforts behind a common set of goals, and finding an appropriate co-branding tactic to create a win-win situation. Such a strategy should take into account the core competences and goals of both firms. It should also define a suitable co-brand architecture, in other words, the desired relationship between brands within the merged portfolio. Clear decisions on the four key aspects of co-branding position (coalition, coordination, collaboration, and cooperation) provide a robust underpinning for successful co-brand mergers. Once the essentials of a co-branding strategy are clear, five critical factors emerge for a successful co-branding strategy. This can be referred to as a 5C co-branding strategy (Figure 1). These factors can assist a company in organizing a successful and appropriate co-branding strategy from a macro perspective. It is important to consider the transition costs for two companies embarking on a co-branding strategy. In the joint-venture type, the two companies have the same responsibility for both profits and liabilities (e.g., Sony and Ericsson); thus, the transition cost for both parties is symmetric. In the merger type, however, one party (e.g., BenQ) must take responsibility for the other (e.g., Siemens). BenQ merged with Siemens and had to provide constant financial support. Unfortunately, BenQ’s pockets were not deep enough to absorb the cost of turning around the money-losing Siemens unit. The cost for the two parties was thus asymmetric.
The general lesson: the transition costs of co-branding seriously affect the future for the companies involved.
The Effects of Frequent Commercial Exposures on Affective and Cognitive Response of Chinese-American Audience
Dr. Fu-Ling Hu, Hsing Wu College, Taipei, Taiwan
Chao Chao Chuang, Hsing Wu College, Taipei, Taiwan
The frequency of TV advertising exposure is one of the crucial determinants of increasing brand recall and awareness. From the consumer’s point of view, however, as the quantity of advertising rises, the attitude of the audience toward the advertising message worsens and leads to tedium. The TV commercial is one of the most effective vehicles for communicating with the Chinese-American target audience. Thus, with rising TV advertising costs, it is important to measure commercial effects and investigate consumers’ responses to frequent commercial exposures. In the first part, the author discusses the research problem in terms of TV commercial issues related to the Chinese-American audience. The second part reviews the literature related to consumers’ responses to frequent advertising stimuli. Finally, the paper summarizes the literature review and proposes implications, as well as directions for future research. In the past few decades, most research on advertising effects has used two types of psychological responses, affect and cognition, to measure consumers’ attitudes toward advertisements and purchase intentions. A large number of studies have measured individual differences in either the affective or the cognitive system when a person is exposed to a stimulus. Even where the interrelation of cognitive and affective processing styles has been examined, the focus has been on consumers’ responses to advertisements in general (Ruiz and Sicilia, 2004). Previous research has rarely discussed in depth how these two systems interdependently influence consumers’ behaviors toward advertisements for a particular type of medium or target market. In fact, no studies have tried to investigate Chinese-Americans’ affective and cognitive responses to frequent TV commercial exposures.
Many current studies examine the effects of frequent commercial exposures on responses in general, which may not correctly represent a specific advertising medium or target audience. Because of the intimate relationship between affect and cognition, this research reviews the two response systems simultaneously to examine consumers’ responses to frequent commercial exposures. Most recent scholars reserve the term “affect” to describe an internal feeling state (Cohen, Pham, & Andrade, 2006). Affective responses usually involve emotions, feelings, and moods (Olson & Peter 2005); consumer attitude toward a commercial is an example of an affective response. Cognitive responses contain knowledge, meanings, and beliefs; brand recall or awareness, for instance, is a cognitive response. Consumer behavior researchers have pointed out that individual differences among media and message recipients may lead to wide variations in the manner in which people respond to appeals (Moore, Harris, & Chen, 1995). When selecting media, the marketing manager is interested in assessing the effectiveness of advertising exposure frequency, the degree to which the target market will see an advertising message in a specific medium. This paper, however, focuses only on the impact of TV advertising on the Chinese-American audience. The 2000 Census showed that the Chinese-American population grew exponentially during the preceding decade, with a total estimated population of 3 million, representing 25 percent of the total Asian population in the United States (U.S. Census Bureau, 2006). The importance of Chinese-Americans in the consumer market, in particular, is receiving more and more attention.
Demographic data also show that well over 80 percent of the Chinese-American population still prefer to speak their native language and rely on in-language media; television remains the principal medium influencing their purchasing decisions (ETTV America, 2006). Kotler and Armstrong (1999) stated, "the media habits of target consumers will affect media choice" (p. 458). TV advertising has become one of the most effective media linking manufacturers and consumers in the Chinese-American market. Commercials have played an important role over the years in selling every product imaginable, from household products to goods and services to political campaigns. Television commercials have become so pervasive that most advertisers who target the Chinese-American audience claim it is impossible to run a successful marketing campaign without airing a good television commercial. Thus, understanding the effects of TV advertising on consumers' attitudes and purchase intentions is the key to attracting the Chinese-American segment. In addition, the Census 2000 report also indicated that the Chinese-American segment is one of the most affluent groups in the country in terms of annual purchasing power, making it an especially important consumer segment for retailers and marketers nationwide. Little attention has been given to this point, so it is important to investigate how frequent advertisement exposures can positively affect consumers' purchase behaviors. Nevertheless, existing research offers very limited exploration of commercial effects on the Chinese-American target market. From the marketer's point of view, such questions are not just academic; they translate into very practical issues. For example, should advertisers spend the entire advertising budget on a specific type of medium? How many exposures would give the commercial the best effect in influencing purchase behavior?
In other words, it is not known how consumers respond to multiple commercial exposures.
Factors Affecting M&A Success: A Starting Point for the Topic Renaissance
Gabriele Carbonara, Parthenope University of Naples
Caiazza Rosa, Parthenope University of Naples
Despite a continuing increase in the number of mergers and acquisitions (M&A), it has been argued that there is an insufficient theoretical understanding of the elements that affect the success of the M&A process. This paper contributes to research on mergers and acquisitions by developing an integrative framework of the M&A process that highlights the main elements leading merged firms to achieve a competitive advantage. In a perspective of renaissance and renewal of this complex topic, the paper proposes a framework, based on classical sources and well-established literature, that improves on previous knowledge by identifying the main factors affecting firms' ability to create distinctive competencies through M&A. This framework, which summarizes the most delicate and important aspects of the M&A process affecting the success of the operation, is a starting point for a renaissance and renewal of this complex topic. The ambition of the framework is nothing less than to identify the factors affecting the competitive advantage derived from combining two businesses and to provide guidance that helps managers avoid destroying rather than creating value. Extraordinary events such as M&A expose firms to high uncertainty stemming from the difficulty of combining two entities with different structures, processes, procedures, systems, and cultures. Despite the empirical evidence that, on average, mergers fail to create value for the acquiring firm's shareholders, corporations continue to employ this strategy at ever-increasing rates. These findings raise the question: which factors affect the success of the M&A process? To answer this question, we review the previous literature on the M&A process, seeking to identify the main factors that affect each phase of this complex process and thus its success.
This article thus represents a starting point for a renaissance and renewal of this complex topic. Starting from the assumption of resource-based theory (RBT) that competitive advantage can flow, at a point in time, from the ownership of portfolios of idiosyncratic and difficult-to-trade assets and competencies, we identify the creation of distinctive competencies as the main goal of the M&A process. Based on this assumption and on well-established literature, we develop a framework that highlights the main elements of the M&A process affecting the creation of distinctive competencies. We use the process of creating competitive advantage based on distinctive competencies as the evaluation criterion of M&A success, instead of short-term financial performance indicators. The factors affecting the creation of distinctive competencies from the combination of two different firms seem to be an interesting field of analysis: because they affect the success of this complex, extraordinary operation, they help explain why firms decide to merge. Whilst the difficulty in making M&A succeed has been traced back to an inadequate strategic rationale and a lack of pre-deal evaluation, researchers seem to agree that it is the post-deal implementation phase that presents the greatest challenge (Jemison and Sitkin 1986, Morosini 1998, Child et al. 2001).
The challenges of post-acquisition integration have been studied from the perspectives of managing the integration process (Haspeslagh and Jemison 1991, Ranft and Lord 2002) and understanding the impact of differences in organizational and national cultures (Sales and Mirvis 1984, Nahavandi and Malekzadeh 1988, Buono and Bowditch 1989, Cartwright and Cooper 1992, Véry et al. 1996, Weber 1996, Véry et al. 1997). To fill the theoretical gap in the literature, this paper contributes to M&A research by developing a framework of the M&A process that identifies the factors affecting the creation of distinctive competencies. This integrative framework points to the critical importance of the strategic and structural factors that shape the recombination of the combined firms' competencies and lead the firm to achieve a competitive advantage. The Resource-Based View of the firm (RBV) is an influential theoretical framework for understanding how competitive advantage within firms is achieved and how that advantage might be sustained over time (Barney, 1991; Prahalad and Hamel, 1990). RBV assumes that firms can be conceptualized as bundles of resources, that those resources are heterogeneously distributed across firms, and that resource differences persist over time (Penrose, 1959). Based on these assumptions, researchers have theorized that when firms have resources that are valuable, rare, inimitable, and non-substitutable (VRIN attributes), they can achieve sustainable competitive advantage by implementing fresh value-creating strategies that cannot be easily duplicated by competing firms (Barney, 1991).
Leadership and the Future: Gen Y Workers and Two-Factor Theory
Arthur M. Baldonado, Ph.D.
Janice Spangenburg, Ph.D.
Today's workforce is as diverse as ever. This paper explores the motivational needs of Gen Y and their impact in the workplace based on Herzberg's two-factor theory of motivation. The participants consisted of Gen Y students at the University of Hawaii. The author used a researcher-developed written survey as the research methodology. The findings of the study revealed that the Gen Y cohort placed great importance on both hygiene and motivator factors in their motivational needs. Growth and personal life were both important to Gen Y students. Managers must be flexible in their managerial approach to Gen Y workers. Millennials, Echo Boomers, Generation Y (Gen Y), and Nexters are some of the descriptors used to identify and label the newest generational cohort entering the workforce (Dulin, 2005). Referred to as Gen Y in this study, Gen Y workers (individuals born after 1980) have become a stronger and larger group in the workplace, with more than 29 million members entering the workforce in the last seven years (Martin & Tulgan, 2001). Known as "Nexters" (Schlichtemeier-Nutzman, 2002, p. 35) because they are the next wave of employees and "Echo Boomers" (p. 36) because they are similar in size to their Baby Boomer predecessors, Gen Y began entering the workplace during the summer of 2000. Gen Y, born after 1980, is 81 million strong, comprising 30% of the current United States population (Dulin, 2005). With very focused and involved Boomer parents, Gen Y grew up with busy schedules, with sports, music lessons, and scheduled play-dates occupying much of their time. Gen Y has always had input in family decisions because their parents constantly communicated with them (Lancaster & Stillman, 2002). In the workplace, Gen Y appears to be more idealistic than Generation X, but a bit more realistic than Baby Boomers. Researchers describe Gen Y as "considerably more optimistic and more interested in volunteerism than Generation X" (Schlichtemeier-Nutzman, 2002, p. 49).
Global communication and access to instant information via the World Wide Web have influenced the beliefs and expectations of Gen Y and have directly transformed Gen Y's attitudes toward work, work ethics, values, job expectations, and overall job satisfaction (Martin & Tulgan, 2001). Schlichtemeier-Nutzman (2002) noted that the scope of Gen Y's potential impact is still being studied as its members have begun entering the workforce. This paper is stimulated by the findings of one author's dissertation study exploring the workplace motivational and managerial factors affecting Gen Y. The research focused largely on the motivational factors affecting Gen Y based on Herzberg's two-factor theory model. As Gen Y employees continue to become a stronger and larger group in the workplace, managers and executives must develop flexible and varied managerial behaviors to effectively motivate and manage this cohort. Thus, conducting research on generational differences, similarities, and needs is essential if managers are to be equipped "with the knowledge required to make informed decisions and implement strategies for creating environments that people want to become part of and stay in" (Legault, 2002, p. 4). Failure to address generational issues may cause misunderstandings and miscommunications (Smola & Sutton, 2002). Bridging the generation gap between cohorts is vital if organizations are to thrive in the future. The majority of leaders in organizations overlook generational diversity. Sixty-six percent of leaders within organizations surveyed by the Equal Employment Opportunity Commission (EEOC) indicated that they had no age profile information on their workplace, while eighty-one percent of leaders within those organizations failed to include cross-generational issues in their diversity training (Dulin, 2005).
Nevertheless, understanding generational diversity will improve the competitive edge of an organization, increase recruitment and retention, and ultimately create a stronger organization. Conversely, intergenerational conflict can have a catastrophic impact on morale and productivity, and it has the potential to lead to EEO complaints and lawsuits (Dulin, 2005). Furthermore, becoming familiar with and understanding the emerging workforce should be a priority for both researchers and practitioners (Dulin, 2005). In agreement, Kunreuther (2003) wrote that younger employees might be motivated and challenged in ways different from earlier generations. Frederick Herzberg (1968) developed the motivator/hygiene theory, or two-factor theory. Herzberg began his research in the mid-1950s by surveying 200 engineers and accountants about their motivators (Wagner & Hollenbeck, 2001). By combining his findings with those of other researchers using different frameworks, Herzberg developed a model of motivation based on the assumption that the factors eliciting job satisfaction and motivation are independent of those producing job dissatisfaction.
The Determinants of Noneconomic Factors Affecting Economic Growth
Dr. Sontachai Suwanakul, Alabama State University, Montgomery, AL
The scope of economics has broadened considerably in recent decades. Economists have recently reached a consensus that a successful explanation of economic performance must go beyond narrow measures of economic variables to encompass political and social forces. Culture can affect economic outcomes through economic preferences. Religious beliefs affect the economy by fostering personal traits: stronger religious beliefs stimulate growth because they help sustain individual behavior that enhances productivity, and high trust is conducive to economic growth. Economic growth is influenced by many factors, including culture, tradition, religion, and ethnicity. However, economists typically neglect the influence of these factors, even though they recognize that successful explanations of economic growth must go beyond narrow measures of economic variables to encompass cultural and social forces. Classical economists, such as Adam Smith, John Stuart Mill, Karl Marx, and Max Weber, were comfortable using cultural explanations for economic phenomena. Unfortunately, since the 1930s institutional economics has gradually given way to the neoclassical theory of general competitive equilibrium, which formalized the analysis of idealized competitive markets. From the perspective of general equilibrium theory, nonmarket factors, such as culture and religion, were not phenomena of intrinsic interest. This is because the notion of culture is so broad, and the channels through which it can enter economic discourse so ubiquitous, that it is difficult to design testable, refutable hypotheses (Manski, 2000; Guiso et al., 2006). In the decades immediately after World War II, as economic theory increased its mathematical sophistication and the set of tools at its disposal expanded, no need was felt to introduce additional potential explanatory variables, especially those hard to measure.
Not only did economics lose interest in its relation with culture, but as economics became more self-confident in its own capabilities, it often sought to explain culture as a mere outcome of economic forces. In recent years, better techniques and more data have made it possible to identify systematic differences in people's preferences and beliefs and to relate them to various measures of cultural legacy. These developments suggest an approach to introducing culturally based explanations into economics that can be tested and may substantially enrich our understanding of economic phenomena. Since the 1970s, economists have sought to broaden the field's scope while maintaining the rigor that has become emblematic of economic analysis. Economic theorists need to know what classes of social interactions are prevalent in the real world; otherwise, theory risks becoming only a self-contained exercise in mathematical logic. Major theoretical developments in microeconomics, labor economics, and macroeconomics have played important roles in launching this new phase. According to the American Heritage Dictionary, culture is defined as the totality of socially transmitted behavior patterns, arts, beliefs, institutions, and all other products of human work and thought characteristic of a community or population. The British anthropologist Sir Edward Burnett Tylor introduced the term culture as scientists use it today. In his book Primitive Culture (1871), Tylor defined culture as "that complex whole which includes knowledge, belief, art, morals, law, and any other capabilities and habits acquired by man as a member of society". Tylor's definition includes three of the most important characteristics of culture: (1) Culture is acquired by people, because it consists of learned patterns of behavior rather than the biologically determined ones that are sometimes called instinctive. As children grow up, they learn culture from the people around them.
The process by which people become part of their native culture is called enculturation. (2) A person acquires culture as a member of society: culture includes the ways that the members of a society relate to one another. Human beings could not deal with one another unless culture defined what to do and expect. (3) Culture is a complex whole that social scientists can break down into simple units called cultural traits. Most large groups have a set of cultural traits that meet the group's needs and ensure its survival; such a set of traits can be called a culture. Sociologists and anthropologists have accumulated a wealth of field evidence on the impact of culture on economic behavior. In recent years, economists have begun to apply their analytical frameworks and empirical tools to the issue of culture and economic outcomes. However, economists define culture in a sufficiently narrow way in order to identify a causal link from culture to economic outcomes. For instance, Guiso et al. (2006) define culture as those customary beliefs and values that ethnic, religious, and social groups transmit fairly unchanged from generation to generation. While this definition is not comprehensive, it focuses on those dimensions of culture that can impact economic outcomes. In addition, by restricting the potential channels of influence to prior beliefs and values or preferences, this definition provides an approach that can identify a causal effect from culture to economic outcomes.
Optimization of Multi-Currency Holdings for Multinational Corporation
Dr. Ken Hung, Professor, National Dong Hwa University, Taiwan, R.O.C.
Yu-Ching Ho, National Dong Hwa University, Taiwan, R.O.C.
For multinational corporations, holding multiple currencies is necessary to support daily operations. The objective of this study is to build a new multi-currency model that optimizes foreign currency holdings to minimize total costs and increase capital-investment efficiency. The ARMA-GARCH model is used to forecast foreign exchange rates so that the model reflects real conditions. Finally, this study uses a genetic algorithm (GA) to solve the optimal order quantity problem of foreign currency for a local multinational corporation. The results can help the decision makers of multinational corporations optimize their multi-currency holdings. With the growth of international business, more and more companies hold foreign currencies as part of their working capital. For multinational corporations, holding multiple currencies is likewise necessary for daily operations. Holding foreign currency can avoid losses from volatile exchange rates and reduce transaction costs. However, it also creates opportunity costs from the returns that could be earned on other investments. How to optimize multi-currency holdings to minimize total costs therefore becomes an important task for multinational corporations. Bessembinder (1994) and Chakrabarti (2000) indicated that holding a foreign currency position imposes opportunity costs and exchange rate risk. Lin and Chen (1997) used Hamidi and Bell's (1982) inventory model to analyze data from a multinational corporation. Their results showed that the bank's U.S. dollar demand is normally distributed and that over-holding of U.S. dollars results in inefficient capital usage and higher holding costs. To build the new multi-currency model, the ARMA-GARCH model is used to forecast the foreign exchange rate so that the model reflects real conditions.
The generalized autoregressive conditional heteroskedasticity (GARCH) model is now one of the most widely used models for forecasting the time-varying volatility observed in many financial return series, especially stock returns, interest rates, and foreign exchange markets (Bollerslev, 1986; Engle, 1982). The genetic algorithm (GA) is used as an optimization technique for decision-making problems. It can be an attractive optimization tool for managers, who can design their own cost function for determining the order quantity. According to the previous literature, the GA has already been applied to the EOQ inventory model (Braglia and Gabbrielli, 2001; Mondal and Maiti, 2002; Stockton and Quinn, 1993). This study is based on Hamidi and Bell's (1982) single-item inventory control model and constructs the optimal foreign currency position under the objective of minimizing holding cost. Transaction data on foreign currencies are non-public and hence difficult to obtain. Therefore, this study uses the U.S. dollar, Japanese yen, and Hong Kong dollar as sample data, drawn from a local branch of a multinational corporation in Taiwan. The remainder of this paper is organized as follows: Section 2 describes the multi-item foreign currency model used in this study; Section 3 presents the results and analysis; Section 4 concludes. In this study, based on the Hamidi and Bell (1982) single-item inventory model, a multi-currency model is built to minimize total holding cost and thereby solve the optimal multi-currency order quantity problem. In addition, the ARMA-GARCH model is used to forecast the foreign exchange rate, which serves as a constraint of the multi-item foreign currency holdings model. Finally, this study uses a GA, implemented with the Evolver 4.0 software package, to solve the optimal order quantity problem. This study attempts to modify the Hamidi and Bell (1982) single-item inventory model to build a multi-currency model.
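To make the volatility-forecasting step concrete, the following is a minimal, self-contained sketch of a GARCH(1,1) recursion and its one-step-ahead variance forecast. It is an illustration only: the parameter values (omega, alpha, beta) and the simulated series are hypothetical, not the paper's estimates from the Taiwanese currency data.

```python
import numpy as np

# Minimal GARCH(1,1) illustration: simulate a return series and compute
# the one-step-ahead conditional variance forecast. Parameter values are
# illustrative, not estimated from the paper's data.
rng = np.random.default_rng(0)
omega, alpha, beta = 0.05, 0.10, 0.85   # alpha + beta < 1 for stationarity

n = 500
returns = np.zeros(n)
sigma2 = np.zeros(n)
sigma2[0] = omega / (1 - alpha - beta)  # unconditional (long-run) variance
for t in range(1, n):
    # Conditional variance depends on last period's squared return and variance.
    sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    returns[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

# One-step-ahead variance forecast given the latest return and variance:
forecast = omega + alpha * returns[-1] ** 2 + beta * sigma2[-1]
print(f"one-step-ahead variance forecast: {forecast:.4f}")
```

In the paper's setting, a forecast of this kind (with an ARMA mean equation added) would feed the exchange-rate constraint of the multi-currency holdings model.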
In order to construct the total holding cost function of the multi-currency model, the variables are defined as follows:
S: the selling order quantity, i.e., the demand for foreign currency during a specific period;
B: the buying order quantity, i.e., the supply of foreign currency during a specific period;
D = S - B: the random net demand for foreign currency during a specific period;
f(D): the probability density function of D;
F(D): the cumulative distribution function of D;
I: the initial holding or supply of foreign currency before ordering;
Q: the order quantity of foreign currency;
Z = I + Q: the current supply of foreign currency during a specific period;
H: total holding cost, assumed to be the interest cost;
K: total shortage cost, assumed to be the opportunity cost, including emergency order fees and profits forgone because of the shortage;
A: installation cost, including setup and transportation costs, A ≥ 0;
α: unit insurance cost of ordering the foreign currency, α ≥ 0;
R(Q) = A + αQ: total ordering cost when the order quantity equals Q; and
C(Z|I): expected total cost function given that the foreign currency supply before ordering (initial holding) equals I and the current foreign currency supply equals Z.
The total holding cost H and the total shortage cost K are defined as follows:
H = h(Z - D): total holding cost equals the unit holding cost times the excess supply of foreign currency; and
K = k(D - Z): total shortage cost equals the unit shortage cost times the excess demand for foreign currency,
where h is the unit holding cost, assumed to be the per-period (e.g., daily) interest rate; k is the unit shortage cost, assumed to be the per-period (e.g., daily) opportunity cost; Z - D is the excess supply of foreign currency; and D - Z is the excess demand for foreign currency.
In addition, all variables above must obey the following assumptions: 1. The random net demand for foreign currency, D, follows a normal distribution, i.e., D ~ N(μ, σ²). 2. A "placing an order" policy is adopted: the quantity Q of foreign currency is ordered at the beginning of every period. Therefore, when the supply before ordering I and the current foreign currency supply Z are given, the expected total cost function C(Z|I) is the sum of the total holding cost, the total shortage cost, and the total ordering cost. Hence, according to Hamidi and Bell (1982) and Lin and Chen (1997), when the bank holds foreign currency under the variables and assumptions above, the expected total cost function of the single-foreign-currency model can be built accordingly.
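The expected cost function introduced here did not survive into this text. Under the definitions above, the standard single-item form in the Hamidi-Bell tradition would be the following (a reconstruction consistent with those definitions, not necessarily the authors' exact expression):

```latex
C(Z \mid I) \;=\; A + \alpha (Z - I)
\;+\; h \int_{-\infty}^{Z} (Z - D)\, f(D)\, dD
\;+\; k \int_{Z}^{\infty} (D - Z)\, f(D)\, dD .
```

Setting the derivative with respect to Z to zero gives the familiar critical-fractile condition F(Z*) = (k - α)/(k + h), from which the optimal order quantity Q* = Z* - I follows when Z* > I.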
Does Trust influence Supply Chain Management?
Dr. Vojko Potocan, University of Maribor, Maribor, Slovenia
Organizations face the question of how to assure their survival and development in the global economy. For their business it is important to establish trust, in work and behavior, among members inside and outside the organization. Creating trust is also important for supply chain management, as an area of integrated work within the frame of the organization and/or as integrated cooperation between different organizations. Supply chain management is a concept that includes the entire supply chain, from the supply of raw materials through manufacture, assembly, and distribution, all the way to the end customer. The nature of the relationships among the different linkages within the supply chain can be viewed on a continuum that runs from highly integrated at one extreme to temporary, short-term trading commitments at the other. An organization can use either the traditional or the modern approach to forming the supply chain. The traditional supply chain approach is based on vertical informational connections. But for most organizations vertical connections are not enough, so they extend them with horizontal connections. In this frame the organizational dilemma is how to assure an appropriate level of trust, if we understand trust as a necessary capacity and ability for the appropriate work and behavior of the participating members. This contribution discusses two theses: 1) how to define the role and importance of trust in supply chain management, and 2) how to improve the level of trust among supply chain members. A dominant logistics philosophy throughout the 1980s and into the early 1990s involved the integration of logistics with other functions in organizations in an effort to achieve the enterprise's overall success (Nigel, 1996; Rushton et al., 2001; Murphy and Wood, 2004; etc.).
The early to mid-1990s witnessed a growing recognition that there could be value in coordinating the various business functions not only within single organizations but across organizations as well, in what can be referred to as a supply-chain (SC) management philosophy (Nigel, 1996; Rushton et al., 2001; Waller, 2003; Christopher, 2005; Slack et al., 2006). The SC is not a totally new concept: organizations have traditionally depended on suppliers and have traditionally served customers (Blanchard, 2006; Christopher, 2005; Hugos, 2006; Slack et al., 2006). Supply chain management (SCM), in turn, can be defined as the systemic, strategic coordination of the traditional business functions and the tactics of these business functions within a particular company and across businesses in the SC, in order to improve the long-term performance of the companies and the entire SC (Nigel, 1996; Handfield and Nichols, 2002; Cohen and Roussel, 2004). When we talk about the SC, the modern organizational approach suggests that companies must recognize the interdependencies of major functional areas within, across, and between firms. In turn, the objectives of individual SC participants should be compatible with the objectives of other participants. The degree to which objectives are realistically defined and attained depends on the level of holism of thinking, decision making, and action. Trust is a central and important question in the modern discussion of SCM (Potocan, 2002; Potocan, 2006). Trust provides a necessary basis for the division of work and for forming new ways to accompany and direct the (connected) work of organizational members within the frame of SCM. In our contributions we define (and research) trust as a value and as a competence. This contribution discusses two questions about trust and SCM. First, we discuss how to define the role and importance of trust in supply chain management.
Second, we discuss how to improve the level of trust among supply chain members. In general, the SC concept originated in the logistic literature, and logistics has continued to have a significant impact on the SCM concept (See: Heitzer and Rendel, 2003; Anklesaria, 2007; Bolstorff and Rosenbaum, 2007). Since the early to mid 1990s there has been a growing body of literature focusing on SCs and SCM, and this literature has resulted in a number of definitions for both concepts (Rushton et al., 2001; Handfield and Nichols, 2002; Potocan et al., 2004 – 2007; Bolstorff and Rosenbaum, 2007; Jonsson, 2008). It’s important that we share a common understanding of what is meant by SC and SCM. A SC “encompasses all activities associated with the flow and transformation of goods from the raw material stage (extraction), through to the end user, as well as the associated information flow” (Bolstorff and Rosenbaum, 2007; Jonsson, 2008). In reality, several types of SCs exist and it’s important to note several key points. First, SCs are not a new concept in that organizations traditionally have been dependent upon suppliers, and organizations have in turn traditionally served customers. SCs can be much more complex (in terms of the number of participating parties) than other business/organizational linkages, and coordinating complex SCs is likely to be more difficult than doing so for less complex chains. SCM can be defined as “the systematic, strategic coordination of the traditional business functions and the tactics across these business functions within a particular company and across businesses in the SC, for the purpose of improving the long-term performance of the individual companies and the SC as a whole” (Waller, 2003; Blanchard, 2006; Slack et al., 2006; Bolstorff and Rosenbaum, 2007). 
Successful SCM requires companies to accept an enterprise-to-enterprise point of view, which can lead organizations to accept practices and adopt behaviors that haven't traditionally been associated with buyer-seller interactions. Moreover, successful SCM requires companies to apply the systems approach across all organizations in the SC. When applied to SCs, the systems approach suggests that companies must recognize the interdependence of major functional areas within, across, and between firms. In turn, the goals and objectives of individual SC participants should be compatible with the goals and objectives of other participants in the SC. How does SCM change relations between companies (Potocan, 2004; Potocan et al., 2004 – 2007)? Conventional wisdom suggests that company-versus-company competition will be superseded in the twenty-first century by SC-versus-SC competition. While this may occur in a few situations, such competition may not be practical in many instances because of common or overlapping suppliers or the lack of a central control point, among other reasons. Rather, a more realistic perspective is that individual members of an SC will compete, based on the relevant capabilities of their supply network, with a particular emphasis on serving or sourcing immediately adjacent suppliers or customers.
The Corporate Governance Characteristics of Financially Distressed Firms: Evidence from Taiwan
Chingliang Chang, Kainan University, Taiwan
The recent financial distress and bankruptcy of US corporate giants suggest that boards have not performed their fiduciary duties well. This paper examines which corporate governance characteristics, if any, are correlated with financial distress, using a sample of Taiwanese listed firms. Hypotheses are tested by combining outside directors, CEO duality, equity ownership by insiders, female directors, board size, multiple directorships, and director tenure. Results from the logistic regression analysis show the importance of board independence: boards with a larger percentage of outside directors are less likely to fall into financial distress than boards with a smaller percentage. Another result indicates a positive correlation between board size and financial distress. This paper concludes with some thoughts on the need to control for CEO power when modeling the efficacy of corporate governance. The financial distress of American corporate giants such as Enron, WorldCom, Adelphia, and Andersen has led to criticism of boards of directors and accusations that corporate boards aren't doing their jobs. "We think Enron board was asleep at the switch and fell down on the job," said Senator Carl Levin, a member of the Senate investigations subcommittee investigating the meltdown of Enron (NYSSCPA.org News Staff, 2002). Many of the corporate governance reforms are intended to further the ability of directors serving on corporate boards to perform their function effectively. To what extent will these reforms prevent similar financial distress from recurring (Petra, 2006)? Many Taiwanese listed firms experienced financial distress in 1998 and 1999, the period of the Asian financial crisis. These firms were accused of over-leveraging and over-investment in the stock market.
It was not uncommon to observe market manipulation in which these firms set up wholly owned subsidiaries to buy back the shares of the parent firms.(1) To obtain extra funds for more repurchases, the controlling shareholders pledged their stocks to financial institutions. If the stock price went up, the capital gains went into their pockets; if the stock price turned down, they tended to embezzle corporate funds to support the stock price for fear of a stop-loss sale of the pledged stocks by the financial institutions. Financial distress seems to be an inevitable consequence for these firms. Taiwanese financial analysts, investors, and accounting professionals are seeking early warning information for financial distress. This paper, thus, examines relationships between corporate governance and financial distress for prediction purposes. While ownership structure (e.g., stock pledges) and board composition (e.g., the board seats held by the largest shareholder) have been examined by Lee and Yeh (2004), other corporate governance characteristics (e.g., outside directors and female directors on the board) that may be related to a firm’s financial performance have not been included in their examination. This paper seeks to correlate a fuller, more robust set of characteristics with the probability of financial distress. This paper proceeds as follows. First, we offer a presentation and discussion of hypotheses. Next, we describe the methodology. We then discuss our results, after which we offer concluding thoughts. This paper explores possible relationships between corporate governance and financial distress. 
Instead of focusing on a single aspect or even several factors of corporate governance, this paper looks at seven factors: 1) whether outside/independent directors serve on the board; 2) whether firms have CEO-board chair duality; 3) whether insiders own equity; 4) whether female directors serve on the board; 5) board size; 6) whether directors serve on multiple boards; 7) director tenure. If some characteristics turn out to be significant, then not only can they serve as early warning information or a prediction model for financial distress, but firms and governance experts should also pay closer attention to these characteristics. On the other hand, if no characteristics stand out as significant, then researchers interested in possible causes of financial distress must turn to unethical behaviors of directors and probe more deeply into the power dynamics of the board. Hypothesis 1: A Board with a Smaller Percentage of Outside Directors Has a Greater Probability of Financial Distress. Baysinger and Butler (1985) observed that firms with above-average performance have a higher percentage of outside directors than firms with below-average performance. Hermalin and Weisbach (1998) predicted that the probability of independent directors being added to the board rises following poor firm performance. They also found that the proportion of outside directors on the board decreases over the CEO’s career. This finding suggests that a board with few independent directors is associated with poor performance. Osterland (2004) reported that several firms have set up formal board-level committees to monitor corporate disclosure. Outside directors, by virtue of their position and presumed independence, are likely to ensure transparency. 
This prompted Ajinkya, Bhojraj, and Sengupta (2005) to predict that firms with a larger proportion of outside directors have a greater propensity to issue forecasts that are more accurate and less optimistically biased. With this hypothesis, this paper tests whether director independence correlates with financial distress. In this study, the variable (director independence) is measured by the percentage of outside directors, calculated as the number of outside directors divided by the total number of directors. Hypothesis 2: A Firm with the CEO Serving as Chair of the Board of Directors Has a Greater Probability of Financial Distress. Rechner and Dalton (1991) found evidence that, on accounting-based measures of ROE, ROI, and profit margin, firms that separate the positions of CEO and board chairman outperform firms that combine the two positions. Jensen (1993) argued that the combined structure is an inappropriate way to design one of the most critical power dynamics in the firm.
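The logistic regression analysis referred to above fits a binary distress indicator against governance variables. As a rough illustration only (the data and firms below are hypothetical, not the paper's Taiwanese sample, and a real study would report maximum-likelihood coefficients with standard errors), a minimal pure-Python sketch of such a fit might look like this:

```python
import math

def sigmoid(z):
    # numerically safe logistic function
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

def train_logit(X, y, lr=0.02, epochs=20000):
    """Fit P(distress) = sigmoid(w0 + w.x) by batch gradient descent."""
    n, k = len(X), len(X[0])
    w = [0.0] * (k + 1)                      # w[0] is the intercept
    for _ in range(epochs):
        grad = [0.0] * (k + 1)
        for xi, yi in zip(X, y):
            p = sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
            err = p - yi
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj - lr * g / n for wj, g in zip(w, grad)]
    return w

def predict(w, xi):
    return sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))

# Hypothetical firms: [fraction of outside directors, board size];
# y = 1 means the firm became financially distressed.
X = [[0.10, 12], [0.20, 11], [0.15, 13], [0.50, 7], [0.60, 6], [0.55, 8]]
y = [1, 1, 1, 0, 0, 0]
w = train_logit(X, y)
```

On this toy data the fitted signs mirror the paper's findings (more outside directors and smaller boards go with lower distress probability); in practice one would use a statistics package (e.g., a Logit routine) rather than this bare-bones gradient descent.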
How Machs Behave: Self and Peer Ratings
Dr. Loretta F. Cochran, Arkansas Tech University, Russellville, AR
Dr. David W. Roach, Arkansas Tech University, Russellville, AR
Dr. L. Kim Troboy, Arkansas Tech University, Russellville, AR
Faculty members are increasingly asked (and want) to challenge students to think critically about a variety of issues, including their own thinking/performance and the thinking/performance of others. Faculty members try to prepare students for the “real” world. At the same time, class size is increasing at many universities. Though not a substitute for instructor evaluations, self and peer evaluations do offer a mechanism for providing feedback regarding performance. Self and peer appraisals force students to examine critically their own thinking/performance and the thinking/performance of others. In their future work lives, students will almost certainly face the issue of evaluation, as rater, ratee, or both. It is in this context that the current study investigates the use of peer appraisals in the classroom. This study builds on previous research that found students could provide accurate ratings. Given accurate student ratings, we consider whether student ratings are affected by whether raters expect to interact with and discuss their ratings with ratees. The ability of specific personality attributes, such as Machiavellianism, to moderate this relationship is also considered. Specifically addressed is the impact of the interaction between rater anonymity and power on leniency in subjective evaluation of a qualitative assignment. Preliminary results indicate that there are some differences requiring further exploration. Faculty members are increasingly asked (and want) to challenge students to think critically about a variety of issues, including their own thinking/performance and the thinking/performance of others. Faculty members, especially those in business programs, try to prepare students for the “real” world. At the same time, class size is increasing at many universities. Though not a substitute for evaluations by the instructor, self and peer evaluations do offer a mechanism for providing evaluative information regarding performance. 
Self and peer appraisals also force students to examine critically their own thinking/performance and the thinking/performance of others. In their future work lives, students will almost certainly face the issue of evaluation, as rater, ratee, or both. It is in this context that the current study investigates the use of peer appraisals in the classroom. In previous research, students seem to be able to give ratings that mirror those given by faculty (Cochran, Roach, and Mason, 2004). Hence, though an individual student’s ratings may be suspect, the average of several students’ ratings provides a reasonably accurate measure of peer performance (Cochran et al., 2004). This study builds on this finding and explores the impact of individual differences on student ratings. What impact does rater anonymity have? If anonymity has influence, does a student’s sense of personal power have any influence on peer rating? Peer rating in a classroom setting has been used for decades by faculty wanting to improve the accuracy of faculty scores (Cook, 1981) and to provide faculty with some indication of how equitable student groups are when assigning group work (Hass, Hass and Wotruba, 1998). Chen and Lou (2004), using expectancy theory, found that students favored using peer evaluation to determine peers’ grades in a group project setting. Performance appraisals are used as the basis for significant decisions in organizations (pay, promotion, selection for training, grades, etc.). In recent years, several authors have argued that multi-source feedback, or 360º feedback, provides more complete and accurate feedback than feedback from a supervisor alone. Several studies support the idea that overall performance improves following multi-source feedback (Atwater, Roush, & Fischtal, 1995; Johnson & Ferstl, 1999; Reilly, Smither, and Vasilopoulos, 1996). In this study, we consider influences on these ratings. 
The use of ratings assigned in a performance appraisal assumes that the ratings are accurate. It is possible, however, that ratings are greatly affected by factors unrelated to actual performance, including political considerations. As one manager put it: “There is really no getting around the fact that whenever I evaluate one of my people, I stop and think about the impact – the ramifications of my decisions on my relationships with the guy and his future here. I’d be stupid not to. Call it being politically minded, or using managerial discretion, or fine tuning the guy’s ratings, but in the end I’ve got to live with him, and I’m not going to rate a guy without thinking about the fallout. There are lots of games played in the rating process and whether we (managers) admit it or not we are all guilty of playing them at our discretion” (manager interviewed by Longenecker, Sims, and Gioia, 1987). The idea that raters are able but often unwilling to provide accurate ratings is supported by research that manipulates the extent to which raters are held accountable for their ratings. Raters may be more likely to provide favorable public appraisals when they think they will be held accountable for their ratings. For example, Mero and Motowidlo (1995) reported that raters made to feel accountable provided more favorable ratings than raters who were not held accountable. Roach and Gupta (1990; 1992) examined the impact of rating purpose and anticipated future interaction in a laboratory experiment designed to heighten realism. Roach and Gupta (1990; 1992) videotaped students in pre-existing teams discussing and analyzing a case for a management class. Students who analyzed the case and completed the appraisals at mid-semester (and who expected to and did interact with teammates for the rest of the semester) provided more lenient ratings than students who analyzed the case and completed the appraisals in the last week of the semester.
A Study of Management Efficiency for International Resort Hotels in Taiwan
Jung-Feng Cheng, National Cheng-Kung University
Dr. Chia-Yon Chen, National Cheng-Kung University
Dr. Chun-Chu Liu, Chang Jung Christian University
In recent years, globalization has increased and the barriers to travel have gradually disappeared, thus promoting rapid growth in tourism worldwide. In 2002, the Taiwanese government began the “tourist double-up strategy,” which drove up tourism. Hotel enterprises expanded their branches, and national and international foundations and organizations all came in for investment. The doubling of the markets also increased competition. This paper endeavors to understand the business efficiency involved in operating resort hotels by reviewing related documents that comment on the advantages and disadvantages of recreational hotels, comparing the relative efficiency of business types, and analyzing improvement measures for inefficient hotels. This research applies Data Envelopment Analysis (DEA) to calculate the production efficiency, pure technical efficiency, scale efficiency, slack variable analysis, average overall technical efficiency score for independent and chain hotels, average scale efficiency and average pure technical efficiency, efficiency rank for distributed locations, scale returns, and sensitivity analysis for each of the 13 evaluated international resort hotels in Taiwan. The results provide information for future management enterprises. In recent years, globalization has increased and the barriers to travel have gradually disappeared. The rise of developing countries and increases in free trade all promote rapid growth in tourism worldwide. In 2002, the Taiwan government began the “tourist double-up strategy.” The total number of tourists traveling to Taiwan in 2002 was 2,977,692 and the foreign exchange income was US$4,584 million. In 2006, the tourist number increased to 3,519,827 with an income of US$5,214 million. Hotel enterprises expanded their branches, and national and international foundations and organizations all came in for investment. 
Therefore, the mission for hotel management was to increase promotional measures, bring out distinctive products, reinforce advertising, improve service quality, create a friendly accommodation environment, and strengthen operational quality to attract tourists’ expenditure and increase their satisfaction. While every business wants a great outcome with little effort, the reality is that the efficiency of an organization is proportional to the amount of service provided. The process of measuring efficiency is called efficiency evaluation, but there is no objective evaluation standard in management. The usual methods of evaluating efficiency are: 1. The proportion method, which is often used in organizations. The data can be obtained and calculated easily, but the method can deal with only a single input and output, so it does not show what needs to be improved. Since organizations in general have multiple inputs and outputs, this method is not a good one. 2. The regression analysis method, which estimates the relationship between independent and dependent variables. The variables must be assumed to satisfy a linear relationship, and the estimated residual term is then used to evaluate the efficiency of each unit. The efficiency distribution must satisfy the assumption of normality, and the method can be applied to only a single output; its evaluation basis is the average of the multiple evaluation units, thus neglecting the special factors of any single unit. 3. Data Envelopment Analysis (DEA), which uses a mathematical programming model to calculate the best multipliers for inputs and outputs. This method can calculate the relative efficiencies of many decision units with multiple inputs and outputs, and the evaluated efficiencies are the best results under objective conditions. The method can also evaluate the efficiencies of each unit over different periods and whether efficiency has increased or decreased. 
Therefore, this efficiency evaluation method is accepted by most people. The use of DEA to analyze hotel efficiency is common in the literature. For example, Anderson, Fok and Scott (2000) used the CCR model to analyze hotel efficiency. Brown and Ragsdale (2002) applied the CCR model and cluster analysis to evaluate benefits of hotel market competition. Chang and Hwang (2003) applied CCR super-efficiency and Malmquist models to evaluate the changes in operating efficiency of hotels in Taiwan during 1994–1998. Chiang, Tsai, and Wang (2004) applied CCR and BCC models to analyze the efficiency of Taipei hotels in 2000. Barros, Mascarenhas, and Maria (2005) applied technical and distribution efficiency to analyze the efficiency of small chain hotels. Barros, Peter, and Dieke (2008) used CCR and BCC models to discuss the efficiency of African hotels. However, few studies apply the CCR model with sensitivity analysis to evaluate hotels’ advantages and disadvantages while using slack variable analysis to address whether investment is efficient and to provide analysis of improvement measures. Thus, this paper uses DEA to discuss the efficiency of international resort hotel management in Taiwan and to answer the above questions.
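The input-oriented CCR model referenced throughout this literature solves one linear program per decision-making unit (DMU): it shrinks the unit's inputs by a factor θ until they are dominated by some non-negative combination of all units' inputs that still produces at least the unit's outputs. A minimal sketch, assuming SciPy is available and using hypothetical hotel data rather than the paper's 13 resort hotels:

```python
from scipy.optimize import linprog

def ccr_efficiency(inputs, outputs, o):
    """Input-oriented CCR (constant returns to scale) efficiency of DMU o.

    inputs[j][i]  = input i of DMU j;  outputs[j][r] = output r of DMU j.
    Solves:  min theta  s.t.  sum_j lam_j * x_j <= theta * x_o,
             sum_j lam_j * y_j >= y_o,  lam >= 0.
    Decision vector: [theta, lam_1, ..., lam_n].
    """
    n = len(inputs)
    m, s = len(inputs[0]), len(outputs[0])
    c = [1.0] + [0.0] * n                      # minimise theta
    A_ub, b_ub = [], []
    for i in range(m):                         # input constraints
        A_ub.append([-inputs[o][i]] + [inputs[j][i] for j in range(n)])
        b_ub.append(0.0)
    for r in range(s):                         # output constraints
        A_ub.append([0.0] + [-outputs[j][r] for j in range(n)])
        b_ub.append(-outputs[o][r])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.fun

# Hypothetical data: 4 hotels, inputs = [rooms, staff], output = [revenue].
X = [[100, 60], [120, 80], [80, 50], [150, 90]]
Y = [[90], [90], [80], [100]]
scores = [ccr_efficiency(X, Y, o) for o in range(len(X))]
```

A score of 1 marks a hotel on the efficient frontier; a score below 1 is the proportion to which its inputs could be shrunk, which is the starting point for the slack and sensitivity analyses the paper describes.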
Traditional Small Business Web Site Attitudes, Usage, and Satisfaction
Dr. Steven J. Anderson, Austin Peay State University
Dr. John X. Volker, Austin Peay State University
Dr. Michael D. Phillips, Austin Peay State University
This paper presents the results of a web site usage survey administered to a working population of 292 traditional small businesses and members of the local chamber of commerce of a midsized southern town, which produced a net usable response rate of 22%. The traditional small businesses were selected out of a chamber of commerce membership of 1900 by excluding franchised, governmental, insurance, medical, and professional businesses and those employing more than 500 people. The questionnaire, which contained a five-point Likert-scaled web site attitude section, a specific traditional small business web site ownership and usage section, and a five-point Likert-scaled web satisfaction section, was developed and administered on a telephone interview basis in November of 2006. Results indicated that overall traditional small business web site usage is greater than literature reviews on small business might suggest, with traditional small business web site ownership and usage reported at 60%. Significant differences between traditional small business demographics relative to web site ownership are identified. Significant differences between traditional small business web site attitude responses relative to web site ownership are also identified. A rank-ordered profile of traditional small business web site owners’ levels of web site satisfaction is discussed, with generally high levels of satisfaction reported. The internet and e-commerce have become an ever more pervasive presence and activity in the economy of both the United States and the world. Although one might expect small businesses to act in an entrepreneurial manner and embrace the web and the internet, a review of the literature indicates that this is not necessarily the case. An SBA (2000) report indicated that selling or e-commerce over the web was not common for most small firms. 
Given the continued growth and importance of the internet and the World Wide Web, it becomes imperative that the role of small business in its development be continually examined. This study examines the use of and satisfaction with web sites in firms defined as traditional small businesses. The authors addressed three broad questions in this study: (1) what are small business web site usage attitudes, (2) which businesses are using the web to improve their business profile, and (3) how satisfied are they with the results of their web site usage? Close examination of traditional small business web usage at a micro-level in a limited geographic area produced results different from those expected from prior macro-level research, in that traditional small business web site usage rates were high at 60%, with concurrently high levels of web site usage satisfaction. The sample in this study is unusual in that it is more narrowly defined, relative to the type and definition of small business, than is found in other studies of small businesses’ involvement with the web. The authors attempted to isolate and survey those firms which truly fit the traditional view of small business and to exclude businesses which had grown larger, were unduly influenced by businesses not classified as small, or were franchises, which would follow franchise guidelines and benefit from a franchise web site. The survey results, analysis, and discussion therefore represent the typical small business in most communities, in that the respondents are primarily service and retail business “proprietorships.” Many have adapted to the information age, but they have not walked away from their community roots even as web sites have allowed them to reach a larger geographic market than was possible before. 
This study also examines the satisfaction that the surveyed owners have with their decision to adopt a web site to enhance their businesses. This study addresses perhaps the authors’ greatest concern relative to internet and web site use by small business: the traditional small business owner may recognize neither the potential of the World Wide Web nor the double-edged sword it represents. If small businesses can expand their markets to other geographical areas using web sites, it follows that competing businesses in those other areas can also expand into their markets using web sites. Although one might expect small businesses to act in an entrepreneurial manner and embrace the web and the internet, a review of the literature indicates that this is not necessarily the case. An SBA (2000) report indicated that selling or e-commerce over the web was not common for most small firms. However, the SBA (2000) also stated that it expected a majority of small businesses to be using the web to conduct business by 2002. Important current research questions are therefore “Has this occurred?” and, if so, “What are the characteristics, attitudes, and levels of satisfaction associated with this increased usage?” These are addressed here as a matter of hypothesis testing.
Gray Competitive Model for Comparison of Household Electronic Appliance Industries in Taiwan and Mainland China
Dr. Kuo-Wei Lin, Hsuan Chuang University, Hsinchu City, Taiwan, R.O.C
Dr. Che-Chung Wang, Shih Chien University, Taipei, Taiwan, R.O.C
The Gray System Theory is characterized by trying to find the best decision-making plan by overcoming the conflicts and trade-offs between objectives. It is most useful when information is incomplete and all of the multiple objectives are to obtain the greatest satisfaction. This theory has been widely used in management science; therefore, this study aims to apply the characteristics of the Gray Theory to develop a model for evaluating the competitiveness of the household appliance industry. Moreover, this model is very simple to apply. A fair and objective analysis of the competitiveness of each enterprise can be done as long as the financial statements published by the listed companies in past years are available. Considering that the household appliance industries in Mainland China are now rapidly expanding, the evaluative indexes in this study are established by analyzing the competitive situation and strategies of the household appliance market in Mainland China. Finally, the financial data published by listed companies in both Taiwan and Mainland China are used to verify the practicality of this model. The household appliance industry is one of the most important industries affecting people’s lives. It accounts for an extremely large proportion of consumer spending, so all highly industrialized nations attach importance to its development. When Mainland China began to reform in 1979, its domestic household appliance industry lagged far behind other countries. However, after more than 20 years of efforts to attract foreign investment and technology, Mainland China has accumulated substantial capital, technology, and experience. Therefore, the current household appliance industry in Mainland China is no longer as weak as in the past. 
In recent years, cashing in on its advantage of cheap labor and land costs, China’s household appliance industry has even been promoted to the international market. This development is severely threatening the survival of the household appliance industry globally. According to the data (Table 1) provided by the China Statistical Yearbook (2002), in the 1980s the production value of Mainland China’s household appliance industry was rather low. After 2000, however, it was obvious that Mainland China’s household appliance industry was rapidly expanding with amazing force. The annual growth rates of production volume of Mainland China’s four basic household appliances, which include refrigerators, air conditioners, washing machines and television sets, each exceeded 10%. This has seriously encroached on the survival space of household appliance industries in developed regions. Therefore, in analyzing household appliance industry competitiveness, the model for evaluating competitiveness must be established by analyzing the current competitive situation and strategies of Mainland China’s household appliance industry. After more than 20 years of development, Mainland China has gradually established a well-developed system in its household appliance industry, which has now entered world market competition. By observing the changes in its competitive behaviors, the following three points about the situation of Mainland China’s household appliance industries should be noted by the household appliance industry: Due to its low production costs, many renowned household appliance companies have moved their production plants to Mainland China. Examples abound. In August 2001, Japan’s Panasonic Co. announced the relocation of its microwave oven plant from Kentucky, USA to China. Toshiba Co. also announced that it would stop manufacturing CRTs of different types in Japan, with the production lines moved to China. 
The three major Japanese TV manufacturers, Toshiba, Sanyo, and Panasonic, subsequently moved their TV manufacturing sites to China as well. Other products, such as video cameras/camcorders, photocopiers, air conditioners, and optical storage drives, which brought Japan so much foreign exchange earnings at the end of the 20th century, have also gradually been moved to Mainland China. Therefore, Su (2002) asserts that Mainland China is now the largest production base of household appliance products.
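The competitiveness model described above builds on gray relational analysis: each firm's indicator sequence is normalized, compared with an ideal reference sequence, and graded by closeness. A minimal sketch with hypothetical indicator data (the three indicators, the firms, and the conventional distinguishing coefficient ρ = 0.5 are all illustrative assumptions, not the authors' actual evaluative indexes):

```python
def gray_relational_grades(data, rho=0.5):
    """Gray relational analysis with an ideal reference sequence.

    data[k] = indicator vector for firm k; all indicators are assumed
    benefit-type (larger is better) and non-constant across firms.
    Returns one gray relational grade per firm in [0, 1].
    """
    n_ind = len(data[0])
    cols = list(zip(*data))
    # 1. normalise each indicator to [0, 1]
    norm = [[(v - min(col)) / (max(col) - min(col))
             for v, col in zip(row, cols)] for row in data]
    # 2. deviations from the ideal firm (1.0 on every normalised indicator)
    deltas = [[abs(1.0 - v) for v in row] for row in norm]
    dmin = min(min(row) for row in deltas)
    dmax = max(max(row) for row in deltas)
    # 3. gray relational coefficients; the grade is their mean
    coef = [[(dmin + rho * dmax) / (d + rho * dmax) for d in row]
            for row in deltas]
    return [sum(row) / n_ind for row in coef]

# Hypothetical firms x indicators (e.g. ROA %, revenue growth %, market share)
firms = [[5.0, 12.0, 0.30],   # firm A
         [3.0,  8.0, 0.20],   # firm B
         [1.0,  4.0, 0.10]]   # firm C
grades = gray_relational_grades(firms)
```

The firm closest to the ideal sequence earns the highest grade, giving the kind of ranking the model derives from the listed companies' published financial statements.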
Incidence of Social Protection Expenditure Convergence in the European Union
Dr. Tiia Püss, Mare Viies, Tallinn University of Technology, Tallinn, Estonia
Reet Maldre, Tallinn University of Technology, Tallinn, Estonia
Notwithstanding that social policy decisions are made at the EU Member State level, operating in a common economic space as well as pursuing common objectives under the Open Method of Coordination push social policies in Member States towards convergence. This article seeks to evaluate and verify the presence or absence of convergence in social protection expenditure in EU-15 countries and to find the most significant conditional factors that have influenced this process. We study the incidence of convergence in social protection expenditure in the EU using absolute and conditional β-convergence tests. Although the organisation and financing of social protection systems are the responsibility of each European Union (EU) Member State, the EU has a growing coordinating role through EU legislation to ensure adequate protection of people. The promotion of closer cooperation among the EU Member States for the modernisation of social protection systems started with the implementation of the Open Method of Coordination (OMC) adopted at the EU Lisbon summit in March 2000. In the framework of the OMC, common social targets are negotiated and common indicators to monitor the situation are defined, but the measures for achieving the objectives are decided by the Member States. The OMC guides the Member States to work toward the common social policy goals of Europe, hence toward the harmonisation of social levels. Notwithstanding that social policy decisions are made at the level of EU Member States, economic integration as well as operating in a common economic space also indirectly push social policies towards convergence. Social protection expenditure in the EU-15 increased in 1991–2005 in connection with the expanding need for and rise in the level of social protection. 
Average social protection expenditure in the EU-15 was 25.2% of gross domestic product (GDP) in 1991 and 27.8% in 2005, although in 2005 Sweden spent 32% of GDP on social protection while Ireland spent only 18.2% (European, 2003; European, 2008). Social protection expenditure per capita in the same period increased from 3840 to 7364 in purchasing power standards (PPS), although the amount and rate of change of the expenditure vary considerably from country to country. The expenditures were the biggest in Luxembourg (12946 PPS) and Sweden (8529 PPS), and the smallest in Portugal (4086 PPS) and Spain (4775 PPS). While EU-15 countries spend on average 27.8% of GDP on social protection, in the new Member States which joined in 2004 the expenditures on social protection are today up to one half smaller. In 2005, average social protection expenditure in the EU-10 accounted for 17.6% of GDP, and per capita expenditure was only 3105 PPS. At the same time, growth of expenditure in these countries has been fast, primarily as a result of rapid economic growth. The EU-25 countries with average or above-average ratios of social protection expenditure to GDP accounted for 39.6% of the EU population; the group with ratios between 22.3% and 27.2% accounted for 30.0% of all EU inhabitants; and those spending between 17.4% and 22.3% of their GDP on social protection accounted for 21.9%. Countries that spent less than 17.4% of their GDP on social protection accounted for only 8.5% of the EU population. Expenditure was the biggest in Sweden (32.0%) and France (31.5%) and smallest in Latvia (12.4%) and Estonia (12.5%). According to the economic theory of convergence, the economic development level of less developed countries should approach the level of more advanced countries with the same economic resources or fundamentals. Accordingly, estimates of social protection expenditure indicate faster development in recent years of countries with a lower level of social protection. 
Many reforms accomplished there have helped to raise the social protection level in less developed countries. A comparison of social protection expenditure and an evaluation of its changes should also characterise the social convergence process between EU countries. This article seeks to evaluate and verify the presence or absence of convergence in social protection expenditure in EU-15 countries and to find the most significant conditional factors that have influenced this process. We assess the social protection level in EU countries with the help of several indicators. The share of social protection expenditure in GDP gives the most general idea about the convergence process within and between the Member States. Another indicator we use is social expenditure per capita, expressed on the basis of PPS to reduce the influence of price differences between countries. We study the incidence of convergence in social protection expenditure in the EU, using absolute and conditional β-convergence tests. Our previous tests for the presence of convergence in four major functions of social protection (old age and survivors, sickness/health care and disability, family/children, unemployment) and in health care expenditure in the EU (Püss et al., 2003; Kerem et al., 2005) indicate the presence of convergence to a greater or lesser extent. We use harmonised data on social protection expenditure collected by Eurostat and the European Commission. Our sample covers the period 1991–2005. The countries under study are the EU-15 Member States, as the Member States that acceded in 2004 and later did not belong to the common economic space of the EU in this period.
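The absolute β-convergence test used in this line of work regresses average annual growth of expenditure on its initial (log) level across countries; a negative slope means countries starting lower grow faster, i.e. converge. A minimal cross-section sketch with hypothetical per-capita figures (not the Eurostat series used in the article, and without the significance tests a real study would report):

```python
import math

def beta_convergence(initial, final, years):
    """Absolute beta-convergence test (cross-section OLS).

    Regresses average annual growth (1/T) * ln(final/initial) on
    ln(initial).  A negative slope b indicates convergence: countries
    with lower initial expenditure grow faster.
    Returns (intercept, slope).
    """
    x = [math.log(v) for v in initial]
    y = [math.log(f / i) / years for f, i in zip(final, initial)]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

# Hypothetical per-capita social protection expenditure (PPS), 1991 vs 2005:
# low-spending countries grow faster than high-spending ones.
init = [2000, 3000, 5000, 8000]
fin = [4200, 5400, 7500, 9600]
a, b = beta_convergence(init, fin, 14)
```

The conditional variant simply adds the conditioning factors (e.g. GDP growth, demographic structure) as further regressors, so that convergence is measured toward each country's own steady state rather than a common one.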
Land Installment Contracts: Legislative Swing in Seller’s Favor is a Necessity in Today’s Declining Housing Market
Sherri R. Heyman, Esq, Law firm of Rosenburg, Martin, Greenberg, LLP, Baltimore, MD
Benjamin A. Neil, Esq, Towson University, Towson, MD
In today’s sluggish but bargain-filled real estate market, purchasers unable to qualify for conventional bank financing may still be able to acquire their dream homes if seller financing, in the form of land installment contracts, is available (See References). This alternative financing scenario affords a less-than-creditworthy purchaser the means necessary to become a homeowner. At settlement, the purchaser takes possession of the property, while the seller retains actual legal title. During the contract term, the purchaser makes installment payments to the seller until the contract price has been paid in full. In the event of the purchaser’s default, the purchaser forfeits all sums paid under the contract, as well as the property. Due to the often harsh consequences of a default under a land installment contract, many states have enacted statutes eliminating property forfeiture as an available remedy. In addition, many courts of equity have either refused to enforce forfeiture provisions or have severely limited their use in land installment contracts. This pro-purchaser intervention by legislatures and judges across the United States has created uncertainty as to the enforceability of forfeiture provisions, thereby creating a disincentive for sellers to extend this form of financing to needy purchasers. This Article will examine the shift in the United States from strict enforceability of forfeiture provisions in land installment contracts, to the imposition of procedural requirements prior to forfeiture, and even to the total elimination of forfeiture as a remedy. It will then address the impact such sweeping change has had on the viability of the land installment contract. Finally, the authors propose the reversal of the paternalistic protections afforded purchasers, in favor of extending a protection to sellers in the form of strict enforcement of forfeiture provisions, or the guaranteed application of all contract remedies available at law or equity. 
A return to the favorable treatment of land installment contracts is an economic necessity in today’s tenuous real estate market. In the United States, the three primary financing transactions employed to secure a loan for real property are the mortgage, the deed of trust, and the land installment contract (Cribbet and Johnson, 1989). While all three forms of financing enable a purchaser to borrow money to finance the acquisition of real property, the decision to choose one particular method over another often depends on the remedies available to the seller in the event of the purchaser’s default. The most frequently used forms of conventional financing are the mortgage and the deed of trust. The mortgage is an instrument recorded among the land records as security for a promissory note given by the purchaser (Osborne, 1970). A conventional mortgage is a device used by a purchaser desiring to buy property, transferring to the bank or lender a lien or defeasible legal title in exchange for the price, or a portion of the price, of the property. Usually, the purchaser provides the seller with an initial down payment and a note. In return, the seller delivers to the purchaser the deed to the property. In the event of the purchaser’s default, the seller must seek judicial intervention by suing on the note or foreclosing on the mortgage. The second financing method, the deed of trust (Nelson & Whitman, supra note 1, §1.6 at 11; id. §7.19 at 513), is an instrument that takes the place of a mortgage, in which legal title is granted to one or more trustees as security for repayment of the debt. The deed of trust is more favorable to lenders because it provides a third-party trustee with a power of sale and does not require judicial foreclosure. While the aforementioned conventional financing methods would certainly be preferred by a seller, such third-party financing is not always available to credit-challenged purchasers.
Where a seller has no choice other than to offer his own financing of the purchase price via a land installment contract, the seller must be afforded some benefit for undertaking the additional risk. Because a distressed purchaser is unlikely to be able to supply a large down payment, the seller must be in a position to realize his security quickly upon default, and the standard forfeiture provision found in land installment contracts is the most efficient means of accomplishing such realization. Like conventional financing methods, land installment contracts obligate the purchaser to make installment payments of principal and interest to the seller, pay taxes and insurance on the property, and repair and maintain the property. Unlike third-party financing, land installment contracts do not typically subject purchasers to substantial transaction costs. Seller-offered financing obviates the need to incur costs associated with title insurance, property appraisals, credit reports, recordation, and the like (Mixon, 1970). Unlike a third-party lender, the seller has firsthand knowledge of the property securing payment of the debt; therefore, the lending seller has no real need to incur title-related costs. Unfortunately for unsuspecting purchasers, without the benefit of a third-party lender insisting that the purchaser obtain title insurance, the purchaser may not discover that the property is encumbered. Since the purchaser under a land installment contract does not acquire title until all contract sums have been paid to the seller, the initial concern for the marketability of title is postponed until the time of title transfer, which for some purchasers may be too late. Some jurisdictions have attempted to protect purchasers and diminish the possibility of title issues by requiring the seller to warrant marketable title (See Pa. Stat. Ann. tit. 68 §907(a)(1) (1994)).
The state of Maine requires sellers to disclose the existence of encumbrances (See Me. Rev. Stat. Ann. tit. 33 §482(J) (West 1964)), while Maryland has restricted the placing or holding of a mortgage on the property that exceeds the balance due under the land installment contract (See Md. Real Prop. Code Ann. §10-103(d)).
A Study Using the Theory of Planned Behavior Model to Explore a Participation Motivation Scale for a Project Management Training Program
Dr. Ming-shan Chang, Chia-Nan University of Pharmacy and Science
In the era of the knowledge economy, project management has become an essential management science and technology for organizational development. The International Project Management Association (IPMA) was established in Europe in 1965 and has devoted more than 40 years to project management training in industry; it is the longest-established and most prosperous international project management system. Facing oncoming globalized competition, the Taiwan Project Management Association (TPMA) assists in promoting Taiwan’s industrial competitiveness. TPMA introduces the knowledge system of international project management and promotes IPMA-D level project management instructors. At present, 82 colleges and institutions participate and are strategically allied with TPMA. In academia, many studies have explored course participation motivations; however, few have explored the nontraditional orientation. This research used the viewpoint of the Theory of Planned Behavior to build and analyze the participation orientations for the D-level project management instructors’ training program. Using the Theory of Planned Behavior as an evaluation tool for human behavior is the academic mainstream in social science. Therefore, this study constructs, on the foundation of the Theory of Planned Behavior, participation orientations for the D-level project manager qualification of the IPMA international project management knowledge system, appropriate for Taiwan’s universities and colleges. This research found significant participation orientations among project management students, especially behavioral attitude. The Taiwan Project Management Association (TPMA) was founded in June 2002. Its mission statements were: 1) to research the best approaches of project management and promote their application to various industries; 2)
to provide consulting services for businesses and help solve practical project management problems; 3) to disseminate project management through publications and provide communication channels between industry and academia; and 4) to promote project management training courses and cultivate the professionals that businesses need (Taiwan Project Management Association, 2008). After working hard, the Taiwan Project Management Association officially joined the International Project Management Association (IPMA) in October 2004. On becoming an official member of IPMA, TPMA started to promote the project management training program in academia. By the end of October 2008, 82 universities had signed strategic alliances, and 11 project management research and development centers were operating in universities (Taiwan Project Management Association, 2008). In addition to continuing to promote programs in academia, TPMA opened an IPMA-D level project management instructor training class for industry at Taishuo Heavy Industry Limited Liability Company. At the same time, TPMA co-organized a seminar with the South Scientific and Technical Park of Taiwan, named “Seminars for Project Management Practical Application in Academy and Industry”. TPMA also co-organized a seminar with the industrial technology research institution at the Taiwan south learning center, named “IPMA Project Management Knowledge System”. TPMA is developing rapidly (Taiwan Project Management Association, 2008). Accordingly, IPMA-D level project management instructors have become an irresistible mainstream in academia and industry, and the training instructors’ learning motivations are thus a worthwhile research issue. Although there is much academic research on motivational participation orientations, very few cases have been discussed and explored from non-educational aspects.
Because the core value of the IPMA-D level project management instructors’ training courses is to apply knowledge in practice, this research proposes to use the Theory of Planned Behavior to analyze and construct a motivational participation orientation scale for the IPMA-D level international project management instructors’ training courses. Regarding human behavior and motivation, Fishbein and Ajzen (1975) proposed the Theory of Reasoned Action (TRA), a widely accepted behavioral cognition theory in the social psychology domain. The foundation of TRA lies in social psychology, synthesizing the relations among attitudes, norms, and behaviors. The Theory of Reasoned Action is appropriate for behavior under personal volitional control; however, in real life many factors lie outside personal volitional control. For this reason, Ajzen (1985) extended the Theory of Reasoned Action and proposed the Theory of Planned Behavior. Accordingly, this research takes the Theory of Planned Behavior as its foundation and develops questionnaires appropriate for Taiwan’s universities and colleges. The survey questionnaires were designed as a motivational participation orientation scale for the IPMA-D level project management instructors’ training courses. The purpose of this research is to use the Theory of Planned Behavior as a foundation to construct participation motivation scales for the IPMA-D level project management instructors’ training program. The literature review covers three dimensions: the IPMA international project management knowledge system, the Theory of Reasoned Action, and the Theory of Planned Behavior.
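Scale construction of the kind described above is commonly validated with an internal-consistency check. As an illustrative sketch only (the paper does not specify its reliability procedure, and all response data below are invented), Cronbach’s alpha for one hypothetical subscale can be computed as:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items in the subscale
    item_vars = items.var(axis=0, ddof=1)      # per-item sample variance
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses: 6 respondents x 4 "behavioral attitude" items
responses = np.array([
    [5, 4, 5, 4],
    [4, 4, 4, 5],
    [2, 2, 3, 2],
    [5, 5, 4, 4],
    [3, 3, 2, 3],
    [4, 5, 4, 4],
])
print(round(cronbach_alpha(responses), 3))
```

A value above roughly 0.7 is conventionally read as acceptable internal consistency for such a motivation subscale.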
A Study of the Current Learning Organization Profile of Elementary Schools in Pingtung County, Taiwan
Dr. Ching-wen Cheng, National Pingtung University of Education, Taiwan
Shakespeare wrote, “To be or not to be, that is the question.” For an organization in today’s world, the question is to learn or not to learn. In this globally competitive era, learning is the only way to improve one’s ability to survive, and the same holds for an organization. In a Confucian cultural society such as Taiwan, the idea of learning has long been accepted in people’s daily life. Although people in Taiwan believe that learning is very important in one’s life, they usually think that a person only needs to learn while studying at school. In Taiwan, the idea of the learning organization was introduced from Western culture and was not developed from Confucius’ teaching. Is the idea of the learning organization fully accepted in a Confucian society? That is the question the researcher tries to answer. The main purpose of this study is to explore the current learning organization profile of elementary schools in Taiwan. Due to the limitation of research resources, the study surveys only elementary school teachers in Pingtung County, Taiwan. With the publication of the famous book The Fifth Discipline in 1990, the “learning organization” became a significant trend in the academic area of organization development. Not only business leaders but also managers in other areas sought to adopt this strategy to improve their organizations. According to Drucker (1998), the concept of the “learning organization” also includes education organizations, because education organizations have an increasing need to be more effective in an ever-changing environment. In this globally competitive era, learning is the only way for an education organization, too, to improve its ability to survive. There is no doubt that education organizations should be transformed into learning organizations and focus on a continuous improvement process for long-term benefit.
In a Confucian cultural society such as Taiwan, the idea of learning has long been accepted in people’s daily life. Although people in Taiwan believe that learning is very important in one’s life, they usually think that a person only needs to learn while studying at school. In Taiwan, the idea of the “learning organization” was introduced from Western culture and was not developed from Confucius’ teaching. Is the concept of the “learning organization” fully accepted in a Confucian society? That is the question the researcher tried to answer. The main purpose of this study is to explore the current learning organization profile of elementary schools in Taiwan from the viewpoint of the learning organization. Unquestionably, Peter Senge is considered the father of the “learning organization,” because his best-selling book The Fifth Discipline brought the concept broad recognition (Dumaine, 1994). According to Senge (1990), the five disciplines are mental models, personal mastery, systems thinking, shared vision, and team learning. This means that people should give up their old ways of thinking, learn to keep a more open mind with other individuals, take a full view of how their organization functions, form an organizational vision everyone can agree with, and then work together to achieve that vision. Chris Argyris is well known for “double-loop learning” and is a significant pioneer of thinking on how learning can help the organization development process succeed (Abernathy, 1999). Argyris (1994) believes that real change can be produced in an organization when people learn to evaluate their own behaviors, take responsibility for their actions in the workplace, and recognize potentially threatening information for their organization.
He also recommends that corporate management encourage employees to think constantly and creatively about the real demands of the organization. Another significant scholar who has contributed to the field of the “learning organization” is Margaret Wheatley. She suggested that management adopt the ideas of the reintegration of society for organizational development (Brown, 1993; Dennard, 1996). Wheatley (1999) offered her thoughts for developing a new outlook on organizational development: “(1) everything is a constant process of discovery and creating, (2) life uses messes to get well-ordered solutions, (3) life is intent on finding what works but not what is right, (4) life creates more possibilities as it engages with opportunities, (5) life is attracted to order, (6) life organizes around identity, (7) everything participates in the creation and evolution of its neighbors.” The other important scholar related to the “learning organization” is Donald A. Schon. His contributions focused on the field of organizational learning and have had a great impact on this academic area (Lichtenstein, 2000). Schon (1983) built his model describing the process of organizational learning on the idea of reflection-in-action. There are four core themes in this model: “(1) the concept of inquiry as reflection-in-action, (2) constructing a learning dialectic in organizations, (3) the practice of learning how to learn, and (4) the commitment to a new educational paradigm that teaches practitioners how to reflection-in-action” (Schon, 1983).
The Service Quality Indicators Model for Theme Parks in Taiwan
Dr. Jiung-Bin Chin, Hungkuang University, Taichung, Taiwan
Mu-Chen Wu, Director of Library, Hungkuang University, Taichung, Taiwan
Statistics on tourist numbers compiled by the Bureau of Tourism, MOTC (2007) show that visits to the top 5 theme parks totaled more than 4.5 million in 2007, with an estimated annual output value of US$150 million; the parks are JANFUSUN FANCYWORLD, LEOFOO VILLAGE THEME PARK, YAMAY PARK, WINDOW on CHINA THEME PARK, and Formosan Aboriginal Culture Village. This means theme parks are already popular among consumers of different social strata and have become one of the major tourism destinations for domestic travelers. As domestic demand for entertainment and tourism quality rises, and consumers will no longer tolerate the small differences and high similarity among theme parks, future overall development has become a challenge for theme parks. Therefore, theme parks must have definite marketing objectives and understanding, and must emphasize integrating a distinctive theme park atmosphere, high-tech entertainment facilities and shows, and related products and services, so that theme parks offer complete services and functions for the purpose of sustainable business operation. This paper aims to construct a service quality indicators model for theme parks in Taiwan, using the five dimensions of PZB service quality as part of the structure and drawing on current practical operations to develop the preliminary service quality indicators for this research. The Delphi Method is then applied to integrate the opinions of scholars specializing in service quality or tourism and of management staff of the top 5 theme parks in Taiwan, in order to extract the finalized service quality indicators model. Finally, the conclusions and recommendations provide industry decision makers with references for improving overall service quality in the future.
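Delphi rounds of the kind described above are typically iterated until expert ratings converge. As an illustrative sketch only (the paper does not state its convergence rule; a commonly used criterion, assumed here, is an interquartile range at or below some threshold, and the ratings are invented):

```python
import numpy as np

def delphi_consensus(ratings, iqr_threshold=1.0):
    """Flag whether one indicator's expert ratings have converged.

    Assumed rule: consensus when the interquartile range of the
    round's ratings is at or below the threshold.
    """
    q1, q3 = np.percentile(ratings, [25, 75])
    return (q3 - q1) <= iqr_threshold

# Hypothetical 5-point importance ratings from 9 experts, two rounds
round1 = [5, 3, 4, 2, 5, 4, 3, 5, 2]   # dispersed: keep the indicator in play
round2 = [4, 4, 5, 4, 4, 5, 4, 4, 4]   # converged after controlled feedback
print(delphi_consensus(round1), delphi_consensus(round2))
```

Indicators that reach consensus would be retained in the finalized model; the rest go back to the panel with summary feedback.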
Owing to continual changes in Taiwan’s economic development and social structure, and the change in holiday structure caused by implementation of the two-day weekend since 1988, the people of Taiwan have had more time for leisure and entertainment activities. Meanwhile, tourism activities have been booming because of the competent authorities’ aggressive promotion and significant changes in tourism habits. Statistics from the Bureau of Tourism (2003) indicated that domestic tourist visits have increased from 90 million in 2003 to more than 100 million annually nowadays. The report “Investigation of the Influences of the Two-Day Weekend on Domestic Tourism” stated that nearly 80% of interviewees claimed they are more willing to travel domestically and that their selections are mainly entertainment and leisure-related activities. To cope with domestic tourism trends and changes, the leisure and entertainment industry has thus started to provide a greater variety of services, and the industry reports that theme parks benefit the most from the two-day weekend system. Current Taiwan regulations state that the regions or facilities for providing leisure, tourism, and travel include other outdoor leisure and entertainment areas; thus, so-called theme parks refer to outdoor leisure and entertainment facilities built by public or private corporations in scenic areas, within urban planning zones, or on other leisure-purpose land. However, as traditional theme parks’ designs ignore or lack theme atmosphere building and integration of the available entertainment facilities, most theme parks possess entertainment functions without being able to provide diversified entertainment and leisure products and complete service planning and design, such as dining, accommodation, transportation, and entertainment.
Theme parks simulate a themed ambiance to provide tourists with entertainment and leisure experiences through complete and comprehensive planning and design. Therefore, the amusement parks that were once in fashion can no longer satisfy domestic tourists’ demands and have already been replaced by theme parks that integrate high-tech entertainment facilities, distinctive atmosphere building, and entertainment shows and activities. Statistics on tourists in major tourism and leisure regions in Taiwan compiled by the Bureau of Tourism, MOTC (2007) stated that visits to the top 5 privately owned theme parks in 2007 were: JANFUSUN FANCYWORLD (1,253,291), LEOFOO VILLAGE THEME PARK (940,577), YAMAY PARK (910,080), WINDOW on CHINA THEME PARK (718,351), and Formosan Aboriginal Culture Village (717,640), amounting to more than 4.5 million visits in total and creating an estimated annual output value of US$150 million. This indicates that theme parks are popular among people of various social strata and are among the major favored destinations in domestic tourism.
An Investigation of the Day-of-the-Week Effect on Stock Returns in Mauritius
Sawkut Rojid, M. Phil., University of Mauritius
Boopen Seetanah, M. Phil., University of Technology, Mauritius
Roodredevi Jhagdambi, MSc, University of Mauritius
The day-of-the-week effect is one of the most important calendar anomalies observed in stock markets all over the world. This effect implies that a significant difference in stock returns is observed on different days of the week. This paper supplements the literature by bringing additional evidence on the day-of-the-week effect for an emerging African stock exchange, namely Mauritius, using daily data on the Stock Exchange of Mauritius Index (SEMDEX) from July 1989 to June 2008 and the GMM approach. The analysis did not find, in general, significant differences in stock returns across trading days in the market. The day-of-the-week effect is a form of market anomaly. Such anomalies are often tested for developed markets, and there is limited evidence from emerging markets; hence, there is a need for research on the informational efficiency of the Mauritian stock market. The day-of-the-week effect contradicts the Efficient Market Hypothesis (EMH), according to which no investor has the ability to earn consistently excessive returns over a long period. For many decades, researchers have tried to determine whether it is possible to predict the future course of stock prices. According to Fama (1970), efficient market theory states that “prices reflect all available information.” In a perfectly efficient market it is impossible to outperform the market. Fama (1970) distinguished three forms of market efficiency: weak, semi-strong, and strong. However, there also exist so-called calendar effects, which imply that equity returns are not independent of the month of the year, the week of the month, and the day of the week. These are evidence against random walk theory.
We focus particularly on the day-of-the-week effect, which refers to the existence of a pattern in stock returns whereby returns are linked to the particular day of the week. This hypothesis was first introduced by Osborne (1962) and later elaborated on by Lakonishok and Maberly (1990). Originally, French (1980) referred to the tendency of average returns following the weekend to be negative. This phenomenon has been observed in many countries (both developed and emerging) and in various types of securities. In general it is reported (Cross, 1973; Lakonishok and Levi, 1982; Rogalski, 1984) that there are systematically negative returns on Monday and systematically positive returns on Friday, referred to as the Monday effect or the weekend effect. Identifying the nature of calendar anomalies, if any, is of great importance to participants in the Mauritian stock market, as it implies that investors could design specific trading strategies to reap abnormal profits from these seasonal regularities. The objective of this study is to examine the day-of-the-week effect on the Stock Exchange of Mauritius (SEM), using the daily returns of the Stock Exchange of Mauritius index (SEMDEX(1)) over the period July 5, 1989 to June 30, 2008. This phenomenon constitutes a form of market anomaly whereby the average daily return of the market is not the same for all days of the week. This study is believed to supplement the literature, as it brings additional evidence on market anomalies from an emerging African stock market, which so far has been subject to little or no research, and also addresses the dummy variable trap issue in market anomalies modelling. The structure of the paper is as follows: section 2 presents a brief review of the literature; section 3 gives an overview of the Mauritian stock exchange; section 4 describes the preferred methodology and discusses the results; and the last section concludes.
The day-of-the-week effect refers to the existence of a pattern in stock returns whereby returns are linked to the particular day of the week. The presence of such an effect would mean that equity returns are not independent of the day of the week, which is evidence against random walk theory. Cross (1973), Lakonishok and Levi (1982), Rogalski (1984), and Keim and Stambaugh (1984) concluded that there are systematically negative returns on Monday and systematically positive returns on Friday. This Monday seasonal is often called the Monday effect or the weekend effect. The hypothesis was first introduced by Osborne (1962) and later elaborated on by Lakonishok and Maberly (1990). Originally, French (1980) referred to the tendency of average returns following the weekend to be negative. This phenomenon has been observed in many countries and in various types of securities. Interestingly, Harris (1986) studied intra-day trading and found that the weekend effect tends to occur in the first 45 minutes of trading as prices fall, while on all other days prices rise during the first 45 minutes.
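The paper itself estimates the effect with GMM; as an illustrative sketch only, plain OLS on synthetic returns (all numbers invented) shows the standard dummy-variable setup and how dropping the intercept avoids the dummy variable trap the study mentions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily returns: 500 trading days, with an (invented) small
# negative Monday drift planted so a "Monday effect" exists in the data
n = 500
day = np.arange(n) % 5               # 0=Mon ... 4=Fri
ret = rng.normal(0.0005, 0.01, n)
ret[day == 0] -= 0.002               # planted Monday effect

# Regression r_t = sum_d b_d * D_dt + e_t.
# Using all five day dummies WITHOUT an intercept avoids the
# dummy variable trap (perfect collinearity with a constant term).
D = np.column_stack([(day == d).astype(float) for d in range(5)])
beta, *_ = np.linalg.lstsq(D, ret, rcond=None)

for name, b in zip(["Mon", "Tue", "Wed", "Thu", "Fri"], beta):
    print(f"{name}: {b:+.5f}")
```

Because the dummy columns are orthogonal, each coefficient b_d is simply the mean return on day d; testing the effect amounts to testing whether these coefficients differ significantly from one another.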
Investigating the Sensitivity of Variations in the Taste Distribution to Advertising Dynamics and Pricing Strategies in a Dynamic Duopoly
Yao-Hsien Lee, Institute of Management of Technology, Chung Hua University, Taiwan
Chien-Shiun Chen, Institute of Management of Technology, Chung Hua University, Taiwan
Sheu-Chin Kung, Institute of Management of Technology, Chung Hua University, Taiwan
This paper modifies the model of Piga (1998) by introducing taste diversity and adopting a more flexible distributional form for consumers’ taste differences in the framework of perfectly cooperative advertising. The results show that increasing the value of taste diversity leads to higher prices, advertising levels, and numbers of consumers. We use numerical illustrations to present managerial implications and give some insight into steady-state prices and advertising in the context of an asymmetrical open-loop information structure. Over the last 40 years there has been growing interest in applying differential game models to investigate optimal competitive dynamic advertising and pricing decisions. There is now a considerable body of literature on optimal pricing and advertising; the reader is referred to the survey articles by Sethi (1977), Little (1979), Feichtinger and Jorgensen (1983), Eliashberg and Chatterjee (1985), Dolan et al. (1986), Jorgensen (1986), Feichtinger et al. (1994), Erickson (1995), and Dockner et al. (2000, chapter 11). This literature indicates that previous advances in understanding oligopolistic dynamic behavior stem from two primary sources: noncooperative (or cooperative) game theory and economic dynamics. However, elements from both areas have been brought together in economics and management science mostly in dynamic models of noncooperative games. The model-based literature on dynamic cooperative advertising is sparse. A recent attempt to study cooperative advertising in a dynamic game framework is Piga (1998), who analyzes a differential game of duopolistic competition with a differentiated product in which firms can use advertising and price as competitive tools.
Piga assumes a market in which advertising can be either cooperative or predatory (see, e.g., Friedman (1983); Fershtman (1991); Martin (1993); Slade (1995)), because advertising can both increase market size and affect market shares. Following the linear approach of Hotelling (1929), consumers are assumed to be uniformly distributed on the interval [0,1] with a density equal to N consumers per unit length; hence, the total number of consumers in the market is N. This implies that the demand side of the market consists of a large number of consumers with identical tastes and income levels. As a consequence, the author cannot address the dependence of the firms’ choices of price and advertising, and the corresponding number of consumers, on the taste and income parameters. The possibility of exploring this dependence by conducting a comparative-static analysis, showing how the firms’ choices of price and advertising change when taste-distribution variations are included in the model, has therefore been ignored. This paper modifies the model of Piga by introducing a consumer taste diversity parameter and adopts the assumption that each consumer is indexed by a taste parameter in the asymmetric case with perfectly cooperative advertising. Our objective is to examine the impact of taste diversity on the firms’ choices of price and advertising and on the number of consumers in the market. It has seldom been the case that both pricing and advertising strategies are fully included in a dynamic model of advertising and product differentiation, as is done here. By adopting a dynamic differential game framework as in Piga, we can provide a full analytical characterization of the equilibria and compare the market outcomes with those of Piga’s model. It is of interest to note that the occurrence of higher prices, advertising levels, and numbers of consumers contrasts with findings established in Piga’s setting, in which the taste diversity issue was not addressed.
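For readers unfamiliar with the Hotelling setup that Piga borrows, the static textbook demand split illustrates why identical tastes make demand depend only on prices. This is a sketch assuming firms located at the two endpoints of the unit interval, prices p_1 and p_2, and a linear transport cost t per unit of distance; none of these parameters is taken from Piga’s dynamic model:

```latex
% Location of the consumer indifferent between the two firms:
\hat{x} = \frac{1}{2} + \frac{p_2 - p_1}{2t},
\qquad
D_1 = N\,\hat{x}, \quad D_2 = N\,(1 - \hat{x}),
\qquad D_1 + D_2 = N .
```

With consumers uniform on [0,1] at density N, the split depends only on the price gap; introducing a taste-diversity parameter is precisely what makes the split, and hence equilibrium prices and advertising, respond to the taste distribution.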
We believe the issue is interesting from both a theoretical and a managerial point of view. Although some recent studies of cooperative advertising in a dynamic marketing channel include dynamic effects of advertising (e.g., Chingtagunta and Jain (1995), Jorgensen and Zaccour (1999), Jorgensen et al. (2000), and Jorgensen et al. (2001)), these works did not address the taste diversity issue. This paper will not provide a detailed state-of-the-art report on all these developments; we confine ourselves to investigating the nature of dynamic advertising and pricing strategies, applying a differential game in the presence of variations in the taste distribution. The rest of the paper is organized as follows. In section 2, we modify the model of Piga by incorporating taste diversity to show how the firms’ choices of price and advertising and the number of consumers vary with it. In section 3, we compare the results obtained in the paper with those of Piga and interpret the comparison through a numerical illustration.
The Effect of IMC on Brand Image of Laptops/Notebooks
Ying-Chu Lu, National Cheng-Kung University, Taiwan
Integrated Marketing Communication (IMC) is an evolutionary theory devised in the 1990s that combines the use of advertising (AD), public relations (PR), direct marketing (DM), sales promotion (SP), and personal selling (PS). The goal of this research is to find out how IMC, with its attributes of “one voice, two-way communication, umbrella of all marketing mix,” works on brand images in the competitive laptop industry. This research investigates (1) how to generate a marketing mix to achieve synergy and (2) the relationship between IMC implementation performance and brand image. From the 1970s to the 1980s, US entrepreneurs developed an evolutionary theory of advertising in response to the economic recession. The American Association of Advertising Agencies (4A) named it “New Advertising,” while ad industry veterans called it the “Whole Egg” or “orchestration” (Kalish, 1990). This new approach sought to make advertising and communications more consistent and coherent. However, the fragmentation of media, new communication technologies, segmentation techniques, and database applications threatened the traditional dependence on advertising agencies (Hutton, 1996), leading to the rise of integrated marketing communication (IMC). In 1950 the master of advertising, David Ogilvy, articulated the importance of brand image. He stated that every advertisement should contribute something useful to the brand image; because it is the brand the company is trying to sell for years, advertisements should project the same image year after year, which is difficult to achieve. Ogilvy’s famous description of the importance of brand image: “What you say in advertising is more important than how you say it!” (Ogilvy, 1963). IMC involves integrating the marketing mix on the basis of the brand, especially when entrepreneurs face the global market (Kevin, 1999; Schultz, 2001). A. C. Shu (1999) stated that the total value of integrated marketing communication (IMC) is to create brand equity and brand image.
The mission of IMC is to create brand equity and brand image; these constitute the total value on which entrepreneurs build their IMC programs. Brand equity and brand image derive from the concept of brand coordinates: brand equity is the vertical axis, focusing on vertical marketing for sustainable operation and competitive advantage, while brand image is the horizontal axis, focusing on horizontal marketing for joint ventures that maximize the sharing of market resources. According to MIC, Taiwan's shipment volume leads the world laptop/notebook industry, with more than 90% market share. Brand concentration in the laptop/notebook industry is high: the top ten laptop/notebook brands hold more than 85% of the market, well over the 50% share held by the top ten desktop computer brands. MIC estimates that laptops and notebooks will replace desktop computers at an increasing rate. Moreover, competition in the laptop/notebook industry is intensifying as smaller and cheaper laptops/notebooks become popular with consumers. The goal of this research is to determine how IMC, with its attributes of “one-voice, two-way communication, umbrella of all marketing mix,” works on brand image in the competitive laptop industry. In the past, most research in the area of IMC has focused on the use of the marketing mix, case by case, but little research has been done on the performance of IMC implementation from the consumers' side. Thus, this research investigates (1) how to generate a marketing mix to achieve synergy and (2) the relationship between IMC implementation performance and brand image. The laptop/notebook industry combines the electronics, industrial design, information technology, and mobile telecommunications industries.
In the future, the need for mobile telecommunications will only increase, so the laptop/notebook computer sector is becoming more and more important and competitive. We expect this research to help (1) marketers, by describing the different effects of each IMC marketing-mix tool on laptop brands; (2) marketers, by showing how to generate and use an IMC marketing mix to achieve synergy and improve brand image; and (3) industrial designers, by describing consumers' preferences in choosing, purchasing, and using laptop/notebook products.
Developing Hierarchical Structure for Assessing the Impact of Innovation Factors on a Firm’s Competitiveness - A Dynamic-Capabilities Approach
Dr. Shyh-Hwang Lee, Shu-Te University, Taiwan
The traditional corporate resource-based perspective held that competitive advantage rested on a variety of mainstream elements related to basic core values such as quality, cost and timeliness. Nowadays, innovation has become an important additional factor in the challenge to create and sustain competitive advantage in a rapidly changing business environment. However, there have been inadequacies in the conceptualization and operationalization of organizational innovation constructs and their effective link to competitive advantage. Using a dynamic-capabilities approach that characterizes a firm as a collection of resources and innovation capabilities, this paper develops a hierarchical structural model linking innovation and core values to a firm's competitive advantage. Innovation factors are then prioritized, by applying an analytic hierarchy process integrated with a fuzzy approach, to clarify the extent of their various impacts on competitiveness. A case application from Taiwan's IC design industry is demonstrated in the results section. The structure can be further utilized to assess the competitiveness of the firm itself. Managers and organizational researchers have long been concerned with how to build, evaluate and sustain the competitive advantage of a given firm (Porter, 1980; Barney, 1986; Oral, 1986; Peteraf, 1993; Wiggins & Ruefli, 2002). Throughout the 1980s and ’90s, business management was governed by the attractiveness of the sectors in which a company was competing and by the competitive position of the company in those sectors. Competitive advantage rested on a variety of mainstream values such as quality, cost and timeliness during these two decades (McGahan and Porter, 1999, 2002; López, 2005). Although each factor remains important, it is unlikely, by itself or as part of a group, to provide a sustainable competitive advantage.
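The prioritization step described above can be illustrated with a standard (crisp) AHP priority computation; the fuzzy variant used in the paper replaces crisp judgments with fuzzy numbers, but the priority-and-consistency mechanics are the same. The pairwise-comparison matrix below is hypothetical, not taken from the paper.

```python
import numpy as np

# Hypothetical pairwise-comparison matrix for three innovation factors
# (Saaty scale: A[i, j] = relative importance of factor i over factor j).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Priority weights: principal eigenvector of A, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = eigvecs[:, k].real
w = w / w.sum()

# Consistency ratio (CR): judgments are usually accepted when CR < 0.10.
n = A.shape[0]
lambda_max = eigvals.real[k]
ci = (lambda_max - n) / (n - 1)
ri = 0.58          # Saaty's random index for n = 3
cr = ci / ri

print("weights:", np.round(w, 3))
print("CR:", round(cr, 3))
```

In a hierarchical structure, weights computed this way at each level are multiplied down the tree to yield the global priority of each innovation factor.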
The Relationship Between School Success and the Emotional Intelligence of Primary School Headmasters and Teachers
Prof. Dr. Erol Eren, Dean, Beykent University, Turkey
Dr.Ercan Ergun and O. Cumhur Altýntas, Gebze Institute of Technology, Turkey
This study investigates the relationship between emotional intelligence and school success. To this end, empirical research has been conducted on a sample of primary school headmasters and teachers. Their emotional intelligence has been assessed and its relation to students’ success on the secondary school examination has been investigated. According to the results, a significant relationship exists between primary school headmasters’ and teachers’ emotional intelligence and their schools’ success. The number of research studies related to emotional intelligence has increased in recent years (Ashkanasy & Daus, 2002). The lack of satisfying results regarding the effect of the Intelligence Quotient (IQ) on work performance in several prior research studies, however, has directed researchers to search for a relationship between emotional intelligence and work performance (Goleman, 2001). In this context, the aim of this study is to analyze the relationship between emotional intelligence and a school’s success. In this study, the emotional intelligence concept and its relationship with school success are addressed using the results of research among the headmasters of primary schools in the Isparta city center. Emotional intelligence is described as the “accurate perception, evaluation and expression of emotions; the ability to access and generate emotions that support thinking; the ability to understand emotions and emotional knowledge; and the ability to regulate emotions so as to promote emotional and mental growth” (Mayer & Salovey, 1997: 10). According to this description, emotional intelligence has four main components (Goleman, 2001):
Purchasing Equals Happiness Equals Giving! How Do you Plan to Spend Your Weekend?
Mohammed M. Nadeem, Ph.D., National University, San Jose, CA
Happiness and peace of mind are attained by giving them to someone else. If happiness is incomplete until it is shared, can consumers put a price on happiness? Aaker and Liu (2008) argued that when it comes to happiness and giving, we should think about time, and not necessarily about money, to make others happy: beyond money, the volunteering of time and expertise (Aaker and Liu, 2008) deserves more attention because of its potential, particularly since economic growth by itself is certainly not enough to guarantee consumers' well-being, and since nothing in life is as important as we think it is while we are thinking about it (Kahneman, 1998). If greater wealth implies greater happiness only at quite low levels of income, then the measurement of contentment becomes a reasonable inquiry. This research explores how consumers' wealth levels have a limited impact on happiness, and how people primed with time (Aaker and Liu, 2008) become more focused on high- rather than low-level goals, owing to the inherent association between time intentions and the future (Trope and Liberman 2003). This research mainly examines the question: if money doesn't necessarily buy happiness, what does? The final sections discuss the limitations of this exploratory study, providing conclusions and ideas for future research on how purchasing equals happiness equals giving. Research on time and money, two fundamental resources in people’s lives, has enjoyed much resonance lately, particularly in the domains of decision making, the psychology of discount rates, and the valuation of future possibilities (e.g., Loewenstein 1987; Malkoc and Zauberman 2006; Zauberman and Lynch 2005). However, scant research has examined the downstream effects of asking individuals a simple question related to time or money, such as “How much time are you willing to donate?” or “How much money are you willing to donate?” What types of mindsets are activated when one thinks about time versus money?
Relationships among Human Capital, Human Liabilities, Self-Efficacy, and Job-Search Intensity
Dr. Yih-Yuan Yang and Dr. Ying-Chieh Yang, ChungChou Institute of Technology, Taiwan
The purpose of this study is to explore predictors of obtaining employment. The study includes six hypotheses and one research question. Through path analysis, all hypotheses are supported by the findings. The survey was sent to 800 respondents, and 623 questionnaires were usable, for a response rate of 77.9%. The effect of individuals' constraint factors (human liabilities) on their job-search intensity is indirect and negative. Self-efficacy mediates the relationship between human liability and job-search intensity, because human liability negatively influences job-search intensity through self-efficacy. Similarly, human capital has an indirect impact on job-search intensity. The indirect effect of human liability on self-efficacy through human capital is stronger than the direct effect of human liability on self-efficacy. Therefore, human capital mediates the impact of human liability on self-efficacy. Finally, the study discusses the limitations of the job-search intensity model and directions for future research related to job-search intensity. From the subprime mortgage crisis, the Lehman Brothers bankruptcy and Merrill Lynch's sale to Bank of America to the collapse of Washington Mutual, the financial and economic crash of 2008 not only damaged the fiscal and monetary systems of the U.S. and Europe, but also resulted in a major geopolitical setback for these countries (Altman, 2009; Morris, 2008). Moreover, James (2009) points out that the great crisis caused a brutal recession in the G-8 (the group of highly industrialized states): consumers and businesses in those countries were deeply frightened by the crisis and, in response, sharply retrenched. Not only did these causes lead to millions of people being laid off, but they also strongly impacted the global job market.
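The indirect effect described above (human liability lowering job-search intensity through self-efficacy) is the product of the two path coefficients in a path analysis. The sketch below illustrates that product-of-paths logic on synthetic data wired to mimic the reported signs; the variable names and coefficients are illustrative, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 623  # usable responses reported in the abstract

# Synthetic standardized scores wired to mimic the reported signs:
# human liability lowers self-efficacy; self-efficacy raises intensity.
liability = rng.standard_normal(n)
efficacy = -0.4 * liability + rng.standard_normal(n)
intensity = 0.5 * efficacy + rng.standard_normal(n)

def ols(y, *xs):
    """OLS coefficients (after the intercept) of y on the given regressors."""
    X = np.column_stack([np.ones(len(y)), *xs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

a = ols(efficacy, liability)[0]                     # path: liability -> efficacy
b, c_direct = ols(intensity, efficacy, liability)   # paths into intensity
indirect = a * b                                    # product-of-paths indirect effect

print(f"a={a:.3f}  b={b:.3f}  direct={c_direct:.3f}  indirect={indirect:.3f}")
```

A negative `a` times a positive `b` yields the negative indirect effect the abstract reports.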
The Mediating Influence of Service Quality Satisfaction and Information Trust on the e-CRM Process Model: An Empirical Bank Marketing Research
Dr. Shu-Fang Lai and Dr. Ying-Chien Hsiao, Takming University of Science and Technology, Taiwan
Dr. Yi-Feng Yang, Shu-Te University, Taiwan
Yuan-Chih Huang, Takming University of Science and Technology, Taiwan
I-Chao Lee, Department of Business Administration, Kao-Yuan University, Taiwan
This study aims to achieve a better understanding of a natural e-CRM phenomenon, namely the mediating effect of the marketing interaction relationship on the performance of the e-CRM process. A total of 300 questionnaires were distributed in the summer of 2006 to four Taiwanese banks that had applied an e-CRM system to their customer service operations. Mediator hierarchical regressions were applied to the results, and it was discovered that when institutions used e-CRM services with web-based applications to create and raise levels of service quality satisfaction and information trust, this resulted in improvements in customer interaction, potentially helping the institution achieve the so-called “profit-maximizing portfolio” level. Our finding of this mediating effect in the e-CRM process extends previous insights on the leadership process to the new field of e-CRM study. Most empirical banking systems experience some degree of inefficiency in their business operations and performance, especially with regard to managing the quality of buyer-seller relationships and service marketing. Many studies have concluded that these inefficiencies arise for the following major reasons: the service is very complex and customized in the transaction process; the external environment is dynamic rather than static; and marketing is uncertain when sellers do not know what service buyers want. To respond to this challenge, many studies have suggested considering the concept of customer relationship management (CRM). The concept of CRM is often defined as the marketing interaction relationship between buyers and sellers (Christopher, Payne and Ballantyne, 2002; Ryals and Knox, 2001).
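A mediator hierarchical regression of the kind mentioned above enters the predictor first and the mediator second; mediation shows up as a shrunken predictor coefficient and a jump in explained variance. The sketch below demonstrates that pattern on synthetic data; the variable names and effect sizes are illustrative assumptions, not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300  # number of questionnaires distributed in the abstract's survey

# Synthetic standardized scores wired so that service satisfaction mediates
# the effect of e-CRM use on customer interaction (names are illustrative).
ecrm = rng.standard_normal(n)
satisfaction = 0.6 * ecrm + rng.standard_normal(n)
interaction = 0.7 * satisfaction + 0.1 * ecrm + rng.standard_normal(n)

def fit(y, *xs):
    """OLS fit; returns (coefficients after the intercept, R^2)."""
    X = np.column_stack([np.ones(len(y)), *xs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    r2 = 1 - resid.var() / y.var()
    return beta[1:], r2

# Step 1: regress the outcome on the predictor alone.
(b1,), r2_1 = fit(interaction, ecrm)
# Step 2: add the mediator in a second hierarchical block.
(b2, bm), r2_2 = fit(interaction, ecrm, satisfaction)

print(f"step 1: b={b1:.3f}, R2={r2_1:.3f}")
print(f"step 2: b={b2:.3f}, mediator={bm:.3f}, delta R2={r2_2 - r2_1:.3f}")
```

The drop from `b1` to `b2`, together with a significant mediator coefficient, is the signature of a mediating effect.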
An Empirical Study on Operation Efficiency for Metropolitan International Hotels in Taiwan
Wong Chao-Tung, Doctoral Candidate, Dr. Chen Chia-Yon, Professor
Cheng Jung-Feng, Doctoral Candidate
National Cheng Kung University, Taiwan
Data Envelopment Analysis (DEA) measures the relative efficiency of multiple inputs and outputs across multiple decision-making units, and the efficiency value evaluated by this method is the most advantageous outcome under objective assessment. Fierce competition in Taiwan's metropolitan international hotel industry is evidenced by an average overall technical efficiency of 0.892984 as of 2006, derived by applying the CCR and BCC models of DEA respectively. By region, overall technical efficiency ranks Taipei city first, followed by Hsinchu city, Tainan city and Taichung city, with Kaohsiung last; their overall technical efficiency scores are 0.934498, 0.934017, 0.894201, 0.836523 and 0.804123, respectively. Efficiency improvements for international hotels of relatively low efficiency can be derived from slack variable analysis. Efficiency analysis using the BCC (Banker, Charnes, and Cooper) model reveals an average pure technical efficiency of 0.91546 and an average scale efficiency of 0.974895. Further analysis of operational inefficiency shows that 25 hotels are inefficient in terms of pure technical efficiency, while 28 hotels are inefficient in terms of scale efficiency, indicating that the shortfall in scale efficiency is greater than that in pure technical efficiency. Few studies adopt sensitivity analysis. According to the sensitivity analysis applied to the CCR model, Taiwan's metropolitan international hotels have competitive edges in input factors such as restaurant costs and in output factors such as hotel room revenue and restaurant revenue, but lack such edges in input factors such as the number of rooms, advertising expenditure and employees' wages.
Understanding the competitive input and output factors of each hotel, and offering decision-makers useful reference information, can help each hotel maintain its competitive edge. This research provides management with suggestions for decision making and for future management trends.
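The CCR model cited above reduces, for each hotel, to a small linear program: shrink the hotel's inputs by a factor theta while a convex-cone combination of all hotels still produces at least its outputs. The sketch below solves that input-oriented LP with `scipy.optimize.linprog`; the four-hotel dataset is hypothetical, not the paper's.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of unit j0.

    X: (m inputs x n units), Y: (s outputs x n units).
    Solves: min theta  s.t.  X @ lam <= theta * X[:, j0],
                             Y @ lam >= Y[:, j0],  lam >= 0.
    """
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(n + 1)
    c[0] = 1.0                       # decision vector is [theta, lam]; minimize theta
    A_ub = np.zeros((m + s, n + 1))
    b_ub = np.zeros(m + s)
    A_ub[:m, 0] = -X[:, j0]          # X lam - theta * x0 <= 0
    A_ub[:m, 1:] = X
    A_ub[m:, 1:] = -Y                # -Y lam <= -y0  (i.e. Y lam >= y0)
    b_ub[m:] = -Y[:, j0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]

# Hypothetical data: 2 inputs (rooms, staff) and 1 output (revenue)
# for four hotels; not the paper's dataset.
X = np.array([[100.0, 120.0, 150.0, 110.0],
              [ 50.0,  80.0,  60.0,  55.0]])
Y = np.array([[200.0, 220.0, 210.0, 150.0]])

scores = [ccr_efficiency(X, Y, j) for j in range(X.shape[1])]
print([round(v, 3) for v in scores])
```

A score of 1 marks a hotel on the efficient frontier; scores below 1 give the proportional input reduction needed to reach it, which is where the slack-variable improvement targets come from.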
The Role of Employee Resource Groups for Different Sexual Orientation Employees in Corporate America
Gerald D. Hollier, Jr., University of Texas at Brownsville
This paper provides a social and legal framework for validating the Employee Resource Group (ERG) initiative of gay, lesbian, bisexual and transgendered (GLBT) employees as a crucial step toward the realization of workplace equality. It presents and analyzes data related to employment discrimination and employer health benefits. It then offers a conceptual articulation of diversity drawing on the Human Rights Campaign, followed by an application of the Corporate Equality Index. The paper argues that ERGs also help foster a sense of safety and acceptance for GLBT employees within the workplace. These groups provide a clear line of communication between GLBT employees and management, ensuring that policies and practices have their intended effect. GLBT ERGs have been involved in policy-making, providing input on marketing and workplace protection policies, attracting and retaining talented individuals, leadership development, cultural change and representation at external events. Today, gays and lesbians are found in every type of organization, institution, career field, and profession. In the United States today, it is legal in 30 states (in the private sector of the workplace) to fire, decline to hire or promote, or otherwise discriminate against an employee merely because of his or her sexual orientation or perceived sexual orientation. At present, no federal U.S. law addresses protection from discrimination on the basis of sexual orientation for private employers. Although not required by law, thousands of employers already provide equitable benefits to their gay and lesbian employees. Many of the Fortune 500 companies and other successful businesses have been the quickest to adopt gay- and lesbian-inclusive policies.
Factors Influencing International Students’ Evaluations of Higher Education Programs
Dr. Jose María Cubillo-Pinilla, Universidad Politécnica de Madrid, Spain
Dr. Javier Zuniga, ESIC Business & Marketing School, Spain
Dr. Ignacio Soret Losantos, ESIC Business & Marketing School, Spain
Dr. Joaquín Sanchez, Universidad Complutense de Madrid, Spain
The purpose of this study is to analyze the factors that influence the decision-making process of overseas students. In particular, we focus on the influence of the country's and the institution's image on the evaluation of the academic program. This study shows a positive relationship between the country's and the institution's image, as perceived by prospective students, and their evaluation of academic programs. The country image exerts a strong influence on the institution's image and, to a lesser extent, on the program evaluation. Likewise, a significantly positive relationship is evidenced between the institution's image and the program evaluation. As a result of growth in the global economy, and in order to fulfill business requirements for international competence and skills, university graduates often consider doing graduate studies abroad to improve their skills and capabilities. In recent years, the number of students in search of higher education (HE) programs abroad has risen sharply. Data from UNESCO for the 2002/2003 academic year showed more than 2.1 million graduate students studying outside their home countries worldwide, a 40% increase from the 1.5 million in the 1989/1990 academic year.
Integrating Leadership Development and Continuous Improvement Practices in Healthcare Organizations
Stewart L. Tubbs, Ph.D., Eastern Michigan University
Brock Husby, The University of Michigan
Laurie Jensen, Henry Ford Health System
In the 2008 American presidential race, three issues were uppermost in the minds of voters: the economy, the war in Iraq, and healthcare. In addition, Zakaria (2008) makes a compelling case that all American business organizations are in danger of falling behind those of other countries, not because of our declines, but because the others are improving at a faster rate than we are. This paper addresses possible methods for improving American healthcare organizations using a Systems Approach (Tubbs, 2009; Atwater and Stevens, 2008). Specifically, it addresses proven methods for successfully integrating Leadership Development and Continuous Improvement practices in health care organizations. In 1999, the Institute of Medicine (IOM) published its report To Err Is Human (Corrigan and Donaldson, 2000), describing a fragmented United States healthcare system that was laden with errors and grossly unsafe for patients. The report detailed a comprehensive strategy by which government, health care providers, industry, and consumers could improve quality and safety, with a goal of reducing medical errors by 50% in five years. The IOM followed up this report in 2001 with Crossing the Quality Chasm (Berwick, 2002; Leape and Berwick, 2008), outlining six dimensions of quality necessary to achieve its quality improvement goals: safety, effectiveness, efficiency, patient-centered practices, timeliness and equity. Today, nine years later, there is still little evidence of significant progress in achieving the 50% improvement sought by the IOM. All the while, the U.S. continues to spend more than twice as much on health care as other advanced industrialized nations, while falling well short of achievable benchmark measures of quality care, underscoring the urgency of a new archetype to transform quality and continuous improvement in healthcare (Anderson, 2004).
Complexities of Achieving a Single Pharmaceutical Market in the European Union
Aysegul Timur, Ph.D., Kenneth Oscar Johnson School of Business, Hodges University, Naples, FL
The pharmaceutical industry has complex characteristics, not only in the European Union (EU) but also elsewhere. A defining difference from other industries is that several third parties, besides the manufacturer and consumer, are involved on both the demand and supply sides. Regulating the pharmaceutical industry is a particularly difficult challenge for policy makers, who seek low health care costs and affordable drugs but also want access to the highest-quality medicines and, more generally, a successful industry. All these complex characteristics magnify the difficulty of achieving a single market in pharmaceuticals through the market integration process of the 27 nations of the European Union. Pharmaceutical policy is still primarily determined at the national level, owing to differences in health care systems and in pricing and reimbursement regulations, but there is some evidence of movement toward a “European Community” and the harmonization of markets across the members through the European Commission’s expanding role. The pharmaceutical industry in the European Union has seen a great deal of effort on the part of member countries to harmonize disparities among health care systems and regulation and reimbursement practices. The purpose of this paper is to look for evidence of progress toward the single market by examining price differences in five major pharmaceutical markets. The results show that progress toward a single market is evident, in that price differentials decreased between 1994 and 2003, relative to a low-priced country, Spain.
Hybrid Approach in Neural Network Design Applied to Financial Time Series Forecasting
Dr. Chokri Slim, Manouba University, ISCAE, Tunisia
Artificial Neural Networks (ANNs) have been successfully applied to financial time series forecasting; they offer greater computational power than classical linear models and can detect complex behavior in the data. Several algorithms have been proposed for choosing the best architecture and training the network. However, one of the main difficulties of ANN design is the selection of an adequate architecture. In this paper, we propose a hybrid approach based on a genetic algorithm (HNN) for the selection of an optimal architecture and learning parameters for the ANN. The evaluation platform for the novel model involves comparisons with a classical back-propagation neural network (CBNN), a stochastic neural network (SNN) and a fuzzy neural network model (FNM). Daily data on the TUNINDEX from the Tunisia Stock Exchange are collected for technical analysis. The results show that the proposed HNN yields networks that require a lower computational cost and perform consistently well compared with classical search. Statistical methods and neural networks are commonly used for financial time series forecasting. Empirical studies have shown that neural networks outperform linear regression (Arn, 1993; White, 1992), since stock markets are complex, nonlinear, dynamic and chaotic (Trippi, 1996). Neural networks are reliable for modeling nonlinear, dynamic market signals (Baum et al., 1988; Eiton, 1993). A neural network makes very few assumptions, as opposed to the normality assumptions commonly found in statistical methods, and can perform prediction after learning the underlying relationship between the input variables and outputs. The back-propagation neural network is commonly used for price prediction.
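The genetic-algorithm idea described above can be sketched minimally: encode a candidate architecture as a chromosome, score it on a validation window, then select, cross over and mutate. This sketch is not the paper's HNN; it evolves only the hidden-layer size, scores candidates with a cheap random-feature proxy instead of full back-propagation training, and uses a synthetic series standing in for the TUNINDEX.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic daily "index" series standing in for TUNINDEX (illustrative only).
t = np.arange(400)
series = np.sin(0.05 * t) + 0.1 * rng.standard_normal(400)

# Sliding windows: predict the next value from the previous 5.
lag = 5
Xw = np.array([series[i:i + lag] for i in range(len(series) - lag)])
yw = series[lag:]
split = 300
Xtr, ytr, Xva, yva = Xw[:split], yw[:split], Xw[split:], yw[split:]

def fitness(hidden):
    """Validation MSE of a 1-hidden-layer net with random tanh features
    and a least-squares output layer (a cheap proxy for full training)."""
    W = rng.standard_normal((lag, hidden)) / np.sqrt(lag)
    H_tr, H_va = np.tanh(Xtr @ W), np.tanh(Xva @ W)
    beta, *_ = np.linalg.lstsq(H_tr, ytr, rcond=None)
    return np.mean((H_va @ beta - yva) ** 2)

# Genetic search over the hidden-layer size (chromosome = one integer).
pop = rng.integers(2, 33, size=10)
for _ in range(15):
    scores = np.array([fitness(h) for h in pop])
    parents = pop[np.argsort(scores)[:5]]                  # selection: keep best half
    children = [(a + b) // 2 for a, b in zip(parents, np.roll(parents, 1))]  # crossover
    children = [max(2, c + rng.integers(-2, 3)) for c in children]           # mutation
    pop = np.concatenate([parents, children])

best = int(pop[np.argmin([fitness(h) for h in pop])])
print("selected hidden units:", best)
```

A full HNN would extend the chromosome with learning-rate and layer-count genes and replace the proxy fitness with actual back-propagation training.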
Corporate Governance and Earnings Management: An Empirical Study of the Saudi Market
Dr. Mohammed A. Al-Abbas, King Khalid University, Saudi Arabia
This paper examines the association between corporate governance mechanisms and earnings management in the Saudi business environment, utilizing a sample of Saudi joint stock companies for 2005, 2006 and 2007. Earnings management is measured by current abnormal accruals using Teoh et al.’s model (1998). Regression analysis examines the relationship between earnings management and corporate governance variables, including board composition, board independence, separation between the responsibilities of the Chief Executive Officer (CEO) and the Chairperson, and the composition and independence of audit committees. In addition, auditor size and a number of other variables are included to control for other influential factors. The results provide no evidence that corporate governance factors mitigate earnings management in the Saudi environment. However, audit firm size is negatively related to abnormal accruals, which indicates that audit firm size is an important factor with regard to the extent of earnings management. The results highlight the need to enhance the legitimacy of corporate governance in Saudi corporations. In addition, the study provides insights into the role of audit quality in mitigating earnings management, which ought to be considered by audit committees when selecting audit firms.
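Abnormal-accruals measures of the kind used above are, in general form, residuals from a regression of accruals on scaled fundamentals. The sketch below shows that general Jones-type logic on synthetic data; it is a simplified illustration, not Teoh et al.'s (1998) exact specification, and every number in it is fabricated for the example.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 120  # synthetic firm-year observations (illustrative only)

# All variables are scaled by lagged total assets, as in Jones-type models.
lagged_assets = rng.uniform(50.0, 500.0, n)   # lagged total assets
d_sales = rng.normal(0.10, 0.05, n)           # change in sales / assets
d_recv = rng.normal(0.02, 0.02, n)            # change in receivables / assets
current_accruals = 2.0 / lagged_assets + 0.8 * d_sales + rng.normal(0.0, 0.03, n)

# Estimation regression (a simplified Jones-type model):
#   CA_it = a * (1 / TA_{i,t-1}) + b * dSales_it + e_it
X = np.column_stack([1.0 / lagged_assets, d_sales])
coef, *_ = np.linalg.lstsq(X, current_accruals, rcond=None)

# "Normal" accruals use the credit-sales-adjusted revenue change; the
# residual is the abnormal (discretionary) current accrual.
normal = coef[0] / lagged_assets + coef[1] * (d_sales - d_recv)
abnormal = current_accruals - normal

print("mean abnormal current accrual:", round(float(abnormal.mean()), 4))
```

In a study like the one above, these residuals would then serve as the dependent variable in regressions on governance and auditor-size variables.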