The Journal of American Academy of Business, Cambridge
Vol. 24 * Num. 1 * September 2018
The Library of Congress, Washington, DC * ISSN: 1540 – 7780
Online Computer Library Center, OH * OCLC: 805078765
National Library of Australia * NLA: 42709473
Peer-Reviewed Scholarly Journal
Most Trusted. Most Cited. Most Read.
All submissions are subject to a double-blind peer review process.
The primary goal of the journal is to provide opportunities for business-related academicians and professionals from various business fields around the globe to publish their work in one source. The Journal of American Academy of Business, Cambridge brings together academicians and professionals from all business-related fields to interact with members inside and outside their own disciplines. The journal provides opportunities to publish researchers' papers as well as to view others' work. All submissions are subject to a double-blind peer review process. The Journal of American Academy of Business, Cambridge is a refereed academic journal which publishes scientific research findings in its field under ISSN 1540-7780, issued by the Library of Congress, Washington, DC. The journal meets the quality and integrity requirements of applicable accreditation agencies (AACSB, regional) and journal evaluation organizations to ensure that our publications provide our authors with publication venues recognized by their institutions for academic advancement and academically qualified status. No manuscript will be accepted without the required format. All manuscripts should be professionally proofread/edited before submission; after the manuscript is edited, the editing certificate must be sent to us. A professional proofreading/editing service such as www.editavenue.com may be used. The manuscript should also be checked through plagiarism detection software (for example, iThenticate, Turnitin, Academic Paradigms, LLC Check for Plagiarism, or Grammarly Plagiarism Checker), and the certificate with the complete report should be sent with the submission.
The Journal of American Academy of Business, Cambridge is published two times a year, in March and September. E-mail: firstname.lastname@example.org; Journal: JAABC. Requests for subscriptions, back issues, and changes of address, as well as advertising inquiries, can be made via the e-mail address above. Manuscripts and other materials of an editorial nature should be directed to the Journal's e-mail address above. Address advertising inquiries to the Advertising Manager.
Copyright 2000-2018. All Rights Reserved
Chick-Fil-A Goes International
Alexandra Crozier, Sam Houston State University, TX
Dr. Diana Brown, Sam Houston State University, TX
Dr. Joey Robertson, Sam Houston State University, TX
The company that brought us those likable illiterate cows is planning to expand internationally and to share its tasty chicken sandwiches with the rest of the world. This paper analyzes the potential markets that Chick-fil-A may enter as it expands globally. It examines the cultures, religions, and societal values in these new markets and analyzes whether these factors will serve to hinder or facilitate Chick-fil-A’s global march to market domination. How will Chick-fil-A successfully navigate the international marketplace and succeed in new markets including Islamic nations and countries like India and China? In Hapeville, Georgia, Truett Cathy and his brother Ben Cathy started a restaurant in 1946 under the name of Dwarf House. (Bovino, 2011). Eighteen years later, Truett Cathy came up with the now well-known chicken sandwich after having discovered the pressure cooker, which allowed him to fry a chicken breast in just four minutes. (Privco, 2014). Using this innovative method, Truett Cathy opened the first restaurant in a Georgia shopping mall. This quick preparation approach and intelligent restaurant placement helped to pave the way for the food courts in modern malls all over America. (Bovino, 2011). Truett Cathy’s vision of Chick-fil-A is embodied in his assertion that “Nearly every moment of every day we have the opportunity to give something to someone else- our time, our love, our resources. I have always found more joy in giving when I did not expect anything in return.” (Cathy, 2011). In accordance with his beliefs on giving, in 1973, Truett Cathy started a scholarship program for Chick-fil-A employees. In the years to follow, Chick-fil-A became a leading example for fast food restaurants and chains by selling chicken nuggets all over the U.S. (Privco, 2014). Over thirty years after starting the Dwarf House, Truett and Ben Cathy finally opened their first free-standing Chick-fil-A restaurant (i.e., not in a shopping mall food court). 
(Privco, 2014). Five years later, Chick-fil-A spread to college campuses, which became their “first brand licensing agreement.” (Privco, 2014). By 1993 Chick-fil-A had extended to 500 different locations. (Privco, 2014). Two years later the iconic Chick-fil-A cows made their debut promoting Chick-fil-A’s memorable slogan “Eat Mor Chikin.” With the rolling in of the new millennium, Chick-fil-A hit the billion dollar sales mark. (Privco, 2014). The following year they established their 1,000th restaurant in Georgia. (Privco, 2014). In 2006, Chick-fil-A hit the two billion dollar sales mark. (Privco, 2014). A few years later, Chick-fil-A ranked 22nd in the top 25 “Customer Service Champs.” (Privco, 2014). Truett Cathy founded Chick-fil-A and instilled his southern roots and strong Christian traditions into his organization. Chick-fil-A's headquarters, which is outside of Atlanta, includes a statue of Jesus washing the feet of a disciple. (Green, 2014). Other religious artwork is on display in the large atrium at the entrance of the building, including Bible quotes and crosses. (Green, 2014). Since he opened his restaurant, Truett Cathy has openly incorporated Christianity into his business, from putting Bible quotes on the Styrofoam sweet-tea cups to closing the entire chain on Sundays. (Green, 2014). Chick-fil-A’s corporate mission statement is, “To glorify God by being a faithful steward of all that is entrusted to us and to have a positive influence to all who come in contact with Chick-fil-A.” (Green, 2014). Owner and founder Truett Cathy has explained that the decision to close on Sundays was based on his belief that the employees must be given the opportunity to spend time with their families and friends, to worship, or to rest as they see fit. (Cathy, 2011). Cathy claims this decision is “part of… [his]…recipe for success” and that this decision to be closed on Sundays was the best decision he made. (Cathy, 2011). Around 17 percent of U.S. 
consumers dine out at quick-service restaurants at least once a month and approximately 20 percent visit quick-service restaurants at least once a week. (Statistica, 2014). In 2014, the revenue of the fast food restaurant industry in the United States was $198.9 billion. (Statistica, 2014). Interestingly, analysis of market data shows that Chick-fil-A generates more revenue in six business days than their competitors do in seven days. (Rennie, 2017). Specifically, since 2010, Chick-fil-A has led the fast food industry in average sales per restaurant, earning an average $4.8 million per restaurant in 2016. (Rennie, 2017). By comparison, McDonald's restaurants generated about $2.5 million in per-unit sales last year, and KFC's brought in about $1.1 million per restaurant. (Peterson, 2017). Overall, the company's sales have exploded, from $6.8 billion in 2015 to nearly $8 billion in 2016, marking 49 consecutive years of sales growth, according to Chick-fil-A. (Peterson, 2017). The Cathy brothers' explanation for Chick-fil-A being closed on Sunday relates both to their employees’ views on spiritual life and a demonstration that Chick-fil-A cares for its employees. (Cathy, 2011). The Cathy brothers believe that being closed on Sunday has attracted more value-based customers who appreciate Chick-fil-A’s firm Christian stance. (Cathy, 2011). Chick-fil-A is also known for their beloved, boisterous spokes-cows. In the late 1990s Chick-fil-A added the mascots to support their advertising initiatives. (Privco, 2014). Chick-fil-A advertises on a market-by-market basis, rather than utilizing nationwide marketing campaigns. (Privco, 2014). Chick-fil-A is now looking to take its chicken sandwich, its Christian values, and its Holstein spokes-cows around the world. 
According to PrivCo, a Private Company Financial Report, the owners of Chick-fil-A have announced their desire for the restaurant to go international, stretching from Tokyo, London, and Mexico, to the Philippines. (Privco, 2014). Chick-fil-A’s primary competitor, at home and soon abroad as well, is the also-iconic McDonald’s. Since there is currently no international data for Chick-fil-A available for review, this paper will utilize McDonald’s data for comparative analysis. Of course, most companies that invest in foreign markets tend to reshape their product and service to properly correspond to foreign markets. (Hackett, 2014). Although each firm must develop its own specialized marketing program in order to meet the demands of international markets, many franchise systems have successfully penetrated foreign markets with little alteration of the domestic marketing strategy. (Hackett, 2014). Thus, although there will be changes made to facilitate marketing overseas, in comparison to their domestic markets, it is possible that not a great deal will change. The main changes that will be encountered will be through the “menu additions and deletions as well as changes in esthetics.” (Hackett, 2014). Much like McDonald’s had to change their menu to suit their diverse customer base, Chick-fil-A may be forced to modify their menu. Thus, the over-arching question is, what will Chick-fil-A keep on their menu when they begin their global expansion? Once Chick-fil-A travels internationally, customers’ food preferences become the main deciding factor with regard to which restaurants survive and which do not. Oldakowski and McEwen describe American corporations that go abroad without adapting to suit the market of interest as suffering from “American arrogance.” (Oldakowski, 2010). 
The salient question then is, have other American fast food chains implemented this strategy of remaining consistent during international expansion, or have they been more willing to modify components of their corporate culture to become successful in foreign countries? (Oldakowski, 2010). Another well-known fast food chain, Kentucky Fried Chicken, was forced out of Bangalore, India, when it first tried to enter the Indian marketplace. In Rick Dolphijn’s article “Capitalism on a Plate: The Politics of Meat Eating in Bangalore, India,” he explains that a riot broke out over a rumor that the vegetarian salads at Kentucky Fried Chicken were prepared using pork fat. Some said the uproar was due to “the brutal invasion of American culture.” (Dolphijn, 2006). The rumor was believed to have been started by the Brahmin, an elite religious class, who are adamantly against eating meat. (Dolphijn, 2006). For the latter part of the 1990s this elite class dominated the center of towns in India, making it very difficult for fast food restaurants to survive. (Dolphijn, 2006).
Dell Appraisal and the Business Judgement Rule
Dr. Donald G. Margotta, Northeastern University, Boston, MA
This paper addresses problems with certain valuation issues in mergers, and “fair value” calculations in appraisal proceedings. All the variables in valuation models have inherent uncertainties, and these are discussed using specific illustrations from the 2016 Dell appraisal decision. The paper shows why these uncertainties are inescapable and why any conclusions based on them must depend on someone’s business judgment. Whose business judgment that should be is the critical question, and after considering several alternatives the paper concludes that the judgment of a corporation’s directors should prevail, a well-established principle known as the business judgment rule. Numerous legal proceedings require estimating the financial value of some asset or corporate decision. For example, litigation in merger issues frequently focuses on the value of the target corporation. Appraisal litigation is a related, but different, procedure that requires the calculation of a “fair value” following the completion of a merger. Also, 10b-5 fraud litigation often focuses on the value of damages resulting from the impact of a fraudulent or misleading statement on a company’s value. Bankruptcy proceedings also involve valuation issues. After evaluating several methodologies used in establishing the value of financial assets, this paper discusses the financial variables involved in such methodologies with the objective of clarifying why valuation differences exist and why they are inescapable. Among the methodologies examined are discounted cash flow, comparables analysis, and market prices. It shows why absolute answers to most valuation issues using any of these methodologies are impossible and why, therefore, someone’s business judgment must prevail. 
After discussing several alternatives for whose judgment should dominate, the paper concludes that the judgment of directors should prevail, contingent, of course, on directors passing the usual judicial scrutiny of their due diligence, good faith, and loyalty in reaching their decisions. Estimates of the value of financial assets can vary widely. For example, in the attempt by Paramount Communications in 1990 to take over Time, Inc., the Delaware Supreme Court was presented with values for Time ranging from $208 to $402 per share, a range the Court said “…that a Texan might feel at home on.”1 An even wider range of values was submitted to the Delaware Chancery Court in the 2016 appraisal proceedings following Dell’s going-private transaction in 2013. In that case at least 65 different estimates of Dell’s “fair value” were submitted to the Court, ranging from $7.25 to $27.05 per share. The percentage difference between the high and low end of these estimates is approximately 93% in the case of Time, and 270% for Dell. At the time a management buyout was first proposed to Dell on June 15, 2012, the market price of Dell was $12.18 per share.2 For comparison, the final price agreed to by Dell’s board on August 2, 2013 was $13.75. This price was approved by a 57% majority vote of shareholders (70% of shares voted) on September 12, 2013.3 The “fair value” awarded to dissenting shareholders by the Delaware Chancery Court was $17.62 on May 31, 2016.4 Understanding why such wide-ranging estimates of value exist requires an understanding of valuation models and of the variables that go into them. Such understanding should help decision makers such as judges and legislators see that decisions in valuation litigation ultimately rely more on legal grounds than on financial ones. To be sure, financial analysis is required, but choosing from the inevitably different results of financial analysis requires a legal judgment. 
Discounted cash flow (DCF) methodology is generally acknowledged to be the most rigorous and most widely used methodology for estimating the value of a company. This discussion focuses on how DCF models are used to calculate value and includes a description of the methodology’s shortcomings, to show why an “accurate” or “true” value of a corporation cannot be obtained using this or any other methodology, and therefore why all valuations ultimately rely on the business judgment of someone. It does not include certain accounting issues that go into the procedure. The value of a corporation, or of any other financial asset, is defined in finance theory as the summation of the discounted values of all future cash flows to infinity (∞) and is generally represented as follows (Ross, 374):

V_0 = Σ_{t=1}^{T} CF_t / (1 + i)^t    (1)

V_0 = Σ_{t=1}^{∞} CF_t / (1 + i)^t    (2)

In Equation (2) the variables are defined as in Equation (1) except in this case, when valuing a company, the end point “T” is now set to infinity. As shown in Equation (2), calculating the value of a company requires forecasting cash flows to infinity, an obviously impossible task, but there are mathematical models which, under certain circumstances, make the calculations easier, though no less imprecise.5 While Equation (2) suggests mathematical exactitude, the variables that go into it, the estimates of future cash flows to infinity and the discount rate “i,” are usually very imprecise and uncertain. Eugene Fama,6 a 2013 Nobel Prize recipient, also notes the uncertainty in discounted cash flow methodology, saying, “Our guess is that whatever the formal approach, two of the ubiquitous tools in capital budgeting (discounted cash flow and payback) are a wing and a prayer, and serendipity is an important force in outcomes.” (Fama and French, 1997). If calculating the value of a capital project, whose duration is finite, is a “wing and a prayer,” then valuing a corporation, which uses the same methodology but with a projected life of infinity, is even more so. 
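To make the discounting in Equation (1) concrete, the finite-horizon sum can be computed directly. The cash flows and discount rate below are invented for illustration and are not figures from the Dell record:

```python
# Minimal finite-horizon DCF: value = sum of CF_t / (1 + i)^t for t = 1..T.
# All inputs are hypothetical.

def dcf_value(cash_flows, i):
    """Present value of forecast cash flows CF_1..CF_T discounted at rate i."""
    return sum(cf / (1 + i) ** t for t, cf in enumerate(cash_flows, start=1))

flows = [100.0, 110.0, 121.0]            # three years of forecast cash flows
print(round(dcf_value(flows, 0.10), 2))  # 272.73
```

The infinite-horizon sum in Equation (2) cannot be computed term by term; in practice the tail beyond the forecast horizon is collapsed into a terminal value.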
Terminal value is the term used to describe the discounted value of future cash flows from a certain point to infinity. For example, if the constant growth period in this example starts after period 3, the terminal value is calculated as follows (Fruhan, 1992):

TV_3 = CF_4 / (i − g), where CF_4 = CF_3(1 + g)

TV_3 is the value of the discounted cash flows at period 3, and represents the discounted value of cash flows from period 3 to infinity. And, since this is the terminal value at Period 3, it must be further discounted to the present, i.e., Period 0, when all the cash flows are finally discounted. The “g” and “i” in this equation, as discussed earlier, represent someone’s business judgment of an appropriate discount rate, “i,” and of an infinite growth rate, “g.” Terminal value is usually the largest component of a company’s valuation, and it is worth noting here to underscore a major point of this paper: the calculations are highly uncertain, since terminal value, the most impactful of all the variables, is also the most uncertain of all, because it depends on an estimate of the growth rate to infinity. The growth rate in the terminal value calculation is usually the most critical of all the inputs in determining the value of a company, i.e., the value of its discounted cash flows. Although there are various ways to estimate a growth rate, it is clearly highly uncertain since the estimate is for the growth rate to infinity. Analysts typically base their estimates on some variable related to the industry in question. For example, population growth might be one factor in estimating the infinite growth rate of a retail clothing company. In the case of Dell, management used its own internal growth estimates as well as those of outside experts specializing in the computer industry. Similarly, outside consultants and experts used estimates drawn from various industry sources, as well as management input. The following excerpt from Dell illustrates some of these estimates. 
During a meeting with analysts on June 12 and 13, 2012, management called for a strong performance from the enterprise solutions and services division, projecting that it would account for 60% of the Company's profits by 2016. Management anticipated 12% annual growth in software sales and 22% growth in services revenue. At the same time, management projected that the Company's end-user computing division would grow at a rate of 2% to 5% annually, and that even if end-user computing experienced a downside scenario of 5% negative growth annually, earnings and operating income from that division still would increase.7 The discount rate used in the DCF formula is also very uncertain. Most practitioners use the weighted average cost of capital (WACC) of the target firm which, as the name suggests, is a weighted average of the cost of a company's debt capital and equity capital (Ross 2016, pages 348-355). While estimates of the cost of debt capital can be reasonably accurate, the cost of the equity portion of a company's capital is highly uncertain. The capital asset pricing model (CAPM) is probably the most common tool used for calculating the cost of equity. While a detailed explanation of the CAPM is beyond the scope of this discussion, the uncertainties surrounding it and other calculations are the focus of this paper and are discussed here.
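The discount-rate and terminal-value machinery discussed above can be sketched end to end. Every input below is a hypothetical illustration, not a figure from the Dell case: the CAPM supplies a cost of equity, the WACC blends it with an after-tax cost of debt, and the constant-growth formula gives the terminal value:

```python
# Hypothetical sketch: CAPM cost of equity -> WACC discount rate
# -> constant-growth terminal value, discounted back to the present.
# None of these inputs come from the Dell record.

def capm_cost_of_equity(r_f, beta, r_m):
    # cost of equity = risk-free rate + beta * market risk premium
    return r_f + beta * (r_m - r_f)

def wacc(equity, debt, cost_e, cost_d, tax_rate):
    # market-value weights; the debt cost is taken after tax
    total = equity + debt
    return (equity / total) * cost_e + (debt / total) * cost_d * (1 - tax_rate)

def terminal_value(cf_last, i, g):
    # TV at the last forecast period: next period's cash flow capitalized at (i - g)
    if i <= g:
        raise ValueError("discount rate must exceed the perpetual growth rate")
    return cf_last * (1 + g) / (i - g)

cost_e = capm_cost_of_equity(r_f=0.03, beta=1.2, r_m=0.08)                    # 0.09
i = wacc(equity=60.0, debt=40.0, cost_e=cost_e, cost_d=0.05, tax_rate=0.25)   # 0.069
tv3 = terminal_value(cf_last=121.0, i=i, g=0.02)   # value as of period 3
pv_tv = tv3 / (1 + i) ** 3                         # discounted to period 0
print(round(i, 3), round(tv3, 2), round(pv_tv, 2))
```

Note how sensitive the result is: raising the assumed perpetual growth rate g from 0.02 to 0.03 shrinks the denominator (i − g) from 0.049 to 0.039 and lifts the terminal value by roughly a quarter, which is exactly the imprecision the paper emphasizes.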
Process Capability Analysis for Left-Skewed Distributions with Negative Values
Dr. John E. Knight, Professor, The University of Tennessee at Martin, TN
Dr. Daniel L. Tracy, Professor, Beacom School of Business, The University of South Dakota, SD
Dr. Mandie R. Weinandt, Beacom School of Business, The University of South Dakota, SD
Process capability measures have gained widespread acceptance for statistical process control in both manufacturing and service organizations. The calculations apply extremely well for distributions that approach normality. However, traditional calculations can yield poor results when the data come from non-normal and/or highly skewed distributions. The primary factor that indicates poor results (when in fact that may not be the case) is the inflated estimate of the standard deviation that results from skewed data. Although the underlying distribution histogram may indicate few defectives, the process capability index will tend to indicate higher percentages of defectives than the actual data distribution. Thus, when the distribution is non-normal and/or skewed, accurately evaluating process capability requires larger samples, alternative calculations, or data transformations. Data transformations attempt to convert original data to approximately normal data. Typically, transformations are applied in a sequence of increasing ability to remove the skewness from the data. The progression of the square root transformation, the logarithmic transformation, and the inverse transformation is applied and usually generates a reasonably normal distribution of transformed data. These procedures have been extensively applied to a variety of distributions, with considerable emphasis on right-skewed distributions with strictly positive values. This paper addresses a methodology for finding appropriate capability indices when the data are left-skewed and contain some negative values from non-normal distributions. Process capability analysis represents the relative ability of process output to meet the specifications set forth by the customer. The most common capability index is referred to as Cpk. The Cpk has become an industry-wide standard for evaluating vendor quality and vendor selection. 
The higher the Cpk, the greater the probability that the distribution of process output measurements will fall within the specification limits. Many customers today expect acceptable quality levels associated with reasonably high Cpk values (greater than 1.5). More discriminating customers are only satisfied with high quality levels associated with even higher Cpk values. Very high Cpk values generally indicate that inspection of individual parts is unneeded and unwarranted, which is attractive from both customer satisfaction and operational cost perspectives. Additionally, the submission of a capability index from each potential supplier provides the customer with a quantitative measure of which supplier provides the highest quality, thus moving the concept of quality from a qualitative one, where all suppliers purport to have the highest quality, to a quantitative measure from each supplier that can be compared with the others. Additionally, the capability index from a potential supplier can be compared to the internal goal for quality that has been established by the buyer. A capability index of 1 provides the customer with a 99.73% chance that each production piece will be within specifications. A Cpk of 1 still leaves 27 parts in 10,000 that are potentially defective and mixed into the production output. Most companies today require a minimum Cpk of 1.5 and oftentimes 2. A Cpk of 1.5 indicates that only about 7 parts out of one million will be defective and mixed into the production output, while a Cpk of 2 indicates that only about 2 out of 1 billion will be mixed into the production output (virtually, theoretically non-existent). The goal is to decrease the units of defective output to zero with 100% confidence. Once that goal is achieved, the expensive process of 100% inspection to sort the good parts from the defective parts can be eliminated. Such quality is one of the basic premises of just-in-time production systems, since extra parts will not be needed in storage and in inventory. 
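The defect rates quoted above follow from the normality assumption. For a centered process, the expected two-sided fraction outside specification is 2(1 − Φ(3·Cpk)), which can be checked directly:

```python
# Fraction of output outside specification for a centered, normally
# distributed process at a given Cpk: 2 * (1 - Phi(3 * Cpk)).
from statistics import NormalDist

def defect_rate(cpk):
    return 2 * (1 - NormalDist().cdf(3 * cpk))

print(f"Cpk 1.0: {defect_rate(1.0) * 1e4:.0f} per 10,000")
print(f"Cpk 1.5: {defect_rate(1.5) * 1e6:.1f} per million")
print(f"Cpk 2.0: {defect_rate(2.0) * 1e9:.1f} per billion")
```

This reproduces the figures in the text: about 27 per 10,000 at Cpk = 1, about 6.8 per million at Cpk = 1.5 (rounded to 7 in the text), and about 2 per billion at Cpk = 2.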
A generally accepted axiom recognizes that capability indices have value only if the underlying process is in statistical control, producing predictable results. In addition, standard capability index calculations assume that the distribution of the individual measurements from the process is approximately normal. The normality assumption means that the standard deviation of the distribution of measurements is unaffected by extreme values. In the presence of extreme values in one tail of the distribution (skewed data), the variance is inflated by the small number of extreme values. Assuming approximate normality, the standard procedure for Cpk calculations utilizes the long-term process mean µ (estimated by the grand average x̿ from control charts) and the process standard deviation σ (estimated by the R̄/d₂ modification of average ranges from control charts). However, when initial production samples are developed for bidding submission on a new contract, a relatively large sample of the preliminary production is evaluated and the basic statistics are generated from that sample. Obviously, without a contract, the production run cannot be established for process control charts on a production lot basis. Regardless of the methodology for developing the sample mean and standard deviation, the theoretical distribution of individual process values lies between µ − 3σ and µ + 3σ, and the capability index is calculated as Cpk = min[(USL − µ)/3σ, (µ − LSL)/3σ]. The value of the Cpk will be the minimum of the two values generated, and that value reflects the specification limit that will be most difficult to meet and above or below which most of the defectives will be generated. This calculation works well for distributions that approach normality. However, when the distribution of individual measurements comes from a highly skewed process (or represents a significant departure from a normal distribution in some way), standard deviation estimates are inflated and lead to artificially low Cpk values. 
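For a preliminary bidding sample of the kind described above, the standard Cpk calculation can be sketched as follows (the measurements and specification limits are hypothetical):

```python
# Standard Cpk from a preliminary sample:
# Cpk = min((USL - mean) / 3s, (mean - LSL) / 3s).
from statistics import mean, stdev

def cpk(data, lsl, usl):
    m, s = mean(data), stdev(data)  # sample mean and sample standard deviation
    return min((usl - m) / (3 * s), (m - lsl) / (3 * s))

# hypothetical preliminary measurements, roughly normal around 10.0
sample = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.95, 10.05]
print(round(cpk(sample, lsl=9.4, usl=10.6), 2))
```

On skewed data, the long tail inflates s, so the same formula understates capability; that inflation is precisely the problem the paper addresses.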
Additionally, the indicated percentage of defectives in the production output may be overstated. Normality of the data distribution is a common assumption, but perfectly normally distributed data rarely exist in practice. To some extent, this departure from the assumption necessitates that capability indices be interpreted with some statistical judgment. In the past, various approaches to compensating for non-normal and/or skewed distributions have been employed: larger sample sizes, heuristic methods, and data transformations. The use of larger sample sizes is simple to accomplish from a computational perspective considering the ever-increasing speed and power of today’s computing resources. However, the issues related to the data collection itself can be prohibitive in terms of inspection costs and production/service delays. Larger sample sizes are the easy answer mathematically, but not necessarily the right answer in practice. Heuristic methods and data transformations that rely on computing power may be a better use of resources to get the desired quality management analysis for informed decision making. Heuristic methods have used approximations to estimate process variation. Chang and Bai developed weighted variance (WV) control charts (Chang, 1995) and weighted standard deviation (WSD) control charts (Chang, 2001). Chan and Cui (2003) developed the skewness correction control chart. While these methods respectively addressed the problems of non-normality and skewness, Tsai (2007) developed control charts and capability indices for the skew-normal (SN) distribution. This control chart approach works well in situations with low to moderately skewed data. Tsai (2007) also suggests that data from highly skewed distributions would need to undergo data transformation prior to using SN control charts. 
Once appropriate control charts are developed and process control is established, a Cpk measure can then be effectively employed. Data transformation has historically been used to convert highly skewed data to be representative of an approximately normal distribution. The reduction of non-normality through data transformation has been shown to result in better interpretation of the resulting statistical analysis. One example used the Box-Cox transformation to illustrate the potential of transformation on Type 1 error and the power of Hotelling’s T2 (Kirisci, 2005). Another application tested a variety of transformations to manipulate flow cytometry data (Finak, 2010). Data transformations examined included generalized hyperbolic arcsine, bi-exponential, linlog, and generalized Box-Cox, with the best results coming from bi-exponential and generalized Box-Cox. Although this is a very specific research context, it still demonstrates the advantages of using data transformation as a method of approximating normality. Many alternative capability indices have been developed over time for use in special circumstances. For a historical review of these efforts, please see Kotz (1998). The use of many of these nonparametric indices required large sample sizes and either over- or under-estimated true process capability depending on the nature of the underlying distribution of quality data, as shown by Wu (2001). Van den Heuvel and Ion (2003) examined the predictive capacity of capability indices to accurately predict non-conforming items under conditions of non-normality and skewness. Specifically, Van den Heuvel referenced the capability indices developed in Munechika (1986) and Bai (1997). The methods used by Munechika and Bai performed better than traditional control charts, but had difficulty accurately predicting proportions of non-conforming items, particularly for high levels of skewness: Munechika penalized the indices too much and Bai too little. 
Van den Heuvel’s adaptive capability indices provided upper and lower bounds for the proportion of nonconforming items for a variety of non-normal distributions, including: t, logistic, gamma, chi-square, exponential, log-normal, and Weibull. An additional non-normality challenge arises in quality control data when the data are naturally zero-bounded, particularly if a lower target is desirable. Inevitably in this case, as the relentless pursuit of continuous quality improvement pushes the target toward the zero bound, the data distribution becomes artificially skewed by a truncation at zero. Lovelace (2009) proposed a technique that results in accurate capability indices alongside appropriately and similarly constructed control charts. Lovelace utilized data from the Delta distribution, a variant of the log-normal.
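The transformation progression described earlier (square root, then logarithm, then inverse) assumes right-skewed, strictly positive data. For left-skewed data containing negative values, one common textbook device, sketched below with invented data and not necessarily the authors' exact procedure, is to reflect the sample into a positive, right-skewed one and then try the ladder, keeping the transform whose result is least skewed:

```python
# Reflect a left-skewed sample (possibly containing negatives) into a
# right-skewed positive one, then try sqrt, log, and inverse transforms,
# keeping the one with sample skewness closest to zero.
# Illustrative sketch only; the data are invented.
import math

def skewness(xs):
    """Population skewness: mean of cubed standardized deviations."""
    n = len(xs)
    m = sum(xs) / n
    s = math.sqrt(sum((x - m) ** 2 for x in xs) / n)
    return sum(((x - m) / s) ** 3 for x in xs) / n

def best_transform(data):
    # reflection makes every value positive and flips the skew direction
    reflected = [max(data) + 1 - x for x in data]
    candidates = {
        "sqrt": [math.sqrt(x) for x in reflected],
        "log": [math.log(x) for x in reflected],
        "inverse": [1.0 / x for x in reflected],
    }
    return min(candidates.items(), key=lambda kv: abs(skewness(kv[1])))

left_skewed = [-3.0, 1.0, 2.0, 2.5, 2.8, 3.0, 3.1, 3.2]  # hypothetical data
name, transformed = best_transform(left_skewed)
print(name)
```

A Cpk computed on the transformed data must, of course, use specification limits mapped through the same reflection and transform.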
Public-Private Partnership Advantages
Nathaniel Ford, Jacksonville University, FL
Dr. Julius Demps II, Jacksonville University, FL
Dr. Gordon Arbogast, Jacksonville University, FL
This paper analyzes infrastructure projects that were publicly funded versus those using a public-private partnership (P3) to determine which funding vehicle provides cost savings and/or quicker project delivery. The intention is to ascertain whether using a public-private partnership is a superior way for infrastructure projects to be funded, based on cost, time to delivery, or both. Data were gathered from Canadian projects spanning a timeframe from 2002 until 2013. The collected data came from multiple sources, including the transportation, health, utility, and corrections industries. Calculations were reviewed and compiled in a sample size in excess of 100. For projects that met the VfM criteria to enter into a P3 relationship, the results indicate a strong relationship between projects funded by a public-private partnership and cost savings as well as expedited time to delivery. As a result, the null hypothesis was rejected in favor of accepting the alternative hypothesis, i.e., there is a relationship between cost savings and projects funded with a public-private partnership. In the United States, there is one dominant model of execution for public infrastructure projects: fully funded and executed by the government (federal, state, and local). Projects such as hospitals, college dormitories, transportation, and electrical infrastructure are typically financed through the public sector. The U.S., once a global leader in infrastructure competitiveness, no longer ranks in the top 10, and there is a growing need for infrastructure expansion and repair. Given that we are in an era of limited federal, state, and local funding for infrastructure, an alternative model, known as a Public-Private Partnership, or P3, has gained popularity and is being used to bridge the state and local governments’ resource gaps. Our current fiscal constraints have many, on both sides of the political aisle, adopting legislation and programs that support P3s. 
PPP Canada, an independent element of the Canadian Government, defines the Public Private Partnership as a long-term, performance-based approach to procuring public infrastructure in which the private sector assumes a major share of the risks in terms of financing, construction, and ensuring effective performance. This mechanism is viewed as enhancing the financial attractiveness of projects and increasing the likelihood that public infrastructure projects can be executed in a faster time frame and at less cost to the public. Deye (2015) indicated that from 2005 to 2014, forty-eight infrastructure P3 transactions with an aggregate value of $61 billion reached the formal announcement phase, and of those, forty transactions, or over 83%, successfully closed. Deye (2015) also indicated that though many of these projects are in the northeast, the country is showing an enhanced adoption rate of the P3 model. The Canadian Council for Public-Private Partnerships (2011) demonstrates the effectiveness of the P3 delivery method. The intention of this research is to show how P3 delivery modifies aspects of a public infrastructure project compared to the traditional delivery method, known as the Public Sector Comparator, or PSC. Much has been written about the state of public infrastructure in the United States. President Donald Trump has signaled that infrastructure will be a top priority in his administration. His policy documents indicate a focus on pursuing public-private partnerships and other prudent funding opportunities to put “America’s Infrastructure First”. In the United States, P3 projects are evaluated and completed on an ad-hoc basis. There is no common set of requirements for P3 project execution and no central repository for data collection and performance monitoring. This makes evaluation of P3 vs. PSC projects in the United States problematic, since there is often no baseline comparison data available.
Foreign countries such as Korea, Australia, and Canada have central repositories and governmental agencies that set criteria using uniform comparison models for P3 projects. Evaluation using the VfM model efficiently selects projects for an execution strategy (P3 or PSC) by using many factors to evaluate the performance of the project over time. For the purposes of this paper, VfM is defined as the optimum combination of whole-of-life cost and quality of the good or service to meet the user’s requirement. The VfM assessment takes into consideration all stages of a project’s life cycle, including the feasibility study, project selection, and project evaluation. In general, there are six determinants of VfM: risk transfer, the long-term nature of contracts, competition, the use of output specifications, performance measurement and incentives, and private-party management skills. VfM is an industry-standard metric with no specific formula; rather, it takes into consideration factors specific to the particular project under consideration. Due to political, geographic, and engineering similarities, as well as the availability of data, the authors of this paper chose to model the analysis of P3 versus PSC using Canadian statistics and will reference evaluations using the VfM variable. VfM can be shown over time alongside delay costs. The combination of VfM and delay costs equals the total savings of a P3 versus PSC comparison: VfM + Delay Costs = P3 Advantage. Established in 1993, the Canadian Council for Public-Private Partnerships (CCPPP) is a federal program of the Canadian Government. Since its inception, the CCPPP has been collecting and verifying data on all P3 projects in cooperation with the provincial and municipal governments. Since 2002, the CCPPP has had full transparency into all financial metrics and has analyzed all P3 versus PSC projects.
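The savings relation stated above (VfM + Delay Costs = P3 Advantage) can be sketched in a few lines. The dollar figures below are invented purely to illustrate the arithmetic; they are not drawn from the CCPPP data:

```python
# Sketch of the paper's savings relation: P3 Advantage = VfM + Delay Costs.
# The dollar figures are hypothetical, chosen only to illustrate the arithmetic.

def p3_advantage(vfm_savings: float, delay_costs_avoided: float) -> float:
    """Total savings of P3 delivery over a PSC, per VfM + Delay Costs."""
    return vfm_savings + delay_costs_avoided

# A project realizing $12M in VfM savings and avoiding $3M in delay costs
# shows a $15M total P3 advantage.
print(p3_advantage(12_000_000, 3_000_000))  # 15000000
```

Delay costs enter the relation because the PSC baseline assumes a longer delivery schedule, so schedule savings under P3 execution are counted alongside the VfM estimate.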
Canadian data available from the CCPPP will be used to evaluate the advantages of the P3 and PSC delivery methods. The typical structure of a project is demonstrated in the following graphic. The CCPPP is known as the Special Purpose Vehicle and organizes the P3 or PSC projects. The research question is: will Public Private Partnership (P3) funded infrastructure projects provide cost savings, time-to-delivery savings, or both? A comparison was made of the project cost and time to delivery of P3-funded infrastructure projects versus public-sector-funded projects. Available data were reviewed and collected. Variables were selected from the data set, calculating the sum total of the critical values and the percent of private versus public contribution by sector. Additionally, calculations were made for return on investment using tax benefit and government investment costs. Next, calculations were made using VfM by project and project capital cost to determine an overall savings rate for P3 execution. This sequence of calculations provided evidence of the positive ROI for government dollars invested, measured in tax benefit, and provided the savings percentage from using P3 execution. The most important aspect of completed P3 projects is the project size in terms of total project dollars. To better understand the data, each project was grouped into bins showing the relative complexity and scale of the project. The overall size of the project affects many aspects, including duration, involved parties, and level of government funding, all contributing to the complexity of the project. Therefore, the bin sizes are not proportional in terms of dollar size, but represent levels of complexity.
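The grouping step described above can be sketched as follows. The bin edges and labels are assumptions made for illustration, since the paper does not list its exact thresholds here; only the non-proportional, complexity-oriented binning idea comes from the text:

```python
import bisect

# Non-proportional dollar bins serving as a proxy for project complexity.
# Edges (in $M of total capital cost) and labels are hypothetical.
BIN_EDGES = [50, 200, 500, 1_000]
BIN_LABELS = ["small", "medium", "large", "major", "mega"]

def complexity_bin(capital_cost_millions: float) -> str:
    """Assign a project to a complexity bin by total capital cost ($M)."""
    return BIN_LABELS[bisect.bisect_right(BIN_EDGES, capital_cost_millions)]

print(complexity_bin(35))     # small
print(complexity_bin(750))    # major
print(complexity_bin(1_500))  # mega
```

Using `bisect_right` keeps the bin lookup O(log n) in the number of edges and makes widening or narrowing a bin a one-line change.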
Evolution of the Cluster Concept and its Application to Tourism
Antonia Canto, University of the Azores, Portugal
Dr. Joao Couto, University of the Azores, Portugal
The cluster concept has evolved over time and can be applied to various industries, as it allows companies to reduce transaction costs, improve the quality of their products, and, thus, enhance the brand of the business group. Due to differences in economic and cultural progress, regional attractiveness, competitiveness, and the quality of life of the population, the tourism cluster differs from other clusters of companies and institutions. It is made up of tourist-attraction companies, infrastructure companies, and all government entities. The present article seeks to contribute methodologically to the existing literature, since it focuses its analysis on the evolution of the cluster concept and its applicability to tourism, and pays particular attention to this form of business organization in the case of Portugal. The concept of clusters can be defined in different ways; however, Porter (1998) identified it as a geographical concentration of companies or institutions, contiguous to each other, connected by similar and complementary factors. Clusters are essential to regional development because they manifest positive externalities; in particular, they contribute to productivity growth, business performance, innovation, and competitiveness (Kachniewska, 2014). All companies that constitute the cluster conduct their duties in the same area of activity, eventually representing it as a value-added production chain. This creates an environment of trust between companies, reducing transaction costs and increasing the competitive advantages of the group (Iordache et al., 2010). The organization of clusters can be applied to numerous branches of economic activity, particularly tourism, which distinguishes itself from the others in that the conditions considered are focused on economic and cultural progress, attractiveness, competitiveness, and the quality of life not only of tourists but of the inhabitants of a certain geographical region (Cunha and Cunha, 2005).
In this context, this article contributes theoretically with a methodological definition for analyzing a cluster, bridging the gap in the literature on the relationship between tourism and its organization into clusters. The article is divided as follows. Section 2 reviews the literature on the cluster concept, its various forms of action, and the respective advantages and disadvantages. Additionally, it clarifies the definition of a tourism cluster, as well as its structure and operation. Finally, it discusses the issue as it relates to the Portuguese reality. Section 3 summarizes the theoretical contributions and outlines directions for future research. The cluster concept has evolved over time. To apply this concept well, it is important to understand how it should be implemented. First, it is essential to define the policies and mechanisms associated with the basic principles of the cluster. Then, it is necessary to generate a set of indicators relating to the establishment of its operation. Next, it is essential to observe which design tools are used to develop it properly, in order to provide benefits to the economy. Finally, the relations among the partners must be strengthened in order to make the cluster more efficient (Iordache et al., 2010). Enright (1996) began by defining a cluster as an agglomeration of companies that were close to each other, while more recently Sölvell, Ketels, and Lindqvist (2008) specified it as a group of companies and institutions located in a particular region, related to each other by product or service. Porter (1998) points out that the theory of clusters acts as an intermediary between networks and competition. These clusters of companies develop in a geographic locale, where the physical contiguity between the institutions ensures confidence and increases interactions.
As figure 1 shows, the cluster concept has evolved over the years; several authors have defined it according to three dimensions, namely, location, synergies, and institutions. (Insert Figure 1 here) The presence of clusters provides mainly positive externalities in the productivity of companies; these emerge from the knowledge and work links existing between industries and technologies (Kachniewska, 2014). In addition to their geographical concentration, the companies and institutions have similar and complementary aspects, namely (Jankowska and Pietrzykowski, 2013): (1) collaboration and competition; (2) they may be centered in one or more geographical regions; (3) they are specialized in a particular industrial sector and are related by technology; (4) they are focused on science or traditional knowledge. Despite the diverse design of clusters, their success depends on existing interpersonal relationships and community involvement in their creation. These corporate concentrations take different forms, follow separate planning methods, and have their own problems (Mytelka and Farinelli, 2000). In general, six types of cluster were developed (figure 2), which allowed them to display the incorporated advantages, utilize possible political implications, and benefit from competition and cooperation using liaison links (Littvova, 2014). Likewise, it is possible to distinguish four types of cluster according to their level of maturity (Sohn et al., 2017). First, there is the informal cluster, composed of micro and small companies with minimal qualifications and little-developed technology. As these companies do not cooperate, economic performance is weak and unstable. Next, the intermediate cluster is made up of small and medium-sized enterprises. Although there is still no cooperation, the technology adopted is up to date.
Then comes the organized cluster, which welcomes small and medium-sized enterprises with modern management practices and advanced technology. The tendency toward cooperation among institutions is average, allowing the companies themselves to remain unaggregated. Finally, the innovative cluster assumes the opening of information channels, the fragmentation of the production process, and the development of synergies among stakeholders.
The Impact of the SEC’s Indecision Regarding IFRS Migration on the Readiness Efforts of U.S. Issuers and Accounting Faculty
Donald Buzinkai, Fairleigh Dickinson University, NJ
The final SEC IFRS work plan staff report issued in July 2012 was essentially silent regarding the path forward for the use of IFRS in the U.S. Utilizing new survey data, this study investigates what impact the SEC’s indecision has had on the readiness efforts of U.S. issuers and faculty. Our results highlight that, despite some initial progress and a continued belief that the SEC will eventually mandate IFRS, issuers are delaying their readiness efforts until the SEC is more definitive with an IFRS migration plan and time line. Conversely, our research highlights that despite an increase in U.S. faculty uncertainty regarding IFRS, faculty behavior surrounding IFRS has not changed significantly. We also find that issuers whose auditors are Big 4 firms are more prepared for IFRS than issuers whose auditors are non-Big 4 firms. While convergence efforts by the IASB and the FASB have resulted in modified standards on both sides that have reduced differences (Langmead & Soroosh, 2010), some convergence efforts have been discontinued or have resulted in different standards because the IASB and FASB could not agree (Pacter, 2013), and some have been delayed in their implementation. A November 2011 SEC staff paper confirmed general observations that despite convergence efforts, many differences between U.S. GAAP and IFRS remain (Poon, 2012). In July 2012 the SEC issued its final work plan staff report noting that the IASB has made substantial progress in improving the comprehensiveness of IFRS, but that gaps remain such as the development of industry-specific accounting standards and the global application and enforcement of IFRS, and that questions remain on how the U.S. regulatory environment will be impacted. The report stated that “it became apparent to the Staff that pursuing the designation of the standards of the IASB as authoritative was, among other things, not supported by the vast majority of participants in the U.S. capital markets (Kaya & Pillhofer, 2013, p. 
278).” Notably, the final report did not provide a recommendation for a U.S. adoption of IFRS (Kaya & Pillhofer, 2013) leaving an unclear view of a potential U.S. adoption. The objectives of our paper are to examine the impact of the SEC's indecision on U.S. issuer and faculty IFRS readiness efforts, and to assess whether audit firm size influences issuer readiness. The 2012 SEC report highlighted that issuers generally support the idea of a single set of high-quality global accounting standards but expressed concern about how much change the U.S. financial reporting system could absorb. A 2012 survey of U.S. accounting firms showed that the majority of survey participants support the movement towards IFRS and 48% recognize the benefit of IFRS in increasing global investors’ confidence (Jurkowski, Sen, & Starnawska, 2014). However, support for IFRS is outweighed by the concerns of a low rate of U.S. companies being ready for an IFRS migration (Jurkowski et al., 2014). The 2012 SEC report highlighted that of approximately 10,000 issuers, most have little knowledge of IFRS requirements and many would prefer a managed transition by which the FASB would incorporate IFRS into U.S. GAAP (Tysiac, 2012). Consistently, in response to a 2011 AICPA member survey asking whether their company was prepared to adopt or support IFRS adoption, 0% of members working for U.S. public companies responded “ready,” and only 9% responded “active;” the remaining majority were in “evaluating,” “adopting not begun,” “in preliminary discussion” or “N/A” categories. Additionally, a 2011 PwC study asked issuers how long they thought it would take their companies to transition to IFRS. Thirty-eight percent responded "three to five years" or "greater than five years" while 41% responded "one to two years" or "less than one year." The remaining 21% selected “Not sure or N/A.” The outlook for a potential U.S. IFRS adoption remains unclear. 
In August 2014, IASB Chairman Hoogervorst responded to an audience member that full convergence is no longer achievable, but later in the same month, IASB Vice Chairman Mackintosh called global accounting standards “desirable, achievable, and inevitable” (Amato, 2014). SEC Chairman White stated that determining whether to further incorporate IFRS into the U.S. financial reporting system continues to be a priority for her and that she hopes “to be able to say more in the relatively near future” (White, 2014). With respect to convergence efforts, in May 2014 the IASB and FASB released a converged revenue recognition standard. However, the two organizations are independently moving forward with different approaches to lease accounting and are experiencing difficulties in converging on accounting for financial instruments (Amato, 2014). The lack of a clear decision from the SEC regarding an IFRS migration appears to be increasingly causing companies to delay their IFRS preparations. A 2009 Deloitte study found that 45% of the companies surveyed delayed plans to perform an IFRS assessment due to the delay in the finalization of the SEC’s Roadmap (Deloitte, 2009). In the fall of 2009, when AICPA members were asked if their companies were delaying their IFRS preparations until the SEC announces a decision on IFRS, 61% of respondents from public companies and 29% from private companies answered “Yes.” As Table 1 illustrates, by the fall of 2011 these figures had grown to 80% and 48%, respectively (AICPA, 2011). The SEC’s 2012 report left issuers without a time line, and even though the SEC may ultimately require IFRS reporting in the U.S., the issue is binary for issuers: U.S. GAAP are the required standards until such time as IFRS are required or permitted for use. Hypothesis 1 – The SEC’s indecision has caused issuers to further delay their readiness efforts for their company’s migration to IFRS from U.S. GAAP.
The Big 4 firms have traditionally been involved in standard setting and, in the case of IFRS, have generally been considered the keepers of best practices. Big 4 firms have the capability to advise their clients on IFRS, as evidenced by the plethora of IFRS content on their websites. A 2011 survey showed that during the first years of IFRS application, interviewees considered it normal for auditors to work with their clients to undertake the mission correctly. Big 4 audit clients pre-tested IFRS before implementation, set an IFRS implementation calendar, and were supported by the Big 4 in their implementations, whereas companies with local auditors did not set an IFRS implementation calendar and did not conduct training and pre-testing (Albu, Albu, Fekete Pali-Pista, & Cuzdriorean Vladu, 2011). Hypothesis 2 – Issuers whose auditors are Big 4 firms are more prepared for a U.S. IFRS migration than issuers whose auditors are non-Big 4 firms. A transition to IFRS would represent a transformational change for the profession. In the 2009 KPMG/AAA survey, 50% of respondents reported that less than 25% of their faculty teaches, or is engaged in preparing to teach, IFRS. Thirty-four percent responded that 26 to 50% of their faculty teaches, or is engaged in preparing to teach, IFRS, and 16% answered that more than 50% of their school’s faculty teaches, or is engaged in preparing to teach, IFRS. Only 3% of respondents planned to hire IFRS-ready faculty, indicating that existing faculty will need retraining (Munter & Reckers, 2010). The majority of respondents expected faculty to attend training funded by their college or university, and surveyed department chairs expressed a greater expectation of college or university funding than did regular faculty (KPMG, 2009).
Perspectives seem to have changed: a 2012 study found that “most chairs and deans expect faculty to train themselves through CPE courses offered by the larger public accounting firms and reading the newer edition textbooks” (Bandyopadhyay & McGee, 2012, p. 86). Through interviews with accounting faculty, another study highlighted that the IFRS training faculty are receiving has not included pedagogical methods for teaching principles-based accounting standards (Santos & Quilliam, 2013). There is a high level of uncertainty among U.S. accounting faculty regarding a U.S. adoption of IFRS. According to the 2009 KPMG/AAA survey, 34% of respondents felt that a convergence of U.S. GAAP to IFRS with substantial equivalency would not be achieved until after 2015, and 9% felt that U.S. GAAP would be continued indefinitely (KPMG, 2009). “Some schools are taking a wait-and-see approach while others are informally integrating IFRS at the undergraduate level and still others are actively partnering with CPA firms in offering a separate IFRS course to graduate-level students” (Bandyopadhyay & McGee, 2012, p. 83). Yet despite the lack of clarity from the SEC, evidence suggests that U.S. faculty are committed to teaching IFRS (Jackling, 2013), because for colleges the issue of readiness for IFRS is less binary than it is for issuers, as the schools determine their own curricula. In 2009, 68% of surveyed accounting faculty were very confident or confident that the U.S. will adopt IFRS, while 32% were not confident or not at all confident (Munter & Reckers, 2010). A 2011 refresh of the survey highlighted waning faculty confidence that the U.S. will adopt IFRS at some point (KPMG, 2011). Given the 2012 SEC report, we anticipate that the level of uncertainty will have further increased.
However, because the issue is less binary for colleges and universities than it is for issuers, we do not expect the increased level of uncertainty to significantly impact the behaviors of these academic institutions. Hypothesis 3 – Increased U.S. accounting faculty uncertainty regarding a U.S. migration to IFRS stemming from the SEC’s indecision has not resulted in significant modifications in the approach to teaching IFRS in U.S. colleges. Since the 2012 SEC report, the periodic IFRS surveys completed by the AICPA and the large surveys conducted by PwC and Deloitte have not been refreshed. Therefore, to test our hypotheses we conducted two new surveys – one of issuers and one of accounting faculty. Using an online survey tool, 893 surveys were electronically mailed to finance and accounting leadership and staff of U.S. public and private companies and foreign public and private companies. This group of participants is consistent with the aforementioned PwC and Deloitte studies, as well as with the AICPA member surveys when the accounting-firm responses are excluded from the latter’s results. To facilitate comparisons with previous surveys, questions were posed nearly verbatim from the applicable previous study. Unique to our survey was the inclusion of the following question, directly addressing our research question: "A July 2012 SEC report failed to provide a definitive time line or migration path for U.S. companies to adopt IFRS. Has this report impacted your company’s plans to prepare for an IFRS implementation?" Of the 893 distributed surveys, 52 (6%) were initiated and 44 (5%) were completed. Of the 44 completed surveys, 43% were completed by respondents working in U.S. public companies, 32% in U.S. private companies, 18% in foreign public companies, 5% in foreign private companies, and 2% “other”. Seventy-five percent of respondents stated that they work for multinational companies and 80% stated that they were CPAs.
In addition, approximately two-thirds of the respondents stated their company was audited by a Big 4 firm.
The Relationship between Environmental Strategies and Environmental Performance: The Role of Green Intellectual Capital
Dr. Ming-Jian Shen, Takming University of Science and Technology, Taiwan (R.O.C.)
Public concern over environmental issues poses enormous challenges for firms, calling for more active involvement in environmental protection. Although the influence of environmental strategies on environmental performance has been documented, the mediating mechanisms are largely unexplored. This study proposes and examines an intervening model that explores whether proactive environmental strategies enhance environmental performance by accumulating green intellectual capital. This study collected data from top management team members in 123 firms from the Top 5000 enterprises in Taiwan. Results of structural equation modeling showed that proactive environmental strategies were positively related to green intellectual capital, which in turn led to better environmental performance. More importantly, the relationship between environmental strategies and environmental performance was partially mediated by green intellectual capital. In the past several decades, large-scale production and the overuse of natural resources have resulted in global ecological and environmental crises (Shrivastava, 1995). The deteriorating natural environment threatens human development and sustainability, increasing global awareness of environmental protection. Public concern over environmental issues poses a specific challenge for firms, calling for the active involvement of managers in adapting their strategies and management practices toward environmental protection (e.g., Buysse and Verbeke, 2003; Hart, 1995; Henriques and Sadorsky, 1999; Murillo-Luna et al., 2008; Porter and van der Linde, 1995; Sharma and Henriques, 2005). Firms can reduce their negative impacts on the natural environment and further gain competitive advantages by addressing environmental sustainability (Hart, 1995; Porter and van der Linde, 1995).
To meet demands from various stakeholder groups such as legislators, environmental organizations, customers, and communities (Gadenne, Kennedy and McKeiver, 2009; Murillo-Luna, Garces-Ayerbe and Rivera-Torres, 2008) and to pursue superior economic results (Judge and Douglas, 1998), firms may adopt proactive environmental strategies to promote good environmental performance; in many cases, however, enormous effort results in limited success. It is arguably appropriate to attribute the gap between environmental strategies and environmental performance to the firm’s inability to accumulate intellectual capital. To be responsive to the demands of environmental protection and to become environmentally sustainable, firms need intellectual capital on the path to greening (Chen, 2008). In the knowledge-based economy, firms should shift from the acquisition of physical resources to the accumulation of intangible resources for environmental sustainability. Proactive environmental management practices help firms institutionalize knowledge and codify experience to reduce their “footprint” on the natural environment (Berry and Rondinelli, 1998; Hart, 1995; Murillo-Luna et al., 2008). Therefore, intellectual capital may be vital for firms tackling the challenge of becoming environmentally sustainable. However, the mediating role of intellectual capital in the implementation of environmental strategies is unexplored in the literature. This study intends to fill this research gap by exploring the role of intellectual capital, namely green intellectual capital (GIC), in the relationship between environmental strategies and environmental performance. Based on the natural-resource-based view (Hart, 1995; Nehrt, 1998) and the literature on intellectual capital, this study examines the relationship between environmental strategies and environmental performance and, more importantly, investigates the mediating role of GIC in the environmental strategy-performance link.
In the following sections, we first review the relevant literature and establish our research hypotheses. Then, the methodology of this study is introduced and the empirical results are presented. In the last section, we discuss the research and managerial implications of this study and highlight its limitations and future research directions. Previous studies found that promoting and cultivating environmental protection awareness or environmental protection information-search capabilities contributes positively to the intellectual capital of enterprises (Chen, 2008). With the implementation of green management, enterprises accumulate intangible assets relating to environmental protection and green innovation, consistent with the natural-resource-based view (Hart, 1995; Nehrt, 1998). Green intellectual capital (GIC) is an intangible asset and is an important variable in this study. The concept was first proposed by Chen (2008) and was applied in research into the electronics and information industries in Taiwan. The research findings suggested that the GIC accumulation process can allow enterprises to comply with international environmental protection conventions and consumer demands for environmental protection, thereby better establishing competitiveness. Moreover, that study integrated green innovation and environmental management with intellectual capital to develop the new concept of GIC, and categorized GIC into green human capital (GHC), green structural capital (GSC), and green relational capital (GRC) (Bontis, 1999; Chen, 2008; Johnson, 1999). Green human capital helps enterprise employees accumulate knowledge, technology, capabilities, experience, attitude, wisdom, creativity, and commitment relating to environmental protection or green innovation in order to enhance competitiveness (Chen, 2008).
Green structural capital helps enterprises accumulate environmental protection or green innovation-related organizational capabilities, organizational commitments, knowledge management systems, salary systems, information technology systems, databases, management and organizational operating procedures, corporate image, trademarks, and copyrights to enhance competitiveness (Chen, 2008). Green relational capital helps enterprises and their customers, suppliers, and partners promote interactions relating to environmental protection or green innovation to enhance competitiveness (Chen, 2008). This study suggests that environmental management should operate at a level similar to that of environmental strategies in practice (Murillo-Luna et al., 2008). This study examines the relationship between environmental strategies and GIC from the perspectives of stakeholder theory and the natural-resource-based theory. For example, improvements in business-community relations and corporate image start from the viewpoints of stakeholders and must be in line with their demands. The viewpoints and practices of improving energy-usage efficiency and reducing cost and waste discharge, by contrast, are aligned with the natural-resource-based view of Hart (1995). New ways of reducing pollution include hardware (e.g., equipment, instruments, and manufacturing process technology) and changes in operating methods (e.g., raw material recycling, product design) to create market demand and lower costs while balancing pollution prevention. The usual procedure is to replace end-of-pipe treatments with pollution-reduction technology and to cultivate employees with new ideas about environmental protection in areas such as cost reduction, sales improvement, and further pollution reduction in products and the manufacturing process.
In the implementation of environmental strategies, employees with environmental protection knowledge and expertise are cultivated to construct environmental management technologies and new processes to develop and design green products. Meanwhile, improvements to the corporate image and positive interactions in external relations are also conducive to GIC accumulation. As mentioned, environmental strategies will increase the production and accumulation of green relational capital. Based on the foregoing discussion, this study argues that the implementation of environmental strategies by enterprises is positively related to GIC from a green perspective. Hence, we propose the following hypothesis: H1: Environmental strategies are positively related to GIC accumulation. Human capital theorists (e.g., Becker, 1964) have suggested that the enhancement of staff skills, knowledge, and capabilities can usually be translated into improvements in organizational performance. When staff have a high level of knowledge and skills, they can create new innovations and technologies embodied in the manufacturing equipment and process, as well as improve customer relations. The literature on organizational learning further points out that because organizational learning expands the organizational knowledge base and scope of potential activities (Daft and Weick, 1984), a learning organization (one with relatively high human, customer-relational, and structural capital) will respond and adjust better to changes in the external environment, and thus be capable of supporting organizational performance. Similarly, information-processing theorists (Galbraith, 1973) have argued that intellectual capital can enhance organizational performance, as it can improve organizational information-processing capabilities through the creation of customer relations (customer relational capital) and information-system investment (structural capital).
Bontis (1996) indicated that intellectual capital refers to the effective use of knowledge and information. Although it is difficult to recognize and fully exploit, once developed, transferred, and used, it can provide enterprises with a resource basis for competition, thereby improving business performance. Empirical studies have shown that intellectual capital creates value through internal organizational development and accumulates within the enterprise structure, maximizing the leverage of wisdom and knowledge between employees and customers. Intellectual capital has developed into an important tool with economic value for the future growth of enterprises (Kuo and Wu, 2008). According to the knowledge-based view (Grant, 1996), environmental strategies, green-innovation-related GIC, and enterprise environmental performance should be positively related. Some researchers have classified green innovation into green product innovation and green process innovation and studied their relation to competitiveness (Chen, Lai, and Wen, 2006). The results suggested that both types of innovation are positively related to enterprise competitiveness. Empirical studies have shown that, through correct assessment and by following environmental trends, GIC accumulation and improvement can bring competitiveness to enterprises and can produce significant improvements in the relevant environmental performance of organizations (Chen, 2008).
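The mediated relationship examined in this study (environmental strategies → GIC → environmental performance) can be illustrated with a crude single-predictor sketch. The firm-level scores below are invented, and the paper itself estimated these paths with structural equation modeling, not simple regression; the sketch shows only the sign pattern consistent with partial mediation:

```python
# Toy illustration of the mediation logic: X (environmental strategies),
# M (green intellectual capital), Y (environmental performance).
# All data are fabricated for the sketch.

def slope(x, y):
    """Closed-form simple-regression slope: cov(x, y) / var(x)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

X = [1, 2, 3, 4, 5, 6]              # strategy proactiveness scores
M = [1.2, 2.1, 2.9, 4.2, 4.8, 6.1]  # GIC scores, tracking X
Y = [1.0, 2.3, 2.8, 4.1, 5.2, 5.9]  # performance scores

a = slope(X, M)  # strategies -> GIC (the H1 path)
b = slope(M, Y)  # GIC -> performance
c = slope(X, Y)  # total strategies -> performance effect

# All three paths positive matches the pattern reported for partial
# mediation; a full test would compare c with the direct effect of X
# on Y after controlling for M.
print(round(a, 2), round(b, 2), round(c, 2))  # 0.97 1.01 0.99
```

In the study's SEM setting, the same logic is expressed as path coefficients estimated simultaneously, with partial mediation indicated when the direct strategy-performance path shrinks but stays significant once GIC is included.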
Copyright 2000-2018. All Rights Reserved