The Business Review, Cambridge

Vol. 24 * Number 2 * December 2016

The Library of Congress, Washington, DC   *   ISSN 1553 - 5827

Online Computer Library Center   *   OCLC: 920449522

National Library of Australia * NLA: 55269788

Peer Reviewed Scholarly Journal

Most Trusted.  Most Cited.  Most Read.

All submissions are subject to a double blind review process



The primary goal of the journal is to provide opportunities for business-related academicians and professionals from various business fields worldwide to publish their work in one source. The Business Review, Cambridge brings together academicians and professionals from all business-related fields to interact with members inside and outside their own disciplines. The journal provides opportunities for researchers to publish their papers as well as to view the work of others. All submissions are subject to a double-blind peer review process. The Business Review, Cambridge is a refereed academic journal which publishes scientific research findings in its field under ISSN 1553-5827, issued by the Library of Congress, Washington, DC. No manuscript will be accepted without the required format, and all manuscripts should be professionally proofread before submission. The journal will meet the quality and integrity requirements of applicable accreditation agencies (AACSB, regional) and journal evaluation organizations to ensure our publications provide our authors publication venues that are recognized by their institutions for academic advancement and academically qualified status.

The Business Review, Cambridge is published twice a year, in December and in Summer. E-mail: ; Website: BRC. Requests for subscriptions, back issues, and changes of address, as well as advertising, can be made via our e-mail address. Manuscripts and other materials of an editorial nature should be directed to the Journal's e-mail address above. Address advertising inquiries to the Advertising Manager.

Copyright 2000-2018. All Rights Reserved

Independent Accountant Opportunity for Wealth Management Reporting on Crowdfunding Engagements

Dr. Michael Ulinski, Pace University, Pleasantville, NY

Dr. Roy J. Girasa, Pace University, Pleasantville, NY



The researchers examined the statutory provisions of crowdfunding as a source of liquidity for business startups. An opportunity for local and regional CPA firms was noted, as larger CPA firms may not be as agile as smaller firms in handling the reviews needed in crowdfunding engagements. Both clients receiving funding from this new source of capital and intermediaries charged with researching the viability of projects could use specialty firms able to complete due-diligence review requirements in a timely manner.  Conclusions were drawn and recommendations made for firms interested in a fast-growing field of wealth management. The financial crisis of 2007-2009, which occurred in the United States and for a longer period globally and led to the highest level of unemployment since the Great Depression of the 1930s, caused a major rethinking in Congress concerning how to deal with the crisis. Legislatively, the Dodd-Frank Act(1) sought to curb the abuses within the financial system with its 1,000-plus pages and multiple titled divisions that addressed perceived abuses and causes of the crisis, particularly by banking institutions. As a result, the statute brought about a major overhaul of substantive financial sectors that included the Volcker Rule,(2) which essentially prohibited banks from engaging in risk-oriented investment activities such as hedge funds; the (unsuccessful) prohibition of “too-big-to-fail” banks; reform of credit rating agencies; the creation of the Financial Stability Oversight Council (FSOC)(3) to regulate financial sectors of the economy that may pose a danger to overall U.S. financial stability; protection of consumers; and other provisions. On the opposite side of the ledger, Congress in part addressed the need to foster greater employment opportunities, and did so by lessening regulatory restrictions on new start-up companies so that numerous investors could add substantial liquidity in relatively small sums for their promotion. 
In this paper we discuss crowdfunding and, in connection therewith, the role of dark pools and venture capital. We are particularly concerned with the perceived abuses and the regulatory environment that seeks to lessen the fraud and other abuses that inevitably accompany diverse financial strategies. Crowdfunding refers to investments, other than by more traditional means of raising capital, by a substantial number of persons in particular, mostly new, projects. In past years such funding most often came from venture capitalists who assumed substantial risks in the hope of attaining more substantial financial rewards from innovative ideas that appeared to have financial merit. Although venture capital funding continues to be an important source of capital for newly arising business ventures, crowdfunding has now overtaken venture capital as a major source of financing. Statistically, crowdfunding rose from $6.1 billion in 2013 to $16.2 billion in 2014, with a projected $34.3 billion in 2015. Venture capital investments constituted approximately $30 billion in the comparable time frame.(4) Crowdfunding, albeit a newly legally recognized method of raising capital for entrepreneurs, actually has roots several centuries past, but a notable early use occurred in 1997 when a British rock band, desiring to mount a reunion tour, requested and received funds online from fans. It led to the formation of ArtistShare which, according to its website, is a platform connecting creative artists to fans so that fans can play a role in the creative process and fund creative artistic activities.(5) It was the first fan-funding platform.(6)  Crowdfunding constitutes an investment of capital made in order to seek a profit through the efforts of other persons and thus comes within the parameters of the SEC v. W.J. Howey Co. 
test, subjecting it, unless exempted, to registration requirements with the Securities and Exchange Commission (SEC).(7) The exemption of crowdfunding from such registration requirements was created by the enactment in 2012 of the Jumpstart Our Business Startups Act.(8) The Act is composed of seven titles, with “Crowdfunding” constituting Title III thereof.(9) In essence, the Act permits an exemption from the substantial filing requirements with the SEC mandated under Section 4 of the Securities Act of 1933.(10)  Securities Exemption Provisions. The Securities Act of 1933 was enacted as a result of Congressional investigations that uncovered significant fraud during the halcyon days of the “roaring 20s,” when securities were sold to unsuspecting investors, often based on nothing but the “blue sky” one viewed rather than on potential or actual assets and profits. Thus, with certain major exceptions, the statute requires that investors receive financial information from the issuer in order to better evaluate whether or not to provide capital in the hope of receiving financial profit through the efforts of other persons. Issuers, generally through their underwriters, brokers, and dealers, provide the relevant information by means of a registration process wherein pertinent information on the proposed security is filed with the Securities and Exchange Commission (SEC), created under the Securities Exchange Act of 1934, and is available to the public on the Commission's EDGAR website. Initial purchasers of the offered security are given a detailed prospectus containing the said pertinent information.  
There are a number of exemptions and exempt securities under the registration process, including the following.  Exempt securities include: short-term commercial paper of not more than 9 months; not-for-profit organizations; insurance products and policies (regulated by state insurance departments); common carriers; banks and savings and loan associations; government securities; and securities issued by a receiver or trustee in bankruptcy, with prior court approval, with respect to corporate reorganization.  Exempt transactions include: intrastate offerings (issuer incorporated in the state of issuance; 80% of proceeds, assets, and business, and all purchasers, within the state; no resale for 9 months); Regulation A, which exempts issuers offering up to $5 million of securities in any 12-month period, permitting a much less detailed and time-consuming offering statement and circular; Rule 504, which permits nonpublic (closely held) companies to offer securities up to $1 million in a 12-month period to accredited investors without registration and disclosure requirements, but with restrictions on resales; Rule 505, which permits public and nonpublic companies to offer up to $5 million of securities in a 12-month period to unlimited accredited and up to 35 non-accredited investors, with prohibitions on advertising and solicitation and no resales before 6 months; and Rule 506, which permits public and nonpublic companies to raise an unlimited amount of money from an unlimited number of accredited investors and up to 35 non-accredited investors, who must receive information including audited financial statements.
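As a rough illustration only, the offering caps just listed can be collected into a small lookup table. The amounts are those stated above (the rules as they stood when this article was written); the table structure and function name are ours, not part of the statute or the paper:

```python
# Illustrative summary of the exemption caps described in the text.
# cap_usd is the 12-month offering limit; None means no dollar cap.
EXEMPTIONS = {
    "Regulation A": {"cap_usd": 5_000_000, "non_accredited": 0},
    "Rule 504":     {"cap_usd": 1_000_000, "non_accredited": 0},
    "Rule 505":     {"cap_usd": 5_000_000, "non_accredited": 35},
    "Rule 506":     {"cap_usd": None,      "non_accredited": 35},
}

def rules_allowing(amount_usd: float) -> list:
    """Exemptions whose 12-month offering cap covers the amount sought."""
    return [name for name, rule in EXEMPTIONS.items()
            if rule["cap_usd"] is None or amount_usd <= rule["cap_usd"]]
```

For a hypothetical $50 million raise, only Rule 506 (no dollar cap) would remain available under this sketch, which is why it dominates private placements of any size.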


Improving Quality Using Plackett-Burman Screening Designs

Dr. John E. Knight, University of Tennessee at Martin, Martin, Tennessee



The improvement of product quality can be achieved more effectively using a sequential methodology that includes experimental design, as suggested by the six-sigma philosophy (Pande, Neuman, and Cavanagh, 2000; Breyfogle, 2003; Chowdhury, 2001; Lucas, 2002).  Other well-known systematic improvement methodologies, developed by Qual Pro Consulting of Knoxville and by Joseph Juran (Goetsch, 2014), have different numbers of steps but are equally effective.  The final goal of these methodologies builds toward finding breakthrough improvements using designed experiments that identify and optimize statistically significant factors influencing product quality, in light of the many potential ideas available to investigate.  Plackett-Burman designs (Tyssedal, 2008) are multivariate fractional factorial arrays that strive to identify statistically significant main effects while hinting at possible interactions.  The designs also offer the advantage of great reductions in the sample sizes needed to identify significant factors.  These multifactor designs of experiments provide far greater analytical ability than traditional one-factor-at-a-time testing.   This paper will demonstrate the usefulness of multifactor design principles as compared to one-factor-at-a-time testing.  The approach will be illustrated by the successful application of the principles in a case example from the carbon electrode manufacturing environment.  The introduction of systematic quality improvement methodologies such as six-sigma greatly enhanced the logic and organization of statistical improvements in quality.  Although many of the individual steps for improvement were previously known, a logical sequence of steps that maximized the probability of improvement added new analytical potential.  
The sequential steps focus on defining key variables with operational definitions, using repeatability and reproducibility techniques to develop measurement accuracy and precision, achieving statistical control, determining process capability, and testing for significant improvement effects using multivariate statistical experiments.  The incorporation of multivariate statistical testing greatly added to problem-solving ability.  Historically, many industrial experiments were simple one-factor-at-a-time tests (called OFAT testing) that relied on the principle of simple cause and effect.  The concept was to stabilize the process (get the process into statistical control) and then vary a single experimental factor.  The effect of that factor would then be judged by viewing the control chart and calculating the numerical effect of the changed factor (on either the mean or the standard deviation). Although this methodology is simple to understand and calculate, it is far less effective than testing multiple factors simultaneously. Many deficiencies exist in one-factor-at-a-time (OFAT) testing.  First, a major assumption is that all of the many other factors are “constant,” as suggested by the control chart.  Seldom does this condition actually exist in complex processes.  Although the control chart may in fact be in statistical control, the inherent variation in the process as calculated by the control chart is the compilation of the variation in all of the factors.  Therefore, the inherent standard deviation of the process being calculated is large, given the myriad factors potentially varying at any one time.   Further, the one factor being tested is not robustly subjected to the other forces at play in the system and thus is not tested in the context of noise from other variable factors. Another major assumption is that the optimal answer lies along the line of the factor being varied and tested for significance.  
In essence, OFAT testing evaluates line values and not surface contours.  Further, since OFAT testing is basically testing differences in statistical means (the control chart mean versus the experimental mean), the sample size needed to detect reasonably significant differences would be in excess of 30.  If seven different factors were to be tested independently using OFAT, then over 210 samples would be needed, and there would still be no testing of interaction effects even though many samples had been taken.  Finally, OFAT limits the probability of finding a significant effect in a limited testing period, since each test evaluates only a single factor at a time.  If 10 factors were to be tested, each would need to be tested sequentially rather than simultaneously, thus increasing the total testing time and total sample size needed.  This added testing time introduces the possibility of unanticipated system changes over the testing cycle.  Testing of multiple factors at one time (also called Design of Experiments, or DOE) is more difficult to conceptualize, but the methodology is more successful at finding significant causal factors in a shorter period of time.  Additionally, the results are more robust while making more efficient use of sample sizes.  As a residual benefit, DOEs are also able to suggest potential statistical interactions between factors (something that single-factor testing cannot achieve).  Since many factors are being tested at one time, many consider that too many factors are changing simultaneously to ever sort out the separate effects.  However, special experimental designs called fractional factorial designs have been developed that allow for “screening” of likely statistically significant main effects from multiple factors with a very limited number of recipe tests.   The Plackett-Burman models are fractional factorial designs of a special class of two-level designs.  
Screening designs are extremely useful in suggesting which few of the many tested factors will significantly affect the response variable.  The designs also suggest which factors may be experiencing interaction as they affect the response variable.  These fractional factorial designs overcome many of the deficiencies of OFAT testing.   In these multivariate tests (called MVT tests), each factor is varied within a given experimental recipe. Thus, each factor is tested under the “not all things constant” assumption.  Additionally, the series of recipes actually replicates the testing of different specific points on the entire contour surface of possible answers.  The experimental results represent a topology of the surface contour and indicate the potential direction of optimal values.  Additionally, the MVT tests mentioned utilize the total sample size efficiently, as each factor is tested at each of its experimental extreme levels in exactly half of the recipes and thus in half of the total experiment.  Thus, each sample from each recipe is used efficiently to determine the effect of each factor.  For example, to test for the statistical significance of 7 different factors, eight recipes with 4 replications would require a sample size of 32 units.  OFAT testing would have required about 7 experiments with 30 units each, or 210 samples.  Finally, since multiple factors are being tested simultaneously, the probability of finding at least one significant effect in the experiment is extremely high even if the probability of any one single factor being significant is low.  For example, if the probability of a single factor being significant is 25%, the probability that at least one statistically significant factor will occur in the experiment is 1 − (0.75)^8 ≈ .90, or 90%!
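The design and arithmetic just described can be sketched concretely. The following Python sketch (the construction is the standard published Plackett-Burman generator; variable names and the checks are ours, not from the paper) builds the 8-run design for up to 7 two-level factors and reproduces the sample-size and probability figures quoted above:

```python
from itertools import combinations

# Standard Plackett-Burman generating row for N = 8 runs: cyclic shifts
# give the first 7 runs, and a final all-minus run completes the design.
GEN = [+1, +1, +1, -1, +1, -1, -1]

def plackett_burman_8():
    rows = [GEN[-i:] + GEN[:-i] for i in range(7)]  # cyclic shifts of GEN
    rows.append([-1] * 7)                           # closing all-low recipe
    return rows

design = plackett_burman_8()

# Each of the 7 factor columns is balanced (4 high, 4 low runs) and every
# pair of columns is orthogonal, so main effects are estimated independently.
cols = list(zip(*design))
assert all(sum(c) == 0 for c in cols)
assert all(sum(a * b for a, b in zip(cols[i], cols[j])) == 0
           for i, j in combinations(range(7), 2))

# Sample-size comparison from the text: 8 recipes x 4 replicates vs OFAT.
mvt_samples = 8 * 4    # 32 units for all 7 factors at once
ofat_samples = 7 * 30  # roughly 210 units, one factor at a time

# Probability of at least one significant factor when each has p = 0.25.
p_at_least_one = 1 - 0.75 ** 8   # approximately 0.90
```

Note how every run varies all seven factors simultaneously, which is exactly the "not all things constant" property the abstract contrasts with OFAT testing.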


Building Trust and Agreement in Negotiations

Dr. David A Robinson, RMIT Asia Graduate School, Vietnam

Dr. Kleanthes Yannakou, RMIT Graduate School of Business and Law, Australia



This article expands the theme of ‘meta-modelling’ to embrace an aspect of negotiation theory that never seems to date. To skilfully craft solutions that not only give negotiating parties a short-term ‘win’ but also build a foundation for long-term mutual benefit must surely be the quintessential prize sought by organizations and governments engaged in diplomatic relations and negotiations. But why is it so seldom achievable in one-on-one negotiations between individuals or small groups, whether in business, community, or personal relationships?  This question has been pondered by many and remains one of the most important aspects of leadership and management. This paper seeks to answer it by integrating negotiation styles theory and traditional wisdom about how to negotiate with allies and adversaries within the values journey meta-model. It examines how ultimate collaborative (win-win) solutions can be brought to fruition when trust and agreement are forged in equal measure within a context of high-level shared values represented by the third paradigm of the values journey model. Negotiation was addressed as one of the themes in the meta-model series (Robinson, Morgan and Nguyen, 2016), and negotiation styles have previously been re-positioned within a values framework (Robinson and Nguyen, 2016), thereby providing a framework by which to predict an individual’s negotiating position in an effort to pre-empt their propensity and ability to seek collaborative outcomes. It was concluded that individuals living higher-level values will be best placed to win in any negotiation. That being the case, it presents the axiom that when both parties enter into negotiation from a high-values base there is a high propensity for both parties to win. A propensity for collaboration within a stable long-term business relationship has been termed alliance capability (Anand and Khanna, 2000) and has been associated with strategic competitive advantage (Ireland, Hitt and Vaidayanth, 2002).  
This paper further integrates traditional leadership and management wisdom surrounding negotiation strategies with particular regard to allies, adversaries, bedfellows, fence-sitters and opponents (Block, 1987). The primary aim is to conceptualise how Block’s stakeholder categories and corresponding negotiation strategies relate to the negotiation styles proposed by Robinson and Nguyen (2016). A secondary aim is to expand the scope of the values journey meta-model by illustrating how Block’s model is aligned with it. Previous work by Robinson and Nguyen (2016) combined negotiation and personal values, indicating how each of five negotiating styles has congruence with particular steps in the values journey. Two main implications emanated from their work in this field: Firstly, if the value station can be discerned, the negotiation style and preferred outcome can be pre-empted. Secondly (and conversely), when a person’s negotiation style is known, their values can also be discerned.  Based on the illustrative congruence between values and negotiating styles depicted in Figure 1, it follows that collaborative-style negotiations correspond to integrative-synergistic values, known as high-level values. The desirability of negotiating collaboratively contains an underlying assumption that win-win outcomes are attainable if both parties have the will to pursue them. Whilst Robinson and Nguyen (2016) proposed the use of a Synergy Star model to facilitate the attainment of win-win, it was also argued that the very act of engaging in consensus building can lead to an increase in understanding of one’s own and others’ needs and goals, thereby creating trust, even if agreement is ultimately not attained.  Is it reasonable to expect a negotiating process with allies to be the same as with adversaries? Common sense tells us that it is not. 
Furthermore, there is evidence to support the view that a firm’s ongoing development of alliance management capability affects attitudes and behaviours among its employees in favourable ways (Ring and Van de Ven, 1994; Doz, 1988, 1996; Arino and De la Torre, 1998; Dyer and Singh, 1998).  So what are the differences? Block (1987) developed a model depicting how trust and agreement combine to describe the essential differences between allies and adversaries. By placing trust and agreement on the X and Y axes respectively, he was able to define four quadrants and clearly differentiate negotiating partners. As illustrated in Figure 2, adversaries fall into the quadrant defined by low trust and low agreement, while allies are positioned in the quadrant defined by high trust and high agreement. It can be noted that an ‘adversary’ is similar to an ‘opponent’ in terms of low agreement but differs by virtue of having lower trust. ‘Allies’ are similar to what he termed ‘bedfellows’ in terms of agreement, but differ in terms of trust. A fifth set of negotiators, sitting somewhere between the ‘adversaries’ and the ‘bedfellows’, is known as ‘fence-sitters’. This group is characterised by low trust but could be swayed to either agree or disagree.  It is widely believed that effective negotiators aim for a win-win outcome where all parties achieve their goal within their value system or, as a worst-case outcome, at least a win without alienating anyone. That aim is perfectly aligned when the other party is already an ally. To render it possible in other cases, though, it requires the ability to build and obtain support from coalitions and alliances consisting of a range of interdependent stakeholders who may hold a variety of views ranging from support to opposition.  Block indicates that agreement or conflict can occur over the vision, or project purpose, goals and requirements. 
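Block's two-axis model is, in effect, a small lookup table. As a purely illustrative sketch (the function name and the use of booleans for "high" levels are our choices, not Block's), the quadrant assignments described above can be written as:

```python
# Illustrative mapping of Block's (1987) trust/agreement quadrants.
# Booleans stand for "high" levels; fence-sitters are the low-trust,
# undecided middle, represented here by agreement_high=None.
def stakeholder(trust_high: bool, agreement_high):
    if agreement_high is None:
        return "fence-sitter"   # low trust, could sway either way
    if trust_high and agreement_high:
        return "ally"
    if trust_high and not agreement_high:
        return "opponent"       # high trust, but low agreement
    if not trust_high and agreement_high:
        return "bedfellow"      # agrees, but trust is low
    return "adversary"          # low trust, low agreement
```

The sketch makes the text's two contrasts explicit: an opponent differs from an adversary only in trust, and a bedfellow differs from an ally only in trust.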
In essence, though, the skilled negotiator needs to master the process of converting ‘opponents’, ‘bedfellows’ and ‘adversaries’ into ‘allies’. In the case of ‘opponents’, trust is already present, so agreement will be based purely on mutually beneficial outcomes. It could therefore be stated that the key to achieving ‘agreement’ is the establishment and improvement of ‘trust’ among the ‘bedfellows’ and ‘adversaries’. But negotiating agreement cannot be seen as an exercise in consequentialism.  Block’s warning that ‘trust can either be built or destroyed on issues of fairness, justice and integrity’ (1987) should be taken seriously, given the important role of trust in the negotiation process.


The Impact on Firm Value of LIFO Adoptions Revisited

Dr. John R. Wingender, Jr., Creighton University, Omaha, Nebraska

Dr. Thomas A. Shimerda, Creighton University, Omaha, Nebraska

Dr. Thomas J. Purcell, Creighton University, Omaha, Nebraska



In this paper we examine the impact of the corporate decision to switch GAAP inventory valuation to the LIFO (Last In, First Out) method.  Research from 30 to 40 years ago finds significant positive abnormal returns from the adoption of LIFO.  However, economic conditions then, with the high inflation rates of the 1970s, were very different from those of the 21st century.  We replicate these studies with data starting in 2000.  In our sample we find a significant positive impact on firm value from LIFO adoptions, which is surprising given the low-inflation environment of this sample.  Traditional work on the impact on firm value of managerial decisions to change GAAP postulates that accounting changes do not change firms’ cash flow and thus should have no impact on firm value.  As the Literature Review section recounts, almost all tests of accounting changes using event methodology indicate no statistically significant change in firm value as measured by the average abnormal return on the event date of the change in accounting method.  The exception to the rule has been switches from the FIFO (First In, First Out) method to the LIFO (Last In, First Out) method.  There are several reasons for this finding.  The main one is that switching to LIFO in high-inflation times causes recognized costs to increase immediately, with no change in actual cash outflow or in the cash value of inventory.  An increase in accounting expenses leads to lower earnings before taxes.  This leads to lower taxes, which is a lower cash outflow.  The result is higher after-tax cash flow today.  Thus there is a direct impact on cash flow, without any change in overall risk, which should lead to increased firm value today.  Although the accounting changes wash out over time, the impact on the time value of money from getting cash sooner rather than later is significantly positive.  
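The cash-flow chain just described (higher expensed cost, lower taxable income, lower tax paid now, higher after-tax cash today) can be illustrated with a minimal one-period sketch; all figures below (unit costs, revenue, tax rate) are invented for illustration and are not from the paper:

```python
# Hypothetical single-period example: a firm bought one unit at 100,
# a second at 130 during inflation, and sells one unit for 200.
old_cost, new_cost = 100.0, 130.0
revenue = 200.0
tax_rate = 0.35

def after_tax_cash(cogs):
    taxable = revenue - cogs
    tax = taxable * tax_rate
    return revenue - tax        # cash received minus tax paid this period

fifo_cash = after_tax_cash(old_cost)  # FIFO expenses the older, cheaper unit
lifo_cash = after_tax_cash(new_cost)  # LIFO expenses the newer, dearer unit

# LIFO defers tax: the physical flow is identical, but 0.35 * (130 - 100)
# less tax is due today, hence higher current after-tax cash flow.
deferral = lifo_cash - fifo_cash      # about 10.50
```

The 10.50 difference is precisely the deferred tax that accumulates in the LIFO reserve discussed below; it reverses only if the reserve is later liquidated, which is why the time value of money makes the adoption value-positive.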
A conceptual case can be made for the use of LIFO in some inventory settings, such as when the nature of the inventory assets acquired, stored and used results in a physical flow best characterized by the last items in being the first transferred to customers.  For example, businesses that deal in unrefined ore would more than likely add new purchases to the top of the pile and also take from the top for use in their operations.  LIFO primarily has been adopted in the United States not on conceptual grounds but for its tax deferral advantages.  Since 1939 the Internal Revenue Code has allowed taxpayers to use the LIFO method to calculate taxable income, with the requirement that the adopting taxpayer implement LIFO in reports to shareholders and other users (the so-called LIFO conformity rule).  Taxpayers may adopt LIFO without requesting advance permission from the IRS, but once adopted, advance permission is required to discontinue LIFO.  LIFO matches current costs of inventory against revenues generated from sales of that inventory.  As a result, balance sheet inventory amounts generally are lower, especially during periods of rising prices for replacement goods.  If business price cycles are fluctuating, LIFO will tend to smooth the impacts and decrease the likelihood that unrealized holding gains and losses in beginning-of-year inventory items will be recognized.   An unavoidable consequence of adopting LIFO for tax advantages is that reported income will generally be lower than if FIFO had been used.  International Financial Reporting Standards (IFRS) do not allow the use of LIFO.  LIFO provides benefits during periods of rising prices.  Price levels of inventory components in the U.S. economy have not risen significantly in recent periods, suggesting that LIFO adoptions should be waning.  However, as the data below indicate, taxpayers are still adopting LIFO.  This study does not attempt to identify the motivations for LIFO adoptions.  
Over time, taxpayers that implement LIFO will report lower taxable income than if FIFO had been used, thus building a balance of deferred taxes (commonly called the LIFO reserve).  Discontinuing LIFO (or liquidating the LIFO reserve through contraction of operations) will result in additional tax due as the reserve is decreased.  Reduction in the reserve through normal operations is included in income currently, but for taxpayers who receive IRS permission to discontinue LIFO, the tax due on the accumulated reserve is spread over four taxable years.  There is current evidence that taxpayers are discontinuing LIFO for a variety of reasons.  This study does not address LIFO discontinuations.  Annual deferred federal income tax revenue from the use of LIFO is approximately $5 billion.  Should LIFO be repealed, as was proposed in the President’s budget submission for FY 2017 (Department of the Treasury, 2016), estimates are that federal income tax revenues would increase by more than $81 billion over the ten-year budget projection period (Joint Committee on Taxation, 2015).  Politically there is an incentive for government repeal of LIFO, as an alternative to taxpayer-initiated changes to conform with IFRS (if and when convergence occurs), because repeal can accelerate the timing of cash flows from tax payments, while convergence would result in the 4-year spread in most instances.  In the first comprehensive study of accounting changes, Ball (1972) examined stock price reactions to over 20 types of accounting changes.  Included were LIFO adoptions. Ball concluded from an examination of cumulative excess monthly returns that "changes in accounting techniques do not appear to be associated with market adjustments in a consistent direction for the average firm" (1972, p. 23).  However, the results indicate that firms adopting LIFO exhibited cumulative excess returns of +7.0 percent over the 12 months preceding the change.  
Ball concluded from his results that investors can anticipate most accounting changes.  Sunder's (1973) study was the first to focus exclusively on stock price reactions to LIFO changes.  His primary samples were composed of 119 firms which adopted LIFO. Sunder viewed these results as consistent with the hypothesis that on average the stock prices of firms that adopt LIFO increase because of the accompanying tax advantages. 


Evaluation of Questionnaire for Transfer Pricing Issue of SMEs in Europe

Dr. Veronika Solilova, Mendel University, Brno, Czech Republic

Dr. Danuse Nerudova, Assoc. Prof., Mendel University, Brno, Czech Republic



Although SMEs represent more than 99% of enterprises active in the non-financial business sector in the EU and contribute significantly to national and global economic growth, they face many obstacles resulting in higher compliance costs of taxation and lower participation in international markets. Our research focused on the transfer pricing of SMEs and its compliance costs, which represent one of the obstacles SMEs face. The current approach to transfer pricing for SMEs and its related costs were evaluated based on the results of a questionnaire administered in Europe. Based on the results we can conclude that SMEs would appreciate the introduction of specific measures for transfer pricing which would decrease their increased compliance costs of transfer pricing. Their costs for managing general transfer pricing requirements were estimated at up to EUR 2,000 per year, and in the case of documentation at up to EUR 6,000 per year.  The European Commission (2003) defines small and medium-sized enterprises (hereinafter SMEs), according to the number of employees, turnover, or balance sheet total, as enterprises which employ fewer than 250 employees and have an annual turnover of less than EUR 50 million and/or a balance sheet total of less than EUR 43 million. The European Commission (2015) states that SMEs represented 99.9% (i.e., 22.3 million) of all enterprises active in the non-financial business sector in 2014.  Although SMEs contribute significantly to national and global economic growth (i.e., 28% of GDP in the EU28), they face many obstacles such as an increased level of regulation, reduced availability of skilled staff, 27 different tax and accounting systems, and others. 
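For a quick illustration, the European Commission (2003) definition just quoted reduces to a simple compound test. The sketch below follows the thresholds exactly as stated in the text; the function name and argument names are our own:

```python
# SME test per the EC (2003) definition as stated above:
# fewer than 250 employees AND (annual turnover below EUR 50 million
# and/or balance sheet total below EUR 43 million).
# Monetary arguments are in millions of euros.
def is_sme(employees: int, turnover_m_eur: float, balance_m_eur: float) -> bool:
    return employees < 250 and (turnover_m_eur < 50 or balance_m_eur < 43)
```

So a firm with 200 employees, EUR 30 million turnover and a EUR 60 million balance sheet still qualifies (the turnover criterion suffices), while one with 300 employees never does, regardless of its financials.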
Even though many tax and administrative requirements may appear relatively “neutral” for businesses of all sizes, they impose higher fixed costs associated with tax and compliance regimes, as SMEs, unlike large enterprises, do not possess enough human and financial capital to cope with these issues. As regards the international aspects of SMEs, the European Commission (2010) states that only 5% of SMEs have subsidiaries abroad, in contrast to the 44% of SMEs which perform international activities within the EU, such as exporting, importing, investing abroad, cooperating internationally, or having international subcontractor relationships.  This situation reflects the fact that international activities and having a subsidiary abroad are associated with international taxation issues, transfer pricing, problems with cross-border loss compensation, and higher financial costs and business risks. Therefore, many governments introduce measures, mainly in the tax area, such as tax preferences, special provisions, specific tax rules and simplification measures for SMEs, to reduce these negative impacts.  The aim of the paper is to evaluate the current approach to the transfer pricing issues of SMEs and their compliance costs based on the results of a questionnaire conducted among European enterprises. The paper presents the results of research in the project GA CR No. 15-24867S „Small and medium size enterprises in global competition: Development of specific transfer pricing methodology reflecting their specificities“.  Generally, international transfer pricing is subject to strict tax regulations, which entails high compliance costs of taxation. 
In the EU, transfer pricing compliance means adherence to the arm's length principle in line with Art. 9 of the OECD Model Tax Convention and with the OECD Transfer Pricing Guidelines for Multinational Enterprises and Tax Administrations (hereinafter TP Guidelines), which provide guidance on the application of the arm's length principle to pricing for tax purposes and to cross-border transactions between associated enterprises. However, as Solilova and Nerudova (2016) mention, the TP Guidelines make no direct distinction between types or sizes of MNEs, i.e. all enterprises, regardless of their size, are subject to the same principles and recommendations. Moreover, the costs associated with transfer pricing matters can be disproportionately large for SMEs in comparison with LEs, for both the taxpayer and the tax administration. In this respect, Solilová and Steindl (2013) add that the international agreements on the avoidance of double taxation handle the arm's length principle (Art. 9) and its related provisions (Art. 25) differently across countries, resulting in higher transfer pricing compliance costs. Therefore, Solilova and Nerudova (2016) and Silberztein (2013) highlight that a “one-size-fits-all” approach is not feasible for SMEs dealing with transfer pricing issues, and they recommend the introduction of simplified measures.  As regards tax compliance costs, Chittenden et al. (2000) state that tax compliance costs are regressive with respect to the size of the enterprise; in particular, these costs can be a hundred times higher for SMEs than for large enterprises. Further, Cordova-Novion and De Young (2001) add that those costs are increasing over time. Evans (2003) states that the highest compliance costs arise for personal income taxes, corporate income taxes and VAT; they reach between 2% and 10% of the revenue yield from those taxes, up to 2.5% of GDP, and are usually a multiple of administrative costs. 
Moreover, Cressy (2000) and Nerudová et al. (2009) emphasize that those costs tend to grow for businesses active across borders. In addition, Sandford et al. (1995) state that those costs can reduce international competitiveness due to their prohibitive effect.   Whereas current research focuses on the compliance costs of taxation, there is no study or research focusing on estimating the compliance costs of transfer pricing issues for SMEs. Our research therefore goes beyond the current literature.  To research the approach to the transfer pricing issues of SMEs and the related compliance costs, a questionnaire was prepared. The questionnaire contains altogether 33 questions focusing on the company's identification, such as its size, location and the nature of its operations; on transfer pricing measures in force; on tax compliance; on the compliance costs of transfer pricing and the time needed for transfer pricing requirements; and on the tools for decreasing the compliance costs of transfer pricing issues (for details see Table 1 below).  


Teaching Economics, In-class versus Online Effectiveness

Dr. Doina Vlad, Seton Hill University, Greensburg, PA



This research paper looks into the advantages and disadvantages of switching from traditional in-class teaching of economics to online teaching. The research data come from student evaluations and surveys. Some advantages of the online class delivery format noted by students are: time saved by not having to travel to and from school, especially during the wintertime and for night classes; the advantage of having recordings available, so they can listen to them as many times as needed until they feel confident in mastering the material; the enjoyment of learning more about technology and new software, which are transferable skills for the modern workplace; and increased student self-confidence and ability to work independently in an online environment.  For future research I want to include student assessment measures and compare the learning achieved in regular face-to-face classes to the results achieved by students in the online courses.  Let’s take a walk on one of the big university campuses and look around; what we’ll probably see are buildings, parks, a Student Center, a sports arena, and many buildings and places meant to make students feel comfortable and "live the true life of a student." That happened to me as well while in graduate school. I remember that one of my "take a break from studying" routines on a cold day was to "get lost" in the Student Center lounge, many times with a cup of coffee in front of a TV, watching something that wasn't really interesting but relaxed me; or, on a sunny day, walking around the lake and sitting on the benches looking at the water, which relaxed me as well. Fast-forward 15 years: how do students relax and interact, and what do they expect from the "college experience" today?  
Firstly, cell-phone and computer-based technology provides choices for them: the daily time spent on Facebook, Twitter, and many other virtual activities results in less time spent on real, physical interaction among students. Secondly, the economic environment is tougher: with college costs increasing every year, many students cannot afford to be full-time students only; they have to work and be full-time students at the same time. And they have to do it in the same 24 hours a day that we had when we could be students only. In this type of environment, it is no wonder that all the expensive buildings and facilities universities spent so much money on are not used the way they were intended. So, what is the future of higher education? No one really knows, although we can speculate. Part of the speculation is the feeling that everything in the higher education environment moves faster now, due mostly to newer technological changes. When you open up the world and allow information to flow freely or at a very low cost, the question of the value of traditional education arises naturally. Add to that the pressure of the high costs associated with earning a degree. At this point, you have to consider the ideas floating around on how to change existing models to make learning and earning a degree more convenient and more affordable. What is most impressive, however, is the pace of change: from “Massive Open Online Courses” (MOOCs) to competency-based education, blended courses, and flipped classrooms. All of these choices "test the waters” for a new model of academic teaching, driven by the feeling that higher education is in dire need of change.  The demographic decline is an approaching reality that has been affecting student population sizes during the last few years, and that decline will continue for many more years. In this environment, universities have to fight really hard for student enrollments. 
Some of them, especially the small schools, have to become very creative to be able to keep their doors open.  There is a growing body of literature on online education learning outcomes and student learning satisfaction.  Wiechowski and Washburn (2014) examined course satisfaction scores and student learning outcomes for more than 3,000 course evaluations from 171 courses during the 2010 and 2011 academic years. The results reveal, surprisingly, that blended and online courses had a stronger correlation with high course satisfaction than regular in-class courses did. Moreover, the study shows students achieved the same learning outcomes regardless of the teaching delivery approach.  Some universities try to differentiate themselves among peer institutions by using a common template in designing their course offerings. Onodipe, Ayadi, and Marquez (2016) analyzed the efficiency of delivering the Principles of Economics course using a standardized course design template implemented to meet Quality Matters (QM) standards. Other studies look into the fast-changing environment in academia as a chance to grow and thrive in the profession. Navarro (2015) researched new, creative ways in which economics faculty can provide value-added results and differentiate their product as teachers in a world where online education is advancing rapidly. He looked into the determinants of pedagogically sound course design and into more complex forms of faculty-to-student and student-to-student interaction, peer assessments, and virtual office-hours solutions. I believe that universities are still experimenting with different solutions demanded by the new generation of student learners, who were born and socialized in an era of expanding virtual technologies. For example, the flipped classroom could provide the best of both worlds: the face-to-face classroom experience and an active learning approach based on videos and recordings by the instructors. 
This is the research topic Caviglia-Harris (2016) explored in her paper, in which she analyzed the effectiveness of the flipped classroom approach to instruction delivery in undergraduate economics courses. She mostly used existing Khan Academy videos; her results showed a 4 percent to 14 percent improvement, measured by the scores students obtained on their final exam for the course. There is an extreme and controversial view noted in some articles and books stating that higher education as we know it now will disappear in the future and be replaced by free online learning. Kevin Carey (2015) claims in his book “The End of College” that “The University of Everywhere” (p. 14) will be the future and that the resources needed to attend college will be free and plentiful.   The remainder of this paper will look into the challenges and rewards of switching from traditional in-class teaching to online course offerings in the economics and finance areas of the Business Program. I teach mostly economics: microeconomics, macroeconomics, and economics at the undergraduate level and in the MBA Program at SHU. I have eleven years of teaching experience in the discipline. Until recently, all of my courses were traditional in-class teaching. Lately, I have begun teaching some of the same courses online.


Monitoring and Accelerating Structural Change via Exports: A Capability Based Approach for Turkey

Dr. Hayrettin Kaplan, Marmara University, Istanbul



Development is the shifting of resources from low-productivity activities to high-productivity ones, so development should be understood as a dynamic, endless process. The process should be responsive to the development of the capabilities that a country has. In this regard, we try to determine the activities that a developing country should focus on when its already developed capabilities are taken into account. We monitor the development of export performance and the structural change Turkey experienced between 1995 and 2013. We evaluated the existing industry structure and determined the potential sectors that are more productive and compatible with Turkey's capability stock. These sectors are proposed as potential accelerators of the ongoing structural change.   Development is a process of structural change towards sectors with higher productivity. Since sectors differ in their productive capacity and demand elasticity, moving towards more efficient sectors increases overall productivity in the economy (Prebisch, 1950; Kuznets, 1966; Paus, 2012). During the process of structural change, developing countries first tend to shift resources from agriculture to industry, in the sense of Lewis (1954), by importing foreign technology and capital to increase productivity. As the country develops, increasing productivity via imported capital and technology tends to reach its limits, in conjunction with the diminishing supply of inactive labor in the agriculture sector (Eichengreen et al., 2011). But since development in the sense of structural change towards more productive sectors is an endless process, countries should focus on and shift resources towards more productive sectors within industry (Hausmann, Hwang and Rodrik, 2005; McMillan and Rodrik, 2011; Rodrik, 2011).  
This raises two issues: (i) which sectors would increase the country's productivity most, and (ii) does the country have enough capabilities to produce in those sectors efficiently? In other words, to continue the structural transformation process, a country should shift its resources towards more productive sectors that it can produce in efficiently. While the first issue concerns the relative position of sectors in terms of productivity, the second concerns the country's capability for efficient production. These two issues are discussed for Turkey via the Product Space literature, in the context of capability development.  Hausmann and Rodrik (2003) point out that although detecting the sectors with the potential to gain comparative advantage in a country is a difficult process, the state can make a better assessment than firms. Lin (2010, 2013) emphasizes the detection of sectors as a responsibility of the state and suggests a selection method (Lin and Treichel, 2011). After the selection of sectors, the state should implement sector-specific policies, because the required structural transformation cannot be achieved via Washington Consensus policies (McMillan and Rodrik, 2011; Lin, 2013). As Gomory and Baumol (2000: 5) point out, there is no single economic path that yields the best outcome for the country. The economic outcome differs according to the existing capabilities and the choices of the country's economic administration. In other words, the transformation of the country's output composition differs according to the choice made among the sectors that have the potential for efficient production.  The literature on industrialization policy deals with the question of how much to deviate from the current comparative advantage. The debate between Justin Lin and Ha-Joon Chang sheds light on the different views about industrial policy implementation in developing countries. 
The two distinct views share common ground on South Korea's achievements built on sector-specific industrialization policies. Lin and Chang (2009: 496) emphasize that South Korea's movement “along the ‘ladder’ of international division of labour has often been carried out in small, if rapid, steps”. Thus, to “take small and rapid steps”, a country should decide how much to deviate from its current Revealed Comparative Advantage (RCA).  The capability-based approach focuses more on the learning process and policy coordination issues and prioritizes the qualitative side, and in that respect differs from growth discussions, in which the quantitative side is mostly considered (Ju, 2009: 26; Paus, 2012: 116). The capabilities approach indicates that growth can only be sustained if the required capabilities are developed (Paus, 2014: 24). Hausmann and Hidalgo (2010) define capabilities as the non-tradable productive inputs which are combined to make a product; this means capabilities cannot be imported (unlike technology or capital in the early phases of development) but have to be developed within the country (Hidalgo and Hausmann, 2009).  Hidalgo and Hausmann (2009) use a Lego metaphor to clarify the capabilities approach. Each Lego piece represents a capability, and as the number of pieces increases, the probability of producing different products increases. Some pieces are rare and unique, and if those capabilities are developed, more unique products can be produced efficiently. The authors show that the capabilities of a country, which can be tracked by its ex post revealed comparative advantages in the sense of Balassa (1965), affect the sectors that will be produced with revealed comparative advantage (RCA) in the future. 
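The Balassa (1965) index referred to above compares a sector's share in a country's exports with that sector's share in world exports; a value of at least one is read as revealed comparative advantage. A minimal sketch of the computation (NumPy-based; the function and variable names are ours):

```python
import numpy as np

def balassa_rca(exports: np.ndarray) -> np.ndarray:
    """exports: countries x sectors matrix of export values.
    RCA[c, i] = (share of sector i in country c's exports)
              / (share of sector i in world exports).
    RCA >= 1 indicates revealed comparative advantage."""
    country_share = exports / exports.sum(axis=1, keepdims=True)
    world_share = exports.sum(axis=0) / exports.sum()
    return country_share / world_share
```

Applied to the UNCTAD export matrix (127 countries, 255 SITC Rev. 3 sectors), this yields the RCA pattern from which the capability indicators are derived.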
If a country gains a comparative advantage in a sector, new firms enter the sector and create intra-industry externalities; the capabilities gained in this sector also affect other sectors and create inter-industry externalities, thereby increasing the economy's overall productivity (Hausmann and Klinger, 2006). Thus, avoiding the middle-income trap necessitates a transformation of the structure of the economy towards higher-productivity sectors. Felipe et al. (2014) calculated an index of opportunities “to rank countries on the basis of accumulated capabilities”. This index of opportunities signals a country's potential to achieve structural transformation. China, India, Poland, Thailand and Mexico are ranked as the top five out of 96 countries, and Turkey is listed 15th in the index of opportunities ranking. In this paper, sectors' contributions to structural transformation are assumed to be heterogeneous, which necessitates a sectoral prioritization approach. In other words, since Felipe et al. (2014) reveal that Turkey's index of opportunities signals a potential for structural transformation, this paper investigates which sectors Turkey has to focus on in order to realize its opportunities. The indicators used in this paper are similar to those in Felipe et al. (2014), which were developed in various papers by Hausmann, Klinger, Rodrik and Hidalgo. One of the main contributions of the paper is using these indicators for selecting or prioritizing sectors according to various qualitative aspects for a country.  Within this framework, the paper seeks to identify the products which would accelerate the transformation of the Turkish economy, and follows these steps: (i) monitoring the structural change and introducing the country's existing industry structure; (ii) estimating the already developed capabilities from the existing industry structure; (iii) determining potential new sectors compatible with the already developed capabilities.  
In the first part of the paper, Turkey's structural change is monitored using the technological classification of exports and the productivity, diversification and ubiquity of the export basket. In the second part, the already developed capabilities derived from export performance by sector are presented via the Density measure of sectors, which is calculated from the proximity matrix. The Density measure of the sectors is used to determine which sectors have the potential to be produced efficiently, given the country's current capabilities. The proximity matrix also allows us to calculate the Path of the sectors, which indicates the scope for the sectors' further diversification. In the third part of the paper, sectors are selected according to benchmark specifications determined for the indicators. Twenty sectors out of 255 are selected and suggested for evaluation as priority sectors to accelerate the structural change. The main features of the selected sectors are presented and compared to the overall export basket in the last part.  The aim of this paper is limited to monitoring the structural change that Turkey experienced from 1995 to 2013 and detecting the sectors that will contribute most to the acceleration of structural change, given the already developed capabilities. The discussion of which policies should be implemented for a sector-specific industrialization policy is outside the framework of the paper. The optimum policy choice, whether orthodox or heterodox, should be discussed on the grounds of the specific characteristics and needs of the selected sectors.  In the first part of the paper, the structural change that Turkey experienced from 1995 to 2013 is examined using the diversification, productivity and ubiquity of the export basket. 
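The Density measure mentioned above can be sketched as follows. In the standard Product Space formulation (Hausmann and Klinger, 2006), the density of sector i for country c is the proximity-weighted share of i's neighboring sectors in which c already has revealed comparative advantage; the variable names below are illustrative, and the proximity matrix is taken as given:

```python
import numpy as np

def density(proximity: np.ndarray, rca_flag: np.ndarray) -> np.ndarray:
    """proximity: sectors x sectors matrix phi[i, j] of pairwise proximities;
    rca_flag: countries x sectors 0/1 matrix (1 where RCA >= 1).
    Returns a countries x sectors matrix:
        density[c, i] = sum_j phi[i, j] * rca_flag[c, j] / sum_j phi[i, j]
    A high density means sector i is 'close' to the country's current capabilities."""
    return (rca_flag @ proximity.T) / proximity.sum(axis=1)
```

Sectors with high density but no current RCA are exactly the candidates the selection step screens for: efficiently producible given the existing capability stock.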
To evaluate the transformation, sectors are classified into five categories according to technological content, following Hufbauer and Chilas (1974), Yılmaz (2003) and Özçelik and Erlat (2013): Resource Intensive Goods (RIG), Labor Intensive Goods (LIG), Capital Intensive Goods (CIG), Easy to Imitate Research Intensive Goods (EIRIG) and Difficult to Imitate Research Intensive Goods (DIRIG). Export data are obtained from UNCTAD for 1995-2013, covering 127 countries and 255 sectors classified under SITC Rev. 3. GDP per capita (PPP, constant 2011 $) is taken from the World Bank database.


An Iterated Variable Neighborhood Search Algorithm for a Single-Machine Scheduling Problem with Periodic Maintenance and Sequence-Dependent Setup Times

Dr. Chun-Lung Chen, Takming University of Science and Technology, Taiwan (R.O.C.)



We consider a scheduling problem on a single machine with periodic maintenance and sequence-dependent setup times.  The objective is to minimize the total weighted tardiness.  The problem considered in the paper is NP-hard in the strong sense; finding the optimal solution requires much computation time, so heuristics are an acceptable practice for finding good solutions.  In this paper, an iterated variable neighborhood search algorithm is proposed to solve the problem.  To evaluate the performance of the proposed algorithm, several algorithms are examined on a set of 320 instances.  The results show that the proposed algorithm performs effectively.  In this research, an iterated variable neighborhood search algorithm is proposed to solve the problem of single-machine scheduling with periodic maintenance and sequence-dependent setup times. The objective is to minimize the total weighted tardiness. For convenience, we refer to the proposed algorithm as IVNS.  The single-machine scheduling problem does not necessarily involve a single machine; problems in a complicated machine environment, such as a single bottleneck (Gagne et al., 2002; Liao & Juan, 2007) or other complex scheduling problems, can also be reduced to single-machine scheduling; for instance, a group of machines may be treated as a single machine (Al-Turki et al., 2001; Ying et al., 2009). To simplify scheduling problems, researchers in the past assumed that all machines were available at all times, but this is not the case in real situations.  Unavailability is due to causes that halt the machine; for example, routine maintenance or repair limits the availability of the machine. 
In addition, companies nowadays emphasize problem prevention and maintenance, so machines are usually scheduled for periodic maintenance to ensure that they do not fail and cause a greater loss of production capacity. It is therefore necessary to consider machine availability in scheduling problems.  Some studies already incorporate machine availability into scheduling problems.  For example, Jabbarizadeh, Zandieh, and Talebi (2009) included machine availability in flexible flow line scheduling problems and proposed dispatching rules, the Johnson rule, a genetic algorithm and simulated annealing to minimize the makespan. Pacheco, Ángel-Bello, and Álvarez (2013) proposed a multi-start tabu search algorithm to solve the same problem.  In this research, an iterated variable neighborhood search (IVNS) algorithm is proposed to solve the considered problem with the aim of minimizing the total weighted tardiness.  The proposed IVNS algorithm can be regarded as a variant of VNS and can be classified as a neighborhood-based local search algorithm.  Mladenović and Hansen (1997) first developed the VNS heuristic; VNS is a relatively recent neighborhood-based local search method that searches the solution space using a set of predefined neighborhood structures and escapes from local optima by systematically changing the neighborhood structures.  In recent years, several production scheduling problems have also been solved efficiently with VNS approaches.  VNS algorithms, or variants of VNS, for single-machine scheduling include the following: Gupta and Smith (2006) use a VNS algorithm for single-machine total tardiness scheduling with sequence-dependent setups; Lin and Ying (2008) propose a hybrid Tabu-VNS metaheuristic approach for single-machine tardiness problems with sequence-dependent setup times.  
Kirlik and Oguz (2012) also consider the same single-machine scheduling problem and present a VNS to solve it.  Liao and Cheng (2007) propose a VNS for minimizing single-machine weighted earliness and tardiness with a common due date.  Tseng et al. (2009) employ a VNS for large instances of the single-machine total tardiness problem with controllable processing times.   The problems considered in this study are NP-hard, and developing heuristics to solve them is an acceptable practice. Since VNS is a trajectory-based heuristic, the proposed IVNS needs to start from a given solution.  The proposed algorithm first develops a series of neighborhood structures based on random moves of a subsequence, and applies a greedy local search algorithm to explore the solution space of the proposed neighborhood structures.  A shaking operator is also constructed, which is used to perturb the incumbent solution and attempt to escape the local optima reached by the search.  Randomly generated problem instances are used to assess the performance of the proposed algorithms.  The rest of this paper is organized as follows.  Section 2 describes the problem considered in the study, Section 3 describes the proposed approach, Section 4 presents the computational results, and Section 5 concludes the paper.  There is a time interval of length T between the completion times of two consecutive maintenance activities. In general, T is not enough to process all the jobs, so more than one maintenance activity must be scheduled. The maintenance activity is treated as a job with index 0; it consumes a fixed amount of time p0. Every time a maintenance activity is completed, there is a setup time S0j of the machine for each job j. We make the following assumptions: all jobs are available for processing at time zero, machine breakdowns do not occur, the machine can process only one job at a time, and jobs cannot be preempted.  
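Under the assumptions above, the objective value of a given job sequence can be computed in a single forward pass that inserts a maintenance activity whenever the next job, including its setup, would overrun the current interval of length T. The sketch below makes one modeling assumption not fixed in the text, namely that maintenance starts immediately when the next job does not fit; all names are illustrative:

```python
def total_weighted_tardiness(seq, p, d, w, setup, s0, T, p0):
    """Evaluate a job sequence on a single machine with periodic maintenance
    and sequence-dependent setup times.

    seq     : job indices in processing order
    p, d, w : processing times, due dates, tardiness weights
    setup   : setup[i][j] = setup time when job j follows job i
    s0      : s0[j] = setup time when job j follows a maintenance activity
    T       : length of the interval between completions of two consecutive
              maintenance activities
    p0      : duration of a maintenance activity
    Assumption (illustrative): maintenance runs as soon as the next job,
    with its setup, would not fit in the remainder of the current interval.
    """
    t = 0.0       # current completion time
    used = 0.0    # time consumed in the current interval of length T
    prev = None   # previous job (None right after a maintenance activity)
    twt = 0.0
    for j in seq:
        need = (s0[j] if prev is None else setup[prev][j]) + p[j]
        if used + need > T:          # job would overrun: run maintenance now
            t += p0
            used = 0.0
            need = s0[j] + p[j]      # setup restarts from the maintenance state
        t += need
        used += need
        twt += w[j] * max(0.0, t - d[j])
        prev = j
    return twt
```

This evaluation is the inner routine any neighborhood-based search for the problem would call on each candidate sequence.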
The objective of the algorithm is to find a schedule that minimizes the total weighted tardiness.  An IVNS algorithm is developed to solve the candidate problems.  The workflow of the proposed IVNS algorithm is shown in Figure 1.  The figure shows that a heuristic rule is applied to generate a feasible initial solution, and VNS is applied to improve the initial solution obtained.  To enhance the effectiveness and efficiency of the VNS heuristic, several new neighborhood structures are explored.  Finally, a mechanism is used to develop a shaking operator, used to perturb an incumbent solution within a neighborhood structure k, or to attempt to escape from a local optimum when all neighborhood structures have been explored. 
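As a rough illustration of this workflow (initial solution, descent through a list of neighborhood structures, shaking on stagnation), a generic basic-VNS loop can be sketched as follows. This is a textbook skeleton, not the paper's exact IVNS; the neighborhood, shaking operator and all names are illustrative:

```python
import random

def best_swap(seq, cost):
    """Greedy local search step: best neighbor under all pairwise swaps."""
    best = list(seq)
    for i in range(len(seq)):
        for j in range(i + 1, len(seq)):
            cand = list(seq)
            cand[i], cand[j] = cand[j], cand[i]
            if cost(cand) < cost(best):
                best = cand
    return best

def shake(seq, rng):
    """Perturb the incumbent by swapping two random positions."""
    cand = list(seq)
    i, j = rng.sample(range(len(cand)), 2)
    cand[i], cand[j] = cand[j], cand[i]
    return cand

def vns(initial, cost, neighborhoods, max_iter=50, seed=0):
    """Basic VNS: descend through the neighborhood list, restarting from the
    first neighborhood after each improvement; shake before each restart."""
    rng = random.Random(seed)
    best, best_cost = list(initial), cost(initial)
    for _ in range(max_iter):
        current = shake(best, rng)
        k = 0
        while k < len(neighborhoods):
            cand = neighborhoods[k](current, cost)
            if cost(cand) < cost(current):
                current, k = cand, 0   # improvement: restart neighborhood list
            else:
                k += 1                 # no improvement: try next neighborhood
        if cost(current) < best_cost:
            best, best_cost = current, cost(current)
    return best, best_cost
```

With a schedule-evaluation function (e.g. the total weighted tardiness of a job sequence) supplied as `cost`, the same loop applies directly to job permutations.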


Analyzing Financial Time Series Using Monte Carlo Bayesian Approach

Dr. Jae J. Lee, State University of New York, New Paltz, NY




This paper explains how to analyze financial time series data using Bayesian inference with a Markov Chain Monte Carlo (MCMC) algorithm. Many business and economic time series are parsimoniously modeled by the Autoregressive Integrated Moving Average (ARIMA) model. Bayesian inference provides a systematic way to incorporate the researcher’s prior knowledge in the analysis of data and a sequential way to update the analysis given new data. Rather than the repeated-sampling paradigm, its paradigm is to treat the unknown entities as a random vector and to derive a posterior probability density for that random vector. Summaries of the random vector are usually based on random draws from the posterior probability density. An MCMC algorithm helps generate random draws from a posterior probability density that does not have an analytical form from which random draws are easily obtained. In this paper, several ARIMA models are estimated using simulated data. The prior density and posterior density of the parameters of each ARIMA model are obtained by Bayesian inference.  A random-walk Metropolis-Hastings algorithm is used to generate random draws from the posterior density. The random draws are used to summarize the characteristics of the parameters of the ARIMA models. Some convergence diagnostics for the MCMC approach are discussed.  A business and economics time series is stationary if the joint distribution of the series is not affected by a change of time origin. If a time series shows a stationary pattern, an autoregressive (AR), moving average (MA) or mixed (ARMA) model is very useful for modeling the stochastic structure that generates the series. However, many business and economics time series do not show a stationary pattern. A particular nonstationary pattern is homogeneous nonstationarity, in which the series is homogeneous except in level and/or slope. Such behavior can be modeled using the autoregressive integrated moving average (ARIMA) model. 
ARIMA is a stochastic model for which the exponentially weighted moving average forecast yields the minimum mean square error (Box et al., 1994). Homogeneous nonstationarity is removed by taking some differences of the time series data. Bayesian inference is conditional on prior knowledge about the unknown entities and the observed data. It provides a systematic way to incorporate the researcher’s prior knowledge in the analysis of the data. Once new data are observed, it provides a sequential way to update prior beliefs and add the additional information. It also deals naturally with conditioning on and marginalizing over any nuisance variables, and augmenting nuisance variables can speed up computations.  In addition, Bayesian inference accounts for both parameter uncertainty and model uncertainty using the Bayes factor for each model entertained. In the time series context, it provides the predictive distribution of the data that is required for forecasting. The main framework of Bayesian inference is to treat the unknown entities as a random vector of variables and to derive a posterior probability density for the random vector given any source of prior information and the observed data. Inferential summaries of the unknown entities are usually based on random draws from the posterior probability density.  Often, drawing directly from a posterior density is not feasible, since the posterior probability density is not one from which a set of random draws can be generated directly. Many algorithms have been developed to generate random draws from the posterior density. The acceptance-rejection algorithm, the importance sampling algorithm, and sampling importance resampling are examples of non-iterative Monte Carlo algorithms; Markov chain algorithms are examples of iterative Monte Carlo algorithms. Popular iterative Markov chain algorithms are the Gibbs sampler and the Metropolis-Hastings (MH) algorithm. 
In extremely high dimensional problems, it may be difficult for non-iterative methods to find a required density that is close to the posterior density. Iterative Markov chain algorithms are more flexible and remain feasible for high dimensional problems. The main framework of an iterative Markov chain method is to set up a Markov chain whose stationary distribution is the posterior probability density. After discarding some burn-in portion of the chain, the rest is used for summaries of the posterior probability density. The Gibbs sampler uses the full conditional densities of the unknown variables, while MH uses a proposal density from which random draws are generated; each generated draw is accepted with a probability based on where the draw is located in the posterior probability density. The Gibbs sampler is a special case of the MH algorithm in which draws from the full conditional densities are accepted with probability one.  In the second section of the paper, several ARIMA models are discussed together with their conditional likelihood functions; it is noted that the AR, MA and ARMA models are special cases of the ARIMA model. In the third section, implementation of Bayesian methods for ARIMA is discussed. In the fourth section, implementation of a random walk MH algorithm for ARIMA is discussed. In the last section, a set of simulated time series is used to show how to implement Bayesian inference and a random walk MH algorithm for each ARIMA model, and some issues of convergence diagnostics for the MCMC algorithm are discussed. 
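As an illustrative sketch of the approach described above (not the authors' code), the following fits an AR(1) model to simulated data with a random walk Metropolis-Hastings sampler, using the conditional likelihood and a flat prior on the stationarity region; the step size, sample sizes and burn-in length are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(1) series: y_t = phi * y_{t-1} + e_t, e_t ~ N(0, 1)
phi_true = 0.6
n = 500
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi_true * y[t - 1] + rng.normal()

def log_posterior(phi, y):
    """Conditional log-likelihood of AR(1) (conditioning on y_0)
    plus a flat prior on (-1, 1) enforcing stationarity."""
    if abs(phi) >= 1.0:
        return -np.inf
    resid = y[1:] - phi * y[:-1]
    return -0.5 * np.sum(resid ** 2)  # unit error variance assumed known

# Random walk Metropolis-Hastings
n_iter, step = 5000, 0.05
draws = np.empty(n_iter)
phi = 0.0
lp = log_posterior(phi, y)
accepted = 0
for i in range(n_iter):
    prop = phi + step * rng.normal()          # symmetric random walk proposal
    lp_prop = log_posterior(prop, y)
    if np.log(rng.uniform()) < lp_prop - lp:  # MH acceptance ratio
        phi, lp = prop, lp_prop
        accepted += 1
    draws[i] = phi

burn = 1000                                   # discard burn-in portion
post = draws[burn:]
print(f"acceptance rate: {accepted / n_iter:.2f}")
print(f"posterior mean of phi: {post.mean():.3f}")
```

The retained draws `post` play the role of the posterior summaries discussed in the paper; trace plots and acceptance rates of such chains are typical convergence diagnostics.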


Development of Marketing Capabilities Along the Life Cycle of the Firm

Katharina Buttenberg, University of Latvia



Marketing capabilities have gained a lot of interest in the resource-based theory literature in the last decade. Customer- and brand-oriented marketing capabilities have been identified as key capabilities for business performance. Therefore, these capabilities have to be acquired and developed at a very early stage in the firm. The purpose of this paper is to identify the specific challenges firms face in the development of capabilities, specifically marketing capabilities, during their life cycle. The approach is a literature review. For the analysis, the author draws on the literature of the resource-based theory for marketing capabilities and on life cycle theory. The key findings are that young firms have to establish marketing capabilities in order to be successful in terms of business performance and later on have to develop these capabilities further. Since the development of capabilities in young firms is very often an unstructured process, the practical implication of this paper is that a structured process for the development of marketing capabilities should be established to ensure successful future development. This is a theoretical paper and includes the findings of the literature analysis of the resource-based theory on marketing capabilities in connection to business performance and of the life cycle theory on capability development, as well as findings and suggestions for future steps in empirical research. The resource-based theory (RBT) is based on the theoretical approach that a firm can gain competitive advantage by acquiring a unique set of resources (Barney, 1991). Amit and Schoemaker evolved the concept of resources by introducing capabilities, which are firm-specific processes developed over time (Amit & Schoemaker, 1993, p. 35). To develop these capabilities and benefit from their full potential, firms and their managers must carefully pick, manage, monitor and sometimes shed them (Sirmon & Hitt, 2003, pp. 344–348).
During the company life cycle, different capabilities need to be developed to create sustainable competitive advantage (Helfat & Peteraf, 2003, p. 1000). Especially in the first ten years, when capabilities need to be grouped and assigned and objectives need to be set, the acquisition and development of the main capabilities is crucial (e.g. Miller & Friesen, 1984, pp. 1162–1163). These capabilities also include marketing capabilities, which have grown in interest in the resource-based theory literature (Kozlenkova, Samaha, & Palmatier, 2014, p. 1). In the last century, the role of marketing has changed from a transaction-based formative discipline to a brand-based approach (Vargo & Lusch, 2004, pp. 2–8). This new role includes the relationship between the inside-out (brand-oriented) view and the outside-in (consumer-oriented) view. Since marketing capabilities hold a central position in the firm and are central to its performance, they are important to develop in the early stages of the firm, but also in later development (e.g. Kozlenkova et al., 2014, pp. 2–4). Therefore, a closer investigation of the development of marketing capabilities during the life cycle of the firm is warranted.  As mentioned above, there has been a paradigm shift in marketing: the previous sole focus on the customer has shifted to a focus on the brand as the center of marketing (Urde, Baumgarth, & Merrilees, 2013, p. 14). However, customer orientation remains crucial for the development of a profitable enterprise (Deshpandé, Farley, & Webster, 1993, p. 27). Firms therefore face the challenge of incorporating and integrating the customer-oriented and the brand-oriented view to provide strong, sustainable economic value, which is typically half of the market capitalization of a firm (Kotler, 2009, p. 446). Therefore, even young organizations need to develop marketing capabilities enabling them to support both views. 
"The ultimate goal of the marketing function within a firm can be defined as increasing the value of the market-based assets of the firm." (Shervani, 2010, p. 1) To fully benefit from brands, it is important to understand the sources and effects of market-based assets as well as their change over time (Kotler, 2009, p. 446). There are two strategic views in management which need to be attended to, especially in marketing and branding, where the different orientations are especially apparent. The brand-oriented as well as the market-oriented approach are based on different underlying cultural beliefs and norms, which lead to different behaviors and consequently different measurements (Baumgarth, Merrilees, & Urde, 2011, pp. 8–12). Whereas the inside-out view mainly emphasizes adherence to firm-based marketing and branding guidelines, the outside-in view focuses on the relationship with the customer. Consequently, the strategic focus in the inside-out view is on the identification and pursuit of growth opportunities for the firm, and in the outside-in view on the generated financial value (Keller, 2008, pp. 84–87). To create powerful brands that provide sustained competitive advantages, this “gap” in views needs to be bridged (Day, 2011, p. 187). To be successful, the organization has to develop capabilities that support both customer- and brand-orientation. Capabilities are unique and individually developed by each firm. They range from operational capabilities that are very skill-focused to highly strategic levels. Marketing capabilities are widely spread across all levels and, due to the nature of marketing, they are highly integrated in the firm and closely connected with other capabilities (e.g. Vargo & Lusch, 2004, p. 3). Therefore, most studies combine measuring the impact of marketing capabilities with other strategic capabilities such as management-related topics, technology and IT, or innovation. 
Due to the same fact that marketing is highly complex and integrated, marketing capabilities can be classified in various ways. Looking at the single measurement items of the surveys for the studies, it is clear that the grouping of the items varies from author to author depending on the topic to be analyzed. Many researchers classify them into customer-oriented and brand-oriented capabilities; however, there are certain overlaps between the concepts.  Customer-oriented capabilities are based on the understanding that customers are valuable contributors to the organization. These capabilities focus on the approach to the relationship of the organization with the customer and support the integration of the customer, their needs and opinions into the strategic and functional development of the organization and its products or services. The measurement of the relationship with the customer is also important in the customer-oriented view (e.g. Hult, Ketchen, Jr., & Slater, 2005, p. 1180).


How Idol Admiration Affects Audience's Willingness to Watch Broadcasts of Japanese Professional Baseball Games: A Case Study of Taiwanese Baseball Players in Japan

Dr. Yu-Chih Lo, National Chin-Yi University of Technology, Taiwan

Dr. Tu-Kuang Ho, Taiwan Hospitality & Tourism University, Taiwan



Professional baseball has been very popular in Taiwan. As more Taiwanese baseball players are scouted and signed by overseas professional baseball organizations, the overseas leagues with Taiwanese players have attracted larger audiences in Taiwan. The study aimed at exploring Taiwanese baseball audiences' willingness to watch broadcast Nippon Professional Baseball (NPB) games, their subjective norms, perceived behavioral control, and idol admiration, and these factors' effects on behavioral intention. The researchers utilized purposive sampling and administered 310 questionnaires in total. After filtering out 10 invalid questionnaires, the study retained 300 valid questionnaires, yielding a 96.8 percent response rate. In terms of data analysis, the researchers first processed demographic variables with descriptive statistics in SPSS 20.0, followed by multivariate analysis and model validation in AMOS 20.0, where both the measurement model and the structural model were analyzed. The results found, firstly, that audiences' willingness to watch broadcast NPB games, perceived behavioral control, and idol admiration have a significant influence on behavioral intention, whereas subjective norms toward broadcast NPB games showed no significant impact on behavioral intention. In conclusion, based on the findings, the researchers made recommendations for future studies on idol admiration and spectator behavior in sports.  In recent years, sports activities have become increasingly professionalized. Famous baseball players from Taiwan have been recognized and valued by baseball teams in Japan. At present, in the current regular season of Japanese professional baseball (Nippon Professional Baseball, NPB), a total of seven Taiwanese baseball players are on the rosters of various teams. Broadcasting companies in Taiwan have also purchased broadcast rights from these teams to broadcast the NPB games. 
The professional baseball league in Taiwan and various professional baseball teams have to consider how they can use mass media (such as television and Internet broadcasting) to enhance idol admiration, increase their profit and attract more sports fans to watch the games of this professional sport. Against such a background, this paper studied the audiences of professional baseball games in both Taiwan and Japan and applied Ajzen's (1985; 1991) Theory of Planned Behavior to explore how sports fans’ idolization of popular baseball players influences their intention to watch professional baseball games. It is hoped that the findings of this study can be used by governmental agencies, baseball leagues and baseball teams as reference in future decision-making processes to help them draft policies on professional sports and in choosing marketing strategies.  The Theory of Planned Behavior (TPB) (Ajzen, 1985; 1991) was developed based on the Theory of Reasoned Action (TRA) (Fishbein & Ajzen, 1975). TRA suggests that a person’s behavior is influenced by rational considerations. Building on the expectancy-value model proposed previously by the researchers, TRA points out that a person’s behavior is determined by his/her intention to perform a certain behavior. TRA can reflect the person’s intention and expectancy to perform a specific behavior, and it can also be used to predict if the person will in fact perform the behavior. Fishbein and Ajzen (1975) also point out two factors for the formation of behavioral intention: an individual’s attitude towards a certain behavior and the relevant subjective norms formed under social pressure. A person’s like or dislike of the expected result of a certain behavior is understood as the person’s attitude towards the subject behavior. On the other hand, social approval or disapproval of an individual’s behavior is considered the subjective norm; the higher the subjective norm, the higher the behavioral intention. 
Conversely, the lower the subjective norm, the lower the behavioral intention.  TRA assumes that human beings are rational and that their behaviors are controlled by individual consciousness. This concept can be applied widely in various studies on behavior (e.g., Mishra et al., 2014; Schwarzer, 2014; Comello, 2015; Hamari & Koivisto, 2015; Lee et al., 2016; Paul et al., 2016). However, in real life, many obstacles may prevent a person from engaging in an intended behavior; for instance, the person may lack the required resources or opportunity (these may include temporal, monetary or skill-related obstacles). Such required conditions suggest that a person's intention does not completely control his/her behavior. Hence, Ajzen (1985) expanded the original TRA and proposed TPB, which can better predict and explain real behavior: in addition to attitude and subjective norm, TPB adds the element of perceived behavioral control. Perceived behavioral control reflects an individual's perception about whether or not he/she has enough control, or can acquire the necessary resources, to carry out certain tasks. It also reflects the person's perceived self-efficacy when performing the behavior, and it directly influences the formation of behavioral intention and actual behavior (Ajzen, 2002). In contrast, when a person believes that he/she lacks the requisite ability, resources or opportunity to engage in a certain behavior, the intention and the likelihood of performing the behavior will be reduced. Many scholars also support the idea that a person's behavior is not completely controlled by conscious plans, so TPB is believed to be better able to predict a person's behavior (Ajzen & Madden, 1986; Sheeran & Orbell, 1999; Sheeran et al., 2002; Hagger et al., 2002; Eves et al., 2003). 
In summary, TPB is known to better predict a person’s intention and behavior, and studies have shown that the concept of TPB can be applied to exploring various behavioral developments, such as sports behavior (Mistry et al., 2015; Chan et al., 2015; Nigg & Durand, 2016), travel behavior (Moghtaderi et al., 2015; Tang et al., 2015; Lo et al., 2016) and healthful eating behavior (Brouwer & Mosack, 2015; Chan et al., 2016; Jun & Arendt, 2016). Based on TPB, this study proposes the following hypotheses:


Entrepreneurship, Innovation and Organic Growth within Vertical Software Firms

James Simak, Jacksonville University, Florida

Steven T. Kelley, Jacksonville University, Florida

Dr. Vikas Agrawal, Jacksonville University, Florida



Growth in competitive industries is often pursued through mergers, acquisitions and consolidations, frequently with less-than-desirable lasting results. However, a larger balance sheet or increased revenues are initially certain, providing organizations with confidence that growth objectives will be met.  In contrast, organic growth is abstract and uncertain, historically pursued through development of strategic competitive advantage via superior marketing efforts that refine or redefine product, place, price and promotion to gain market share.  As an alternative, a growing number of theories and models have been developed around the importance of risk taking through entrepreneurship and innovation as the principal method of achieving long-term, sustainable growth.  This study investigates factors identified by senior managers as contributing to entrepreneurship and innovation within diversified, established vertical software firms and tests hypotheses relating such factors to the growth and success of the firm.  Further, this study attempts to determine whether sustained organic growth of the firm must include innovation and entrepreneurship as fundamental competencies.  Interviews were conducted with senior leaders responsible for overall business unit results, including sales and marketing, operations, product development and competitive strategy, in niche software vertical markets.  Confirmatory empirical data and research findings are presented that test hypotheses about the underlying relationships of key factors as drivers of or barriers to innovation and related organic growth within the firm.  Drucker (1954) declared that business has only two basic functions: marketing and innovation.  This perspective of innovation as a critical business function has endured for more than sixty years and suggests that entrepreneurship and innovation provide competitive advantage for growing and sustaining a business (Crossan & Apaydin, 2010).   
Porter (1996) further advocates that organizations must innovate to be competitive over the long term. Technology-oriented firms, including vertical software suppliers, compete in rapidly changing environments with competitive forces requiring product enhancement and new product development to maintain market share and sustain growth.  The problem explored by this research centers on growth of the firm achieved organically through entrepreneurship and innovation, forgoing reliance on mergers and acquisitions for the development of new products, territories and clients.  The latent variables of innovation and entrepreneurship have been the subject of extensive research covering a wide range of disciplines, including economic, psychological, social, cultural, and organizational perspectives.  This exploratory research is limited to considering innovation and entrepreneurship related to expected economic benefits within established and ongoing business enterprises, specifically established software firms. Schumpeter defined innovation simply as “doing things differently” and stressed the importance of novelty in terms of products and processes within the firm (Tzeng, 2009).  Burgelman (1983) defined “internal corporate venturing” as the creation of new businesses within an established firm.  Damanpour (1987) categorized innovation as being a radical, incremental, product, process, administrative or technical endeavor. This study follows Damanpour's guidance on innovation as a collection of processes to generate and adopt new ideas, a distinct phenomenon facilitated by organizational conditions (Damanpour, 1991).  We add to this definition the necessity that such activities are undertaken for the purpose of furthering economic benefits to the firm.  
This study assesses the model depicted in Figure 1, whereby the antecedents of market and entrepreneurial orientation enable the firm to successfully execute the mediating processes of innovation discovery (ID) and innovation exploitation (IE) to achieve desired performance as measured by growth, profitability, market share and valuation.  These components are moderated by descriptive variables such as firm size, structure and age, in addition to the availability of slack resources and deployment of invested capital.  However, due to limitations in survey implementation and data collection, analysis of such moderator variables is left for future studies. Rather, focus is placed on factors found in the antecedents, specifically entrepreneurial orientation and its relationship with innovative capabilities (ID and IE), as will be discussed further herein.  Understanding the needs of customers requires strong market-sensing and customer-relating capabilities (Day, 1994).  Kohli and Jaworski (1990) defined market orientation (MO) from a behavioral perspective as the firm-wide generation of market intelligence pertaining to customer needs, dissemination of that intelligence, and organization-wide responsiveness to it. Narver and Slater (1990) expand the market orientation construct as encompassing customer orientation, competitor orientation and inter-functional coordination.  Incorporating market orientation as an antecedent to innovation activities provides a mechanism for organizations to adapt in dynamic environments and sustain a competitive advantage (Hurley & Hult, 1998; Han, Kim, & Srivastava, 1998).  Jaworski and Kohli further suggest that market orientation is an antecedent to innovation, and substantial research supports this foundational relationship between market orientation and innovation (Hurley & Hult, 1998; Narver & Slater, 1990).  This complements our conceptualization of entrepreneurial orientation (EO) and is supported by the findings of Matsuno et al. 
(2002) that entrepreneurship within the firm results in a greater level of market orientation, and suggests firms strive to be both entrepreneurial and market oriented in the pursuit of growth.


Effect of Deferred Tax Reporting – Case of Publicly Traded Companies in Czech Republic

Dr. Hana Bohusova, Mendel University in Brno, Czech Republic

Dr. Patrik Svoboda, Mendel University in Brno, Czech Republic



The reporting of deferred tax is an instrument for regulating distributable profit or loss in the form of an accrual or a deferral. Research aimed at deferred tax in European companies is very limited; the majority of studies on this issue concern firms incorporated in the USA and cover a period beginning in 1994. The contribution to the current research is that this study concerns non-US companies reporting according to IFRS. The structure of the deferred tax category of publicly traded joint-stock companies in the Czech Republic and its impact on financial analysis ratios are the subjects of the research. According to information from the Prague Stock Exchange (2016), there were in total 24 publicly traded companies trading their stocks on the Prague Stock Exchange in the researched period. The financial institutions (5) were excluded from the research, and an additional 5 companies were excluded due to incomplete information. The research builds on the results of the authors' previous research. The processed data were obtained from the annual reports of the companies.  The materiality of the deferred tax category within our sample was examined and details on the most significant components of temporary differences were presented. The relation between deferred tax expense and total corporate income tax expense in the period, and the relations between deferred tax changes and EBIT and EAT, were tested.  According to CreditRiskMonitor (2016), there are 73,458 parent entities traded on regulated capital markets around the world. They cover $49 trillion of revenue worldwide, which represents 70% of world GDP. Given the importance of financial information provided to external users (mainly to investors and providers of financial resources), it is necessary to present such information in a fair view. To meet these requirements, reporting in accordance with a generally accepted financial reporting system - US GAAP or IFRS - is necessary. 
Regulation (EC) No. 1606/2002 requires publicly traded companies in the EU governed by the law of a Member State, under certain conditions, to prepare their consolidated accounts in conformity with International Financial Reporting Standards for each financial year starting on or after 1 January 2005.  These companies represent less than 1% of the total number of companies operating on the Internal Market. Despite this fact, they represent 33.5% of jobs in business entities and, according to EC (2013), contribute to the indicator Value Added at Factor Costs. It is quite obvious that listed companies represent a significant share of corporate tax bases, contributing corporate income tax to the state budget. On the other hand, publicly traded companies represent a significant possibility for investment. True and fair information on financial position and performance is demanded by both current and potential investors. Since this information is provided by financial statements, it is necessary to take into account the relationship between financial reporting and income tax rules, which differ across countries. This means that the gross profit or loss reported to users of financial statements can differ from the corporate tax base due to different rules in individual countries. To measure the relation between corporate taxation rules and accounting rules, it is necessary to investigate their objectives. While the aim of accounting, and consequently of financial reporting, is concentrated on fair reporting to users (i.e., financial results must not be overestimated), the aim of taxation is to collect taxes (i.e., to ensure revenue for the state budget). A number of studies concerning the relationship between taxation and financial reporting can be found (e.g. Walton, 1992; Nobes & Parker, 2010; Doupnik & Salter, 1995; Hoogendoorn, 1996; Lamb, Nobes & Roberts, 1998; Blake, Fortes, Amat & Akerfeldt, 1997; Aisbitt, 2002 - Nordic countries). 
The relationship between taxation and financial reporting in the conditions of the Czech Republic was measured by Nerudová (2009).  Two types of differences between the profit or loss reported in financial statements and the tax base can be identified - temporary and permanent. The effect of permanent differences (in the form of a reduction or increase of taxable income compared with reported income) is definitive. Temporary differences give rise to an accounting category called deferred tax.  In accordance with EC Regulation No. 1606/2002, publicly traded companies are obliged to report deferred taxes (deferred tax assets or liabilities). The probability and timing of realization of deferred tax assets (DTA) and deferred tax liabilities (DTL) are estimated for the most accurate deferred tax reporting.  Reporting in accordance with the deferred tax model is subject to challenge in several areas of research, and the issue of deferred tax has been researched from various aspects. The majority of studies deal with the relationship of tax and accounting rules for income measurement (Freedman, 2004; Freedman & MacDonald, 2007).  The topic of deferred tax is the subject of IAS 12 in IFRS and ASC 740 in US GAAP. According to IAS 12, temporary differences are differences between the carrying amount of an asset or liability in the statement of financial position and its tax base; the tax base of an asset or liability is the amount attributed to that asset or liability for tax purposes. The reporting of deferred tax represents an instrument for regulating distributable profit or loss in the form of an accrual or a deferral: in a period of lower payable income tax, the company postpones part of the reported profit in the form of a deferred tax liability, and in a period of higher payable income tax, the company increases the reported profit by creating a deferred tax asset or using a deferred tax liability.  
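The mechanics of a temporary difference under IAS 12 can be shown with a small worked example. The figures and the 19% tax rate below are illustrative assumptions for the sketch, not data from the study.

```python
# Illustrative deferred tax computation in the spirit of IAS 12.
# A temporary difference is the gap between an asset's carrying amount
# in the statement of financial position and its tax base; the deferred
# tax balance is that difference multiplied by the applicable tax rate.

def deferred_tax(carrying_amount, tax_base, tax_rate):
    """Return (temporary_difference, balance). By convention here, a
    positive balance is a deferred tax liability (DTL) and a negative
    balance a deferred tax asset (DTA)."""
    diff = carrying_amount - tax_base
    return diff, diff * tax_rate

# Example: machinery depreciated faster for tax purposes than for
# accounting purposes, so the carrying amount exceeds the tax base
# and a deferred tax liability arises (assumed 19% rate).
diff, dtl = deferred_tax(carrying_amount=800_000, tax_base=650_000, tax_rate=0.19)
print(f"temporary difference {diff:,.0f} -> deferred tax liability {dtl:,.0f}")
```

Reversing the relationship (tax base above carrying amount, e.g. a provision not yet deductible) yields a negative balance, i.e. a deferred tax asset.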
According to Vučković-Milutinović and Lukić (2013), various approaches to the level of deferred tax recognition are used in individual reporting systems, ranging from ignoring deferred taxes through partial recognition to full recognition. Each of these approaches has a different effect on the financial statements and consequently provides a different information base for the decision making of the many users of these statements.


Mitigating Risk from Railcar Bearing Failures: A Predictive Model for Identifying Failures

Dr. Vikas Agrawal, Jacksonville University, FL

Kimberly Bynum, Jacksonville University, FL

John Jinkner, Jacksonville University, FL

Frank Lombardo, Jacksonville University, FL



Previous research on accident rates for trains has shown that, when trains are traveling above 25 miles per hour, the main cause of accidents is equipment failure, which to a high degree includes bearing failure. Using data collected from acoustic wayside defect detectors along railroad tracks, statistical analyses were conducted to build a model that predicts the percent probability of bearing failures. This information may be useful for detecting defective bearings before a failure and for creating maintenance schedules based on predicted failure rates to maximize railroad safety and minimize maintenance costs.  Railroad companies use wayside detectors and automated analyzers to identify railcars and associated equipment that exhibit operating parameters warranting repair or replacement. Three United States railroads (CSX, Union Pacific and Norfolk Southern) have partnered to develop the Joint Wayside Diagnostic System (JWDS). Although each railroad operates its own separate portion of the JWDS system, all data are fed into a single database, and this database is available for information exchange between the railroads. Equipment failure and/or car downtime prove expensive for railroad companies. Real-time condition monitoring and reporting provided by JWDS mitigates downtime and accidents, and therefore costs. The system identifies and prioritizes railcar conditions, allowing inspectors to move from finders to fixers by proactively flagging real-time readings rather than waiting until after an equipment failure or derailment occurs. Data mining the JWDS database allows trends and patterns to be discovered early, which may reduce equipment downtime and, in extreme cases, may even save lives.  This paper explores a dataset from the JWDS database and creates a logistic regression model to predict deteriorating railcar equipment. 
Twelve months of data were remotely collected from an acoustic wayside defect detector in which railcar types and various bearing noises had been recorded, analyzed and categorized. Using logistic regression in SAS Enterprise Miner software resulted in the development of an empirical model that identifies the relationships between specific car types and noise components (type and level) to predict the deterioration of bearings.  In the early days of railroading, overheated journals were a major safety issue. Journal boxes (bearings without rolling elements) contained lubrication, which often overheated, resulting in a condition referred to as a “hotbox.” A hotbox could result in a burned-off bearing, which would ultimately lead to a train derailment. Back then, crews at the rear of the train were vigilant in looking for the smoke and smell associated with hotboxes. Modern railroad operations no longer use plain-bearing cars, but instead use their successor, rolling element bearings, which can still be prone to occasional overheating. Likewise, hot wheels, often caused by sticking brakes, also remain a safety concern (McGonical, 2006).  In an effort to mitigate equipment failure due to overheated wheel bearings, wayside defect detectors, first developed in the 1960s, were employed by railroads to monitor railcars. The most common types of wayside detectors are hot box detectors and dragging equipment detectors. A hot box detector measures the temperature of journal bearings on railcars in operation as the car travels past the detector. There are more than 6,000 hot box detectors installed along more than 140,000 miles of railroad network. Implementation of this system has allowed for an efficient and cost-effective method of scheduling maintenance while reducing rail accident rates by more than 20% (AAR, 2015).  Another, newer type of wayside defect detector is the acoustic bearing detector. 
This detector evaluates the sound waves produced by the internal bearings of passing railcars to predict equipment deterioration. These systems can either replace or supplement hot box detectors, which measure heat rather than noise. Evaluating the sound of a rolling bearing allows railroad operators to detect issues long before the equipment actually overheats. One statistic conveys the importance of this technology: between 2001 and 2010, bearing defects caused 3.3% of train derailments, a total of 144 derailments involving 1,157 individual cars (Liu & Barkan, 2012, p. 157).

Rolling element bearings are used in a wide variety of mechanical applications across a wide range of industries. The proper operation of these devices in railcar rolling stock depends, to a great extent, on the smooth and quiet running of the bearing components. The smooth operating condition of bearings is critical: a defect in a bearing component, unless detected in time, may cause equipment failure and/or railcar or train derailments (AAR, 2015). Discontinuities on rotating equipment tend to produce large impact loads on the bearing. As a result, a faulty bearing edge produces periodic impact noise with characteristic acoustic signatures, such as flanging, banging, slamming, or high-pitched tonal noises, in addition to the usual rolling noise, which is more random in character. Bearing surface errors are a serious issue for the safe and efficient operation of the train (Madejski, 2006). Effective defect detection, as well as regular quality inspection and maintenance, is important for monitoring the condition of bearings. Over the years, many methods have been developed to measure heat, vibration, and acoustic responses in defective bearings.
These methods include temperature and vibration measurements, sound intensity and acoustic emission, the shock pulse method, spike energy, sound pressure, spectrographic oil analysis, and chip detection (Tandon & Choudhury, 1999; Papaelias, Huang, Amini, Vallely, Day, Sharma, Kerkyras & Kerkyras, 2014).
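The modeling step the abstract describes can be sketched in miniature. The paper uses logistic regression in SAS Enterprise Miner; the sketch below instead fits a logistic model with plain gradient descent in Python, on invented acoustic features (a normalized noise level and a high-pitch-tone flag), purely to illustrate how such a model turns detector readings into a failure probability. The feature names, data, and hyperparameters are assumptions, not the paper's.

```python
import math

def sigmoid(z):
    """Logistic function: maps a linear score to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Fit intercept + weights by stochastic gradient descent on log-loss."""
    n_feat = len(X[0])
    w = [0.0] * (n_feat + 1)          # w[0] is the intercept
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            err = sigmoid(z) - yi      # gradient of log-loss w.r.t. the score
            w[0] -= lr * err
            for j in range(n_feat):
                w[j + 1] -= lr * err * xi[j]
    return w

def predict_proba(w, xi):
    """Predicted probability that the bearing is deteriorating."""
    return sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))

# Invented training data: [normalized_noise_level, high_pitch_flag] -> defect label
X = [[0.1, 0], [0.2, 0], [0.3, 0], [0.8, 1], [0.9, 1], [0.7, 1]]
y = [0, 0, 0, 1, 1, 1]
w = fit_logistic(X, y)
print(predict_proba(w, [0.85, 1]))    # loud, high-pitched bearing: well above 0.5
print(predict_proba(w, [0.15, 0]))    # quiet bearing: well below 0.5
```

The same idea scales to the paper's setting by adding car-type indicator variables and noise-component levels as features; in practice one would use a library (or, as the authors did, SAS Enterprise Miner) rather than hand-rolled gradient descent.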


Market Reactions to the PricewaterhouseCoopers Merger

Chiawen Liu, National Taiwan University, Taiwan

Taychang Wang, National Taiwan University, Taiwan

Wan-Ting (Alexandra) Wu, University of Massachusetts Boston, MA



This paper examines the market reactions to the 1997 merger of Coopers & Lybrand (CL) and Price Waterhouse (PW). The results show that, when the merger plan was announced, there were no significant abnormal returns for CL clients, PW clients, or clients of both accounting firms. Further analyses show that market reactions to the merger plan do not differ across firms with varying monitoring demand. Although the monitoring hypothesis is rejected, we find evidence consistent with the insurance hypothesis: financially distressed clients show more positive abnormal returns around the announcement date than financially healthy clients. These results imply that investors in a financially distressed client expect greater benefits from the merger of its accounting firm, which enhances the auditors' insurance role against corporate failure.

Mergers and acquisitions have long been a corporate strategy to expand market share or improve company performance, and the accounting profession is no exception. In 1989, Ernst & Young was formed by the merger of Ernst & Whinney and Arthur Young. In the same year, Deloitte, Haskins & Sells and Touche Ross merged to become Deloitte & Touche. In this merger wave, the Big 8 shrank to the Big 6. On September 18, 1997, Coopers & Lybrand (CL) and Price Waterhouse (PW), at the time the fifth- and sixth-largest accounting firms in the U.S., announced plans to merge into the world's largest accounting firm, with combined annual fees of $11.8 billion worldwide in 1996 and about 135,000 employees across the globe. The completion of this merger created PricewaterhouseCoopers (PwC) on July 1, 1998 and further reduced the Big 6 accounting firms to the Big 5. The purpose of this paper is to study the market reactions to the announcement of the CL and PW merger plan. More importantly, we examine how the market reaction ties to clients' monitoring and insurance demands.
Studies of audit clients' stock price reactions have focused on negative events at accounting firms and, more often than not, find negative effects on clients' stock prices. For example, Chaney and Philipich (2002) and Krishnamurthy et al. (2002) investigate the impact of Andersen's audit failure at Enron on Andersen's non-Enron clients. Menon and Williams (1994) and Baber et al. (1995) examine the effect of the Laventhol & Horwath bankruptcy on its clients. Franz et al. (1998) study the impact of litigation against audit firms on the firms' non-litigating clients. In contrast to these studies, our paper examines the market reaction to the merger of two Big 6 accounting firms, which is normally considered a positive event. Since investors react asymmetrically to good news and bad news (McQueen et al. 1996), it is not clear whether prior studies' findings on negative events can simply be inverted for a positive event.

We rely on the monitoring hypothesis and the insurance hypothesis (Wallace 1980) to predict the market responses to the announcement of the CL and PW merger plan. Under the monitoring hypothesis, if audit quality increases after the merger, as the merging firms usually claim, clients should receive more effective monitoring from auditors. Thus, auditees' stock prices will respond positively to the merger announcement if stockholders expect future monitoring to be enhanced and raise their valuation of the auditees accordingly. Under the insurance hypothesis, the merger increases the accounting firm's funds available to settle litigation arising from audit failures. Since a stock price is the present value of expected future cash flows, greater indemnity available from auditors in the event of an audit failure implies a higher stock price. Both hypotheses suggest that, ceteris paribus, the merger of accounting firms has positive effects on their clients' stock prices.
We further predict that the magnitude of the stock reaction is greater for clients with a higher demand for the monitoring or insurance functions of auditors. The results indicate that, when the merger plan was announced, no significant abnormal returns were observed for CL clients, PW clients, or the clients of both accounting firms as a portfolio. While the difference between the market reactions of rapidly growing and slowly growing clients is not significant, we find that financially distressed clients experience significantly positive abnormal returns around the event date, whereas financially healthy clients do not. The multivariate analyses show that the abnormal return is significantly related to the client's financial condition but not to sales growth. Overall, the results imply that the benefits of this merger come mainly from the increase in funds available to settle litigation when an audit failure occurs, rather than from higher audit quality.

This paper contributes to the extant literature in several ways. First, it extends our understanding of the impact of a positive auditor-related event on auditees, in stark contrast to previous research that focuses on negative events such as the bankruptcy of accounting firms (e.g., Baber et al. 1995). Second, it provides evidence on clients' stock reactions to the disclosure of the merger plan and relates those reactions to clients' monitoring and insurance demand. Our results complement prior studies that examine the impact of accounting firm mergers from different angles (e.g., Thavapalan et al. 2002, which examines the levels of auditor concentration before and after the merger).

We develop two hypotheses to predict market reactions to the CL and PW merger: the monitoring hypothesis and the insurance hypothesis.
Managers do not always act in the best interests of shareholders, but an effective monitoring mechanism such as auditing can curtail managers' desire to pursue their own benefit at stockholders' expense (Jensen and Meckling 1976).
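The abnormal-return measurement this study relies on can be sketched concretely. A standard event-study approach (the market model; the abstract does not state which model the authors used, so this is an illustrative assumption) regresses a client's returns on market returns over an estimation window, then sums the residual "abnormal" returns over the event window around the announcement. All return figures below are invented.

```python
def ols(x, y):
    """Least-squares fit of y = a + b*x (the market model R_i = a + b*R_m)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b              # intercept (alpha), slope (beta)

def cumulative_abnormal_return(stock_est, mkt_est, stock_evt, mkt_evt):
    """Estimate alpha/beta on the estimation window, then sum the
    abnormal returns AR_t = R_t - (alpha + beta * R_m,t) over the event window."""
    a, b = ols(mkt_est, stock_est)
    return sum(r - (a + b * rm) for r, rm in zip(stock_evt, mkt_evt))

# Invented daily returns: an estimation window, then a 3-day event window.
mkt_est   = [0.010, -0.020, 0.005, 0.030, -0.010]
stock_est = [0.002 + 1.2 * m for m in mkt_est]   # alpha=0.002, beta=1.2 by construction
mkt_evt   = [0.010, 0.000, -0.010]
stock_evt = [0.024, 0.007, -0.007]               # excess returns of 0.010, 0.005, 0.003

print(cumulative_abnormal_return(stock_est, mkt_est, stock_evt, mkt_evt))  # CAR = 0.018
```

In the paper's design, such CARs would be computed for each CL or PW client around September 18, 1997 and then compared across groups (e.g., financially distressed versus healthy clients) to test the monitoring and insurance hypotheses.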
