MAIN Model for Determining Technological Affordances and Credibility in Social Media

Abstract

The primary purpose of this study was to develop a technological affordance scale based on the constructs identified by the MAIN Model. The manuscript presents a 12-item scale for the individual technological affordances and a 2-item scale for the perceived credibility of social media platforms. Three social media platforms (Facebook, Twitter, and Snapchat) were evaluated for perceived credibility and for the four technological affordances: Modality, Agency, Interactivity, and Navigability. Composite scales were developed for platform credibility and platform technological affordance across the three platforms. The manuscript concludes that Agency played a moderate role in predicting credibility across all three social media platforms, while the other constructs of the MAIN model were not significant predictors of credibility.

Keywords: MAIN model, technological affordances, digital media, credibility, perceptions

Digital advertising is a billion-dollar industry that continues to grow due to mobile device and Internet adoption. Social media is an increasingly important contributor to the growth of advertising expenditures. A prominent issue facing advertising scholars, marketers, and publishers is source credibility in online environments. The overall ambiguous nature of Internet-based communication has left researchers, marketers, and consumers with little to measure in terms of source, platform, or feature credibility. Specifically, as social media modalities gain strength in strategic communication efforts, researchers will seek to uncover the affordances, features, and credibility within an online environment while developing theory that helps explain the complexities of source credibility online. Source credibility is an important factor because it can lead consumers toward product, service, or message consumption, with economic implications for organizations, businesses, and enterprises.

With the number of sources, platforms, and advertisers growing each day, credibility remains a complex set of variables to measure in an online environment. This research explores the utility of the MAIN Model (Sundar, 2008) for measuring the technological affordances and heuristic cues available in online social media platforms, and how the model coalesces with a new breed of credibility measures. The MAIN Model is a framework for evaluating the modality, agency, interactivity, and navigability of a message or platform. No current research or scholarly work has attempted to evaluate the technological affordances and the accuracy of the MAIN Model for explaining the credibility of messaging in an online environment. This research attempts to evaluate the MAIN model beyond its conceptual proposition and to identify empirical evidence for its claim that increases in technological affordances help predict credibility in online environments.

The study explores the MAIN Model for assessing the perceived technological affordances available on Facebook, Twitter, and Snapchat. Additionally, participants provided their perceptions of the credibility associated with each social media platform. This assessment is used to determine and qualify perceptions of credibility based on two dimensions: expertise and trustworthiness (McCroskey et al., 1974). This research proposes a psychometric scale for measuring the MAIN model across social media platforms and other web-based properties. Once this scale is proposed and tested, analysis can determine the predictive power of the MAIN model and its constructs. Identifying features that predict credibility for social media platforms has profound implications for future research, media buyers, marketers, advertisers, and scholars.

In a review of the literature, the MAIN model was used as a supportive instrument across the majority of studies. In-depth analysis has been conducted on elements contained within the MAIN model, such as heuristic cues and individual affordances (Lee & Sundar, 2013; Kim & Sundar, 2015; Kim & Sundar, 2011), but an evaluation of the MAIN model as an aggregate or composite construct for predicting credibility was not found in the review.

Across this work, research questions, hypotheses, and arguments relied on the MAIN model as a surface-level argument for heuristic cues identified in digital media platforms and technologies, such as websites and social media platforms. The MAIN model as a construct for identifying heuristics, cues, and affordances was not itself evaluated, measured, or tested. Overwhelmingly, authors relied on the heuristics identified by Sundar (2008) to formulate a research question or hypothesis and, in some cases, to tie it back to the results. For example, Lee and Sundar (2013) utilized the heuristic properties associated with the MAIN model taxonomy to evaluate the bandwagon cue and the authority cue. What is not evident in the research is a consistent representation of heuristics and taxonomy for evaluating digital media and technological affordances, nor the use of the MAIN model in its entirety to evaluate or predict credibility in digital media properties. The taxonomy seems subject to the interpretation of each researcher, who identifies the most convenient heuristic that best meets the needs of their research question or digital media platform.
The current research was guided by the following questions:
RQ1a: Can reliable measures of each of the four technological affordances associated with the MAIN model be developed?

RQ1b: Do the psychometric properties of the technological affordance scales vary as a function of social media platform?
RQ2a: How do the MAIN model and its related technological affordances predict the perceived credibility of Twitter?
RQ2b: How do the MAIN model and its related technological affordances predict the perceived credibility of Facebook?
RQ2c: How do the MAIN model and its related technological affordances predict the perceived credibility of Snapchat?

Method

Respondents completed an instrument that included demographic questions. An additional instrument evaluated each participant's social media usage and familiarity with social media platforms. From this social media utilization instrument, channels were selected based on the usage and popularity scores reported by participants. Facebook, Twitter, Instagram, and Snapchat, respectively, were the most utilized social media platforms across the sample. Instagram was not included in the study due to measurement and validity issues. Utilizing the MAIN Model of technological affordances, a repeated measures instrument evaluated perceptions of each unit of the MAIN Model as they exist across the three remaining social media platforms: Twitter, Facebook, and Snapchat. In addition to measuring platform-specific technological features or affordances, perceived platform credibility was evaluated for the same platforms. The credibility instrument included a modified, two-pronged version of McCroskey et al.'s (1974) credibility measures based on expertise and trustworthiness. Using this methodology, participants completed the instrument in reference to their perceptions of the technological affordances and credibility of each social media platform.

The Technological Affordance Scale

The technological affordance scale is a 12-item instrument that asks respondents to report their perceptions of the individual technological affordances available in a specific social media platform. The instrument is multidimensional and was administered across three sections that measured perceived technological affordances and features in Twitter, Facebook, and Snapchat. Each section produced a summary of the perceived technological affordances reported for each platform with respect to the feature affordances described by the MAIN model: Modality, Agency, Interactivity, and Navigability (Sundar, 2008). This MAIN model composite calculation helps the researcher evaluate the perceived technological affordance level of each social media platform. All responses were solicited using a 7-point Likert scale ranging from strongly disagree (1) to strongly agree (7).
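To make the scoring concrete, the sketch below shows one way the sub-scale and composite scores could be computed from raw item responses. This is a hypothetical illustration in Python; the item column names, the use of pandas, and the mapping of items to sub-scales are assumptions for illustration, not the study's actual scoring code.

```python
# Hypothetical sketch of the MAIN composite calculation described above.
# Column names (mod1, agy1, ...) are illustrative placeholders.
import pandas as pd

# Items per sub-scale, matching the 5/2/2/3 structure reported in the paper.
ITEMS = {
    "modality": ["mod1", "mod2", "mod3", "mod4", "mod5"],
    "agency": ["agy1", "agy2"],
    "interactivity": ["int1", "int2"],
    "navigability": ["nav1", "nav2", "nav3"],
}

def composite_scores(responses: pd.DataFrame) -> pd.DataFrame:
    """Average 7-point Likert items into sub-scale scores and a MAIN composite."""
    out = pd.DataFrame(index=responses.index)
    for affordance, cols in ITEMS.items():
        out[affordance] = responses[cols].mean(axis=1)
    all_items = [col for cols in ITEMS.values() for col in cols]
    out["main_composite"] = responses[all_items].mean(axis=1)
    return out
```

The same function would be applied three times, once to the item block for each platform, yielding the per-platform aggregates used in the analyses below.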

Modality

The modality unit is the most structural of the four affordances and the most apparent on an interface. Computer-based media have complicated traditional modalities, which are now collectively described as multimedia (Sundar, 2008). Modality was measured within the 12-item technological affordance scale as a 5-item sub-scale measuring modality perceptions on social media channels. It asked respondents to report their perceptions of the different modality affordances available in social media platforms, such as video, animation, text, images, or multi-modal content (containing all modalities).

Agency

The agency affordance of digital media makes it possible to assign sources to the particular entities in the chain of communication, from the computer itself to one or more online source locations (Sundar, 2008). Agency was measured within the 12-item technological affordance instrument as a 2-item sub-scale measuring agency perceptions on social media channels. Each statement prompted respondents to report their perceptions of the different agency, source, or authority-like qualities in each social media platform.
Interactivity
Interactivity affordances in digital media are capable of cueing a wide variety of cognitive heuristics and are the most distinctive affordances of digital media (Sundar, 2008). The interactivity unit of the MAIN Model was measured within the 12-item technological affordance instrument as a 2-item sub-scale measuring interactivity perceptions on social media channels. Each statement prompted respondents to report their perceptions of interactive features, such as user control and user participation, in each social media platform.
Navigability
The navigability affordance has the dual ability to directly trigger heuristics with different navigational aids on the user interface, as well as to transmit cues through the content that it generates (Sundar, 2008). The navigability unit of the MAIN Model was measured within the 12-item technological affordance instrument as a 3-item sub-scale measuring navigability perceptions on social media channels. Each statement prompted respondents to report their perceptions of navigability features, such as menu items, scaffolding, buttons, and links, in each social media platform.
Participants
Participants were 420 undergraduate students (292 female) enrolled in communication courses at a south-central state university. Respondents' ages ranged from 18 to 43 years (M = 20.22, SD = 2.59). Of the respondents, 113 were freshmen, 77 were sophomores, 121 were juniors, 105 were seniors, and four identified as non-degree-seeking visiting/transient students. The demographic composition was similar to that of the university student population: 344 (81.9%) participants were White, 44 (10.5%) African American, 14 (3.3%) Asian, and the remaining 18 (4.3%) reported another ethnicity. Participants completed the survey for course credit in the College of Communication and Information using the SONA system.
Variables
The dependent variable in the study is credibility, and the independent variable is technological affordance. Technological affordance was measured with the 12-item scale and credibility with the 2-item scale. Respondents qualified for inclusion by completing the survey and being at least 18 years old. The measures were the individual affordances related to the MAIN model and the technological affordance scale aggregated for each social media channel. Additionally, a credibility scale was aggregated for each platform.
The instrument used a repeated measures design and asked the same items for each social media platform. The 12-item technological affordance scale was repeated for Facebook, Twitter, and Snapchat, and the 2-item credibility scale was likewise presented for each platform. This allowed for an aggregate technological affordance measure and an aggregate credibility measure for each social media channel.

Results

After calculating aggregate scores for technological affordance and credibility for each social media platform, Pearson product-moment correlations were computed, along with means, standard deviations, and reliability estimates. Participants indicated their agreement with the items using a 7-point Likert scale ranging from 1 (strongly disagree) to 7 (strongly agree). All items were positively correlated. Table 1 shows the correlation between the two composite scales, perceived social media credibility and perceived technological affordances, for each social media platform.
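For readers who want to reproduce this style of analysis, the sketch below computes the descriptives and the Pearson product-moment correlation for each platform. It assumes a DataFrame `df` in which the per-platform composite columns have already been assembled, for example by applying the composite function sketched earlier to each platform's item block; the column names are assumptions, not the study's variable names.

```python
# Illustrative correlation of affordance and credibility composites per platform.
from scipy.stats import pearsonr

for platform in ["facebook", "twitter", "snapchat"]:
    affordance = df[f"{platform}_affordance_composite"]
    credibility = df[f"{platform}_credibility_composite"]
    r, p = pearsonr(affordance, credibility)
    print(f"{platform}: M = {affordance.mean():.2f}, SD = {affordance.std():.2f}, "
          f"r = {r:.3f}, p = {p:.4f}")
```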

Table 1. Means, standard deviations, internal consistency reliabilities, and correlation coefficients between social media credibility and technological affordances for each social media platform


Regression

A standard multiple regression analysis was performed between the dependent variable (credibility) and the independent variables (the technological affordances for each platform). Analysis was performed using SPSS REGRESSION. The following are the results for social media platform affordances and perceptions of platform credibility.
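As a rough analogue of the SPSS procedure, the Python sketch below fits the same kind of ordinary least squares model with statsmodels; it reports the F statistic, R², adjusted R², and coefficients of the sort the results sections cite. The column names are again hypothetical, and this is a sketch of the general technique rather than the study's actual analysis script.

```python
# Minimal OLS analogue of the SPSS REGRESSION run (hypothetical column names).
import statsmodels.api as sm

predictors = ["modality", "agency", "interactivity", "navigability"]
X = sm.add_constant(df[predictors])  # add the intercept term
y = df["credibility"]

model = sm.OLS(y, X).fit()
print(model.summary())  # F statistic, R-squared, adjusted R-squared, coefficients
```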

Facebook

Regression analysis revealed that the model significantly predicted credibility as a function of agency and navigability for the Facebook platform, F(2, 410) = 33.98, p < .001. R² for the model was .33 and adjusted R² was .14. Table 1 displays the standardized coefficient (β) and unstandardized regression coefficient (b), along with the intercept, for each variable.

Twitter

Regression analysis revealed that the model significantly predicted credibility as a function of agency and navigability for the Twitter platform, F(2, 410) = 25.1, p < .001. R² for the model was .39 and adjusted R² was .151. Table 1 displays the standardized coefficient (β) and unstandardized regression coefficient (b), along with the intercept, for each variable.

 Snapchat

Regression analysis revealed that the model significantly predicted credibility as a function of modality, agency, and interactivity for the Snapchat platform, F(2, 410) = 41.48, p < .001. R² for the model was .238 and adjusted R² was .232. Table 1 displays the standardized coefficient (β) and unstandardized regression coefficient (b), along with the intercept, for each variable.
 

Factor Analysis  

The 12 items used to evaluate the technological affordances identified by the MAIN model were factor analyzed using principal component analysis with varimax (orthogonal) rotation. This analysis addresses RQ1a and RQ1b by confirming the measurement of the factors associated with the MAIN model. Table 2 displays the factor analysis for the Twitter platform; similar loadings were found for Facebook and Snapchat. The two questions relating to the credibility of each social media platform were factor analyzed separately, using principal component analysis with varimax (orthogonal) rotation on an individual, unidimensional basis.
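The sketch below shows the same procedure, principal components followed by a varimax rotation, implemented from scratch in Python so the steps are explicit. The four-component solution and the 12-item input mirror the scale described above; the data matrix `X` is an assumed respondents-by-items array of Likert responses, not the study's data.

```python
# Principal component analysis with varimax (orthogonal) rotation,
# assuming X is an (n_respondents x 12) numpy array of item responses.
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Classic varimax rotation of a p x k loading matrix."""
    p, k = loadings.shape
    rotation = np.eye(k)
    criterion = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        u, s, vt = np.linalg.svd(
            loadings.T @ (rotated ** 3
                          - (gamma / p) * rotated @ np.diag((rotated ** 2).sum(axis=0)))
        )
        rotation = u @ vt
        if criterion != 0.0 and s.sum() < criterion * (1 + tol):
            break  # stop when the rotation criterion has converged
        criterion = s.sum()
    return loadings @ rotation

Z = (X - X.mean(axis=0)) / X.std(axis=0)      # standardize the items
eigvals, eigvecs = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
top = np.argsort(eigvals)[::-1][:4]           # retain four components (M, A, I, N)
loadings = eigvecs[:, top] * np.sqrt(eigvals[top])
rotated_loadings = varimax(loadings)          # loadings comparable to Table 2
```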

Table 2. Factor analysis of MAIN model technological affordances for Twitter


Reliability

A reliability analysis was conducted to determine, per RQ1a and RQ1b, whether the scale provided a reliable measure of each of the four technological affordances (modality, agency, interactivity, and navigability) and of the aggregate credibility scores for each social media channel. Table 3 presents the descriptive statistics (mean and standard deviation) and Cronbach's alpha for the four affordances and credibility across Twitter, Facebook, and Snapchat.
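Cronbach's alpha has a simple closed form, α = (k/(k−1))(1 − Σσ²ᵢ/σ²ₜ), where k is the number of items, σ²ᵢ the variance of item i, and σ²ₜ the variance of the total score. The short Python function below computes it from an items matrix; it is a generic sketch of the statistic, not the software used in the study.

```python
# Cronbach's alpha for an (n_respondents x n_items) response matrix.
import numpy as np

def cronbach_alpha(items) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
```

Applying this function to each sub-scale's items for each platform would reproduce alphas of the kind reported below.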

     Modality

Cronbach's alpha indicated acceptable internal consistency for the Modality sub-scale on Twitter (α = .87, M = 6.11, SD = .96), Facebook (α = .91, M = 6.38, SD = .76), and Snapchat (α = .83, M = 6.04, SD = .94).

     Agency

Cronbach's alpha indicated lower internal consistency for the Agency sub-scale on Twitter (α = .43, M = 3.56, SD = 1.34), Facebook (α = .36, M = 3.79, SD = 1.30), and Snapchat (α = .46, M = 3.28, SD = 1.45).

    Interactivity

Cronbach's alpha was again lower in internal consistency for the Interactivity sub-scale on Twitter (α = .51, M = 5.76, SD = 1.06), Facebook (α = .59, M = 6.14, SD = .94), and Snapchat (α = .64, M = 5.40, SD = 1.41).

    Navigability

Cronbach's alpha indicated acceptable internal consistency for the Navigability sub-scale on Twitter (α = .86, M = 5.74, SD = 1.15), Facebook (α = .88, M = 6.02, SD = .92), and Snapchat (α = .87, M = 3.62, SD = 1.87).

    Twitter Credibility

Cronbach's alpha indicated acceptable internal consistency for the credibility measure for Twitter (α = .77, M = 4.01, SD = 1.19).

    Facebook Credibility

Cronbach's alpha indicated acceptable internal consistency for the credibility measure for Facebook (α = .81, M = 3.99, SD = 1.27).

    Snapchat Credibility

Cronbach's alpha indicated acceptable internal consistency for the credibility measure for Snapchat (α = .86, M = 4.07, SD = 1.52).

Table 3. Descriptive statistics (mean, SD) and internal consistency (Cronbach's alpha) for MAIN and credibility per platform

Development of a Technological Affordance Scale  

RQ1a was supported in the attempt to create a psychometric scale based on the feature components of the MAIN model at a unidimensional level. When evaluated across dimensions, however, Agency and Interactivity were less reliable and raised concerns about internal consistency when analyzed across the MAIN model in its entirety. Factor loadings were internally consistent when analyzed unidimensionally.

RQ1b was supported, as the psychometric properties of the technological affordances varied across social media platforms. For example, Modality was internally consistent across Facebook, Twitter, and Snapchat, whereas Agency held less internal consistency across all three platforms. This might be due to how individuals assign perceptions to the concepts of Agency and Interactivity as formulated in the scale. For example, some participants might not understand the concept of agency in terms of source credibility, even though it might be a factor considered when clicking, viewing, or engaging with a post. Likewise, interactive features might not be readily perceived by users across social media platforms; these features might be available, but participants might not actively evaluate them as part of the proposed scale's psychometric properties. Additionally, the factor analysis indicates that the scale was uniformly distributed across the constructs, or factors, of the MAIN model. These data suggest that components of the MAIN model, primarily Agency and Interactivity, might be problematic for a novice or typical social media user to perceive and report. The model assumes that the features are used, recognized, and perceived in order for credibility to be assigned to the digital media property. This psychometric test might indicate that not all features of MAIN are in an individual's consideration set when experiencing a digital media property, in this case a social media platform. Future research should evaluate the psychometric value and properties of both Agency and Interactivity and how they fit into a theory of technological affordances. Are these two constructs perceived too similarly, or are there other feature affordances that could be integrated to explain how credibility is assigned across a platform?

MAIN Model for Predicting Credibility

The regression analysis found that agency was an overwhelming predictor of credibility across Facebook, Twitter, and Snapchat. The aggregate technological affordance measure helped explain how much of the MAIN model is represented in each platform, and this aggregate score was regressed on the credibility measure for each social media platform. RQ2a, RQ2b, and RQ2c all concerned how the MAIN model predicted credibility on each social media platform. Agency performed highest in predicting credibility across the three platforms; it was an important variable for every platform, as its loadings were consistently higher in predicting the variance in credibility.

For RQ2b, which concerned the MAIN model predicting perceived credibility on Facebook, Agency (r = .317) and Navigability (r = .178) accounted for 14% of the variance. The remaining technological affordances were not significant or were weak predictors of the variance in credibility for Facebook.

RQ2a concerned how the MAIN model predicted perceived credibility on Twitter. The data suggested that Agency (r = .237), Interactivity (r = .121), and Navigability (r = .202) accounted for 15% of the variance in perceived credibility. As with Facebook, Modality was not a factor in predicting the variance on the Twitter platform.

RQ2c concerned how the MAIN model predicted perceived credibility on Snapchat. The data suggested that, as with Twitter, three factors predicted the variance in perceived credibility. Snapchat was the only platform for which Modality (r = .165) was a factor; Agency (r = .385) carried the highest beta weight among the technological affordances in the regression analysis; and, together with Navigability (r = .141), these factors accounted for 23% of the variance in perceived credibility.

Social media is made up of individual users, who rely on other users to share, post, and produce content for consumption. Agency as a construct is an important component of the social construction of media found on socially driven platforms. These data suggest that, when planning a social media campaign, the features associated with Agency should be considered when deploying a message. Who sent the information, what it contains, and its verifiability as it relates to the audience all play a critical role in whether a message will be received as credible. Additional research should evaluate how agency individually predicts credibility on social media channels and whether additional components contribute toward the perceived credibility of content consumed on a socially driven network.

The data for Snapchat, the newest of the social media platforms, revealed that credibility was a function of modality, agency, and interactivity. This could be based on the simple feature set presented by Snapchat. Alternatively, it could be based on the new features that Snapchat had made available at the time of this study; as a result, users of the platform maintained a high level of awareness of those features. This leads to the question: do all users of a social media platform use the affordances offered? It might be the case that some are unaware of the affordances presented on Facebook but are well versed in the affordances on Snapchat because of a recent introduction of new features in the product.

Limitations and Future Directions


This study has several limitations. We focused only on social media platforms as a means to evaluate the credibility-related constructs of the MAIN model; future research should evaluate web experiences and other digital media platforms beyond social media. Additionally, participants were asked to report their perceptions of the platform rather than of the content, source, or message found on the platform. This makes it difficult to draw conclusions about individual content derived from each channel, and users could have preconceived understandings, opinions, and values associated with one social media platform over another.

Additionally, the low internal consistency and reliability of the multidimensional constructs for Agency and Interactivity present opportunities for further development of the psychometric scale. The reliability measures were evaluated unidimensionally for each social media channel as they related to Modality, Agency, Interactivity, and Navigability separately; however, when run as a composite, reliability issues were present, as can be seen in Table 3. Further testing will help determine whether the scale itself is the source of the low levels of internal consistency or whether the MAIN model constructs of Agency and Interactivity are problematic for scale development and psychometric testing. If the latter, additional research should determine alternative psychometric constructs that can be more accurately articulated or identified as feature affordances in a web or digital media environment.

In order to understand how individual features promote, guide, and predict credibility on digital media properties, more empirical research should be conducted to evaluate the propositions made by the MAIN model for technological affordances. Agency has been shown to be moderately important in predicting credibility on three social media platforms, but little research has addressed how the remaining constructs within the MAIN model account for credibility. Additionally, it remains unclear whether the MAIN model presents researchers with the best conceptually designed construct for identifying all of the relevant feature affordances of a user experience in a web environment. Can a better conceptual model be developed?

Agency, as an overwhelming predictor of credibility, has positive implications for how advertisers, media buyers, and digital media strategists position and select content to display. But how well can the affordances predict credibility in social media? The regression analysis shows that different affordances predict credibility at different strengths as the social media platform changes. For example, generating content that elicits cues of agency on the Facebook platform might be more positively perceived, whereas eliciting cues of modality or interactivity might not yield different results. Likewise, on Twitter, Agency is an overwhelming predictor of credibility, but Navigability is less important. These results are inconclusive for positing the predictive power of the MAIN model as an aggregate construct for understanding how credibility operates in a digital media environment. It can still be said that media planners can deploy different content with separate or distinct cues across various social media platforms; however, the claim that the MAIN model is a distinct predictor of credibility in specific social media channels cannot be supported by the data collected in this study. As such, additional research is needed to determine how the MAIN model as an aggregate construct can provide evidence for predicting credibility.

The MAIN model, as currently positioned, has made propositions that it cannot fully support; that is, no empirical research or evidence has been developed that explains the feature affordances identified by Sundar (2008). Furthermore, the heuristic taxonomy used across research that employs the MAIN model is troublesome: the convenient approaches to cues, heuristics, and affordances are too precarious and too loosely extended across studies to be useful toward an actionable body of knowledge. This study attempts to qualify the propositions presented by the MAIN model for predicting credibility in the presence of technological affordances in web-based environments, in this case social media platforms.

The MAIN model is a useful mechanism for evaluating the technological affordances in a digital media environment. What does this say about technological affordances? How do the technological affordances in a digital media environment differ from, or relate to, the affordances presented to humans in a physical or ecological environment? The theory of affordances proposed by Gibson (1977) states, "The affordances of the environment are what it offers to the animal, what it provides or furnishes either for good or ill." Gibson lists four properties for affordances of a surface: horizontal, flat, extended, and rigid. He states that these affordance properties are relative and must be measured in relation to the animal; as such, an affordance cannot be measured as it is in physics. Gibson provides this as an illustration that affordances are at once subjective and objective to the individual. That is, to perceive an affordance is not to classify an object. This does not discount the taxonomy given by the MAIN model for classifying features of digital media, but rather provides a lens for further evaluation of, and research into, how affordances are perceived and assigned. This could lead toward a theory of technological affordances, rather than a predefined set of constructs that are problematic to evaluate and measure and that have weak predictive power.


References

 

Chemero, A. (2003). An outline of a theory of affordances. Ecological Psychology, 15(2), 181-195.

Fogg, B. J. (2003). Prominence-interpretation theory: Explaining how people assess credibility online. Paper presented at CHI '03 Extended Abstracts on Human Factors in Computing Systems.

Gibson, J. J. (1977). The theory of affordances. In R. Shaw & J. Bransford (Eds.), Perceiving, acting, and knowing: Toward an ecological psychology (pp. 67-82). Hillsdale, NJ: Lawrence Erlbaum.

Kim, H.-S., & Sundar, S. S. (2011). Using interface cues in online health community boards to change impressions and encourage user contribution. Paper presented at the SIGCHI Conference on Human Factors in Computing Systems, Vancouver, BC, Canada.

Kim, K. J., & Sundar, S. S. (2015). Mobile persuasion: Can screen size and presentation mode make a difference to trust? Human Communication Research.

Lee, J. Y., & Sundar, S. S. (2013). To Tweet or to Retweet? That Is the Question for Health Professionals on Twitter. Health Communication, 28(5), 509-524. doi: 10.1080/10410236.2012.700391

McCroskey, J., Holdridge, W., & Toomb, K. (1974). An instrument for measuring the source credibility of basic speech communication instructors. The Speech Teacher, 23(1).

Sundar, S. S. (2008). The MAIN model: A heuristic approach to understanding technology effects on credibility. In M. J. Metzger & A. J. Flanagin (Eds.), Digital media, youth, and credibility (pp. 72-100). Cambridge, MA: The MIT Press.

 

Making a "Major" Decision

A life decision

Making a career and academic major decision can be a difficult and onerous task for a soon-to-be college student, especially given the growing number of degree programs and careers available across higher education and the global marketplace. The debate over whether declaring a major early leads to timely graduation is far-reaching. It’s clear that not every decided student changes their major, but research has shown that even declared students are just as uncertain as their undecided peers. Our focus now turns to creating tools to help students make smarter career and degree program decisions. Providing the tools and technology is one step in aiding the self-discovery process; as students learn about their interests, values, skills, and personality, they establish a stronger foundation for making a sound decision. A new tool should give counselors and advisors at all levels of the student lifecycle a framework for engaging students in meaningful conversations about selecting a major. Likewise, it should give students and their families the opportunity for self-discovery based on interests and skills. The challenge is being both a diagnostic tool and a university-specific major exploration resource in an attractive and enjoyable web platform. This post will walk you through the challenges, research, opportunities, and innovations we experienced while working on the Academic Exploration Tool.

Diagnostic vs. Exploratory

There are a host of wizards, tools, and diagnostic instruments to help you zero in on your skills, abilities, interests, and goals when selecting a career and/or major. For example, many people have taken the Myers-Briggs Type Indicator (MBTI), an instrument designed to self-assess and report psychological preferences in how people perceive the world and make decisions. There are also tools available at http://www.humanmetrics.com/, like the “Jung Typology Test,” which helps students discover which career choices and schools are most suitable for their type, along with a collection of interactive personality tests that range from purely empirical diagnostic instruments to entertainment-based quizzes. The Holland Occupational Themes is a personality test, based on the work of John Holland, that focuses on career and vocational choice; it consists of 48 tasks and asks the user to rate how much they enjoy performing each one. ACT and College Board also offer career and major exploration via their own online appliances for narrowing down a career and/or major decision. The University of Tennessee Knoxville has developed a resource called “What Can I Do With This Major?” (WCIDWTM), an online resource and series of PDFs that universities can subscribe to, providing access to career keywords and areas based on individual majors. A simple Google search indicates that several institutions of higher education leverage this tool.

In our initial research and discovery with our own advising network, we found the WCIDWTM tool to be highly utilized across our campus. Tools like the Occupational Outlook Handbook from the Bureau of Labor Statistics were also used. All of these tools were leveraged in different ways and by different institutions; in our research, we found that two to three tools were typically used in tandem to advise a student on selecting a career and/or finding a major. Most notably, our own internal major sheets, the WCIDWTM, and the Occupational Outlook Handbook were the most utilized tools for advising undecided or exploratory students. This required multiple sheets of paper or PDFs (our own major sheets), a second screen with the WCIDWTM page pulled up, and tertiary pages for the Occupational Outlook Handbook (see graphic below). All of this information was vital to the career-to-major exploration process. It left us asking, “How could we combine the great tools being used for counseling, advising, and exploration into an integrated, data-driven, attractive web platform usable across the lifecycle by both students and university members, without it being a wholly diagnostic scientific instrument?”

What’s out there?

Many universities have addressed these issues, but with different approaches and varied technologies. Some have created a wonderful resource for quickly accessing their degree programs by subject, type, and alpha-split, while others have won awards and created platforms that were truly transcendent and inspiring for how we approached our own. Clemson University and Arizona State University have put together the most impressive degree search platforms among R1 higher education institutions. Some of the noticeable differences are aesthetic; primarily, the content strategy for both platforms is quite impressive. The organization, the consistency in content types, and the sheer volume of information organized into one platform provided a great direction and heading for our project. However, we knew that we wanted to include the four areas above, to disrupt the idea of juggling four screens and outdated, outmoded major sheets. We have yet to see the combination of these ideas across higher education, which puts the Academic Exploration Tool in a unique position, combining content strategy, data, and web content management like no other platform. Our content strategy was to encourage all programs to include student images, YouTube videos, and a headshot of the contact for each particular program.

From a technical perspective, we’re bringing in three important automated data sources to execute the heavy lifting. First, we’ve indexed and imported over 5,000 career keywords associated with our degree programs. Second, we tapped into the open API from the Bureau of Labor Statistics, allowing each program to select from hundreds of featured careers available in the Occupational Outlook Handbook. Third, we are introducing a new registration, planning, and auditing system on our campus within our portal environment called myUK: GPS (Graduation Planning System). This system will catalog all of our degree programs within our enterprise resource planning system (SAP). A web service will feed the Academic Exploration Tool with each academic degree program’s information, which we’re calling major templates. This data will be leveraged via a web service from an SAP staging area to Drupal, allowing for dynamic curriculum information from freshman year through senior year, down to the course description with a simple click or tap. This curriculum web service is a unique feature that will allow us to begin discontinuing the creation of physical major sheets: using the web service and the dynamic program description information, we can generate a PDF on the fly, essentially replacing the process of manually creating a major sheet in InDesign or Photoshop. This is a game changer, not only for our internal staff retrieving just-in-time data and information, but from a workflow perspective as well.
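As a concrete illustration of the second data source, the sketch below pulls series data from the public BLS time-series API (v2), which accepts a JSON POST of series IDs. The series ID shown is a placeholder, and this is an assumed, simplified version of the integration, not the platform’s production code.

```python
# Minimal sketch of querying the public BLS time-series API (v2).
import json
import requests

BLS_API_URL = "https://api.bls.gov/publicAPI/v2/timeseries/data/"

def fetch_bls_series(series_ids, start_year, end_year):
    """POST a list of BLS series IDs and return the parsed JSON response."""
    payload = json.dumps({
        "seriesid": series_ids,
        "startyear": str(start_year),
        "endyear": str(end_year),
    })
    response = requests.post(
        BLS_API_URL, data=payload, headers={"Content-type": "application/json"}
    )
    response.raise_for_status()
    return response.json()

# Placeholder series ID; a real integration would map careers to real series.
data = fetch_bls_series(["EXAMPLESERIES000001"], 2014, 2015)
for series in data.get("Results", {}).get("series", []):
    for point in series.get("data", []):
        print(point["year"], point["periodName"], point["value"])
```

In the platform itself, responses like these would be cached and surfaced as the featured-career data points (median salary, job outlook, and so on) that authors can toggle on each program page.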

We Love a Good Challenge

The University of Kentucky is a Research 1 institution comprising 16 colleges and a graduate school, with 93 undergraduate programs, 99 master’s programs, 66 doctoral programs, and four professional programs. For our students, this has resulted in a co-curricular, interdisciplinary undergraduate experience. Contributing to the rich academic experiences available, UK is one of eight universities in the nation with academic and medical campuses on one contiguous campus. So representing our academic offerings well was of the utmost importance. At the time, there was no single way to search and explore majors across the university other than the collection of major sheet PDFs, sorted by college and alphabetically. This format mimicked the action of walking into a lobby and selecting a major sheet print-out from a carousel, a process that dates back to the turn of the century; we had simply brought it over to the web in a similar style. We needed to fix the process of exploring a major and communicating curriculum before we could change the way we organized the information. Building the curriculum on top of an informational major sheet was inefficient and cumbersome: every change, edit, or modification required a graphic designer or typesetter to make the edit, publish the document, and replace it on the website. Furthermore, the curriculum was not dynamic; it was not managed by a database, source, or repository. Career exploration was disjointed. Our advisors, counselors, and students shared with us that finding a major, exploring a career, and finding a point of contact for either required multiple searches, screens, and web resources. To that end, consistency of information was important, as this concept needed to scale across the enterprise while still maintaining a consistent experience for each program. To make sure the process was indeed scalable, the colleges and departments needed control of the content and their pages. And of course, given our charge, this project needed to be, and indeed was, research driven, based on STAMATS research from 2012 and 2015, multiple years of the e-Expectations Report (Ruffalo Noel Levitz, 2014, 2015), and interviews and feedback from our own prospective and current students, staff, and faculty.

 

We started with a problem, established in 2012, as we migrated over seven web properties from flat HTML pages to the growing content management system Drupal. The platform was on version 6, and we were the first on campus to introduce the system in its entirety as our preferred web communication platform. Fast forward four years, and we knew we needed a creative solution for delivering our degree program information in a dynamic, exciting, and attractive manner. Equipped with early wireframes, research, feedback, and a bounty of ideas, the creative folks at Up and Up delivered an experience that was easy to use and understand and was driven by multiple points of research and data sources. They asked themselves, “If I wanted to be an astronaut, where would I start?” The result was a platform that we’ve been able to scale, grow, and improve; a platform that is equally informative and exploratory for prospective and current students and that can be used as a prescriptive tool to aid admission, advising, and career professionals across campus. Feedback across campus has been phenomenal; you can learn more from this short video we developed about the success of the Academic Exploration Tool and our new custom-built tool, myUK: GPS: https://www.dropbox.com/s/dyw0i3q5s5f1crt/final_aet_gps_shorts_hd_sound.mov?dl=0

The basic premise of the site is that a user can search via four basic avenues. First are the non-cognitive phrases, which continue to grow as we learn more about how the colleges associate skills and interests with their majors. These are phrases like “I like to help others,” “be creative,” “solve problems,” and “experience new places and cultures,” plus seven more. Each program selects one to three of these phrases to associate with its major, and as the platform grows and the non-cognitive interest bank evolves, we will continue to add more. The second avenue is career interests and keywords. The foundational keyword repository is based on over 5,000 keywords mapping careers to majors, drawn from the WCIDWTM resource. We spent time over the summer, with the help of some talented student interns, mapping career keywords to each of our undergraduate programs, then imported these keywords as a searchable taxonomy within the platform. Users can now search on these keywords, complete with autocomplete, and the list continues to grow: with each revision, college-level authors add new keywords to the database, making the search experience more diverse and accurate. The third avenue, based on program-level keywords, is a search for the actual degree program with words like “Nursing,” “Biology,” or “Engineering,” much like a search-engine query. Lastly, users can navigate directly to all or selected program types by querying results for categories such as “Undergraduate,” “Minor,” or “Preprofessional.”

 

 

The results page required us to think differently about photography, tags, and descriptions. We rewrote each program description, keeping the copy down to three to five sentences rather than the three to five paragraphs displayed on a major sheet. Faced with the prospect that a user would otherwise have to scroll three or four times just to read a description, we wanted to shorten descriptions significantly to create a better user experience.

On the individual program pages, although all of the content is managed and provided by the college content authors, we have some simple requirements to keep pages within the content framework while still providing flexible content areas for zip-downs and paragraphs with custom headings. Each flexible content area provides a WYSIWYG editor, where authors can bring in custom graphics, photography, YouTube and Vimeo embeds, and more. The revision process has given college-level authors the autonomy to customize and be creative with their pages, while reducing their burden of worrying about broken images, code, or policy issues, as all approvals are made by the Registrar’s area on campus. We average 10 to 15 revisions in a normal week, and leading up to an event or campus-wide changes (registration and senate approval of degrees) we see close to 100 or more. The revisions range from updated student profiles to featured career changes to basic keyword updates.

Data Driven Career Exploration

We knew that the career exploration component was disjointed. After meeting with our own Career Center professionals, we learned they follow the same process as other advisors, referencing the BLS.gov website while comparing and contrasting majors and curriculum sheets for each program. In many cases, the program major sheet did not include career keywords or information at all. We also knew that BLS.gov provided a great open API.

 

Users have the option to choose from hundreds of featured careers, with the option to showcase median salary, number of jobs, ten-year job outlook, work environment, and similar occupations. College content authors can select all or none of these data points to display alongside their featured career. We wanted to make clear that this information comes directly from the Bureau of Labor Statistics rather than data produced by the university, so we’ve included reference links and a basic BLS.gov mark to indicate the original source of the data.

Continual Innovation

We’ve helped shape the way we communicate degree programs across the enterprise by patiently listening to our users and content authors. This platform has fundamentally changed the way programs advise, counsel, and market their academic degree programs. Each program has its own inquiry form that feeds directly into our constituent relationship management (CRM) system for follow-up communication and engagement. We have seen creative uses of the platform that we never fathomed, where programs and content authors continually innovate on a platform that is extensible, flexible, and fun to use. Each person who encounters the platform sees new possibilities for organizing similar information. For example, how could we position and help students discover student organizations in a similar manner? Could each student organization have access to edit and update its own page? These are certainly possibilities on a platform like this. Furthermore, how can we improve alumni-to-program connections, utilizing data pulled from services like LinkedIn? Could we have a featured section for the College of Engineering that regularly highlights recent graduates’ LinkedIn profiles? These are integrations that could truly extend this platform to become the linkage and resource for new academic discoveries, career and professional connections, and life decisions, whether assisted or independent.

As we expand into our graduate, doctoral, and professional programs, and even online programs and certificates, the platform will bring in a new set of users, students, and experiences. As of today, there are over 50 active content authors, and this number will continue to grow as we expand the platform across the enterprise. Soon, we will have one dynamic repository for academic offerings at the University of Kentucky. The data linkages with myUK: GPS will be revolutionary across higher ed, bringing in affordances and innovation that will be game-changing and will streamline manual human processes. As we consider how students, advisors, and authors utilize the platform, it’s our job to continue to improve, tweak, and modify the content to meet the needs of our users; luckily, we have a flexible and robust platform that can grow as we do.

Harvard’s CS50: Analysis of a Learning Culture

Harvard University is home to the popular Introduction to Computer Science (CS50), a course offered by the Computer Science Department that covers the intellectual enterprises of computer science and the art of programming. In 2007, new instructor Dr. David Malan reengineered the course with a new pedagogical spin. Prior to Dr. Malan, CS50 had an enrollment of 132 students; in the span of four years, enrollment soared to over 600. CS50 has historically been delivered in a traditional face-to-face lecture hall on Harvard’s campus in Cambridge, MA. Today, it is offered through a variety of delivery methods: beyond the face-to-face format, CS50 is delivered online via https://www.cs50.tv and as CS50x through edX, a massive open online course platform created by the Massachusetts Institute of Technology and Harvard University to offer university-level courses in a wide range of disciplines (Lewin, 2012).

When Dr. Malan, a Harvard graduate himself, was asked to reengineer CS50, it carried a strong stigma as a class reserved for individuals concentrating in Computer Science (Oblinger, 2012). Although the course is held in the historic Sanders Theatre on Harvard’s campus, the course materials, lectures, and sections can be followed by anyone able to access the course website. CS50x, in turn, transposed the materials, lectures, and resources from the open course repository, normally accessed by traditional students, to a structured management system hosted by edX. For Harvard undergraduates, CS50 counts both toward the Gen Ed requirement in Empirical and Mathematical Reasoning and toward the computer science concentration; through edX, students are offered a HarvardX certificate of completion upon finishing CS50x.

Prior to 2007, evaluations and feedback had been increasingly uninspiring, regardless of the faculty teaching CS50. When Dr. Malan set out to reinvent the course, he did so with purposeful intentions and clear academic expectations, combined with extensive and robust online and human support resources. Dr. Malan has even taken the physical environment into account, creating a fun and inviting atmosphere: each lecture begins with hand-selected music from artists like Skrillex and Kanye West (Oblinger, 2012).

The following report reviews CS50 in both its traditional classroom and online modalities. Quality is measured using Fink’s (2003) Model for Significant Learning, the Quality Matters (QM) rubric (2011), and Bloom’s Taxonomy (n.d.). Through this exercise, a model emerges for reenergizing inoperative instruction, in both traditional classrooms and online, that higher education institutions can apply to antiquated course designs.

Course Overview

CS50 is an introduction to the intellectual enterprises of computer science and the art of programming for majors and non-majors alike, with or without prior programming experience (Malan, 2013). An entry-level course, CS50 teaches students how to think algorithmically and solve problems efficiently. Topics include abstraction, algorithms, data structures, encapsulation, resource management, security, software engineering, and web development. Languages include C, PHP, and JavaScript, plus SQL, CSS, and HTML. Problem sets are inspired by the real-world domains of biology, cryptography, finance, forensics, and gaming. As of Fall 2012, the on-campus version of CS50 is Harvard’s largest course (Oblinger, 2012).

According to Quality Matters (2011) standards 1.1 through 1.8, the CS50 syllabus both meets and exceeds expectations for communicating the course components, structure, policies, and minimum prerequisite knowledge required for success in the course. Per Fink’s (2003) model, the introductory syllabus presents and explains the situational factors students will encounter, and it clearly identifies the subjects and learning goals beyond learning programming languages.

One unique aspect of the newly designed CS50 course is its acknowledgement of prerequisite skills for prospective students. The syllabus even offers a range of textbooks, from those designed for moderately comfortable programmers to those for individuals just beginning the art of programming.

Learning Objectives

The learning objectives are clearly stated for the types of programming languages that will be covered. However, the class focuses less on teaching a student how to be a good programmer than on teaching them how to think programmatically. Dr. Malan’s primary objective is to teach students to think algorithmically and solve problems efficiently.

The course expectations charge students to attend or watch all lectures, attend class sections, submit problem sets, complete two quizzes, and submit the final project. The CS50 syllabus clearly outlines the core course objectives. Approaching CS50 from an instructional design standpoint and measuring it against Fink’s (2003) taxonomy of significant learning, the course objectives, assessments, structure, resources, and technology satisfy the six major types of significant learning. For example, CS50’s course design fulfills the ‘application’ section of the pie, which includes skills, thinking, and managing projects.

Assessment and Measurement

CS50 may be taken either for a letter grade or pass/fail. The syllabus clearly outlines the subjects covered in each class session. Fink (2003) would categorize the sections, quizzes, and final project as forward-looking assessment, the premise upon which CS50 is built; the final project states that the end product should be relevant well beyond the life of the course. Measuring CS50 against QM standard 3.2, the grading policy is clearly stated in the course syllabus.

Instructional Materials

Prior to CS50, it was very rare for a face-to-face course to host such a robust online presence for resources (Oblinger, 2012). For most online courses, materials are password protected or hidden behind a learning management system tied to a student’s institutional credentials. For CS50 and CS50x, the materials are by and large open to the public. In the end, this feature set the course apart and is a major reason for its success. There is a spectrum of digital tools, including a course website that hosts lecture, section, and seminar videos, virtual office hours, anonymous bulletin boards, lecture notes, an evening telephone hotline, FAQs, and links to helpful materials related to projects. In the student course evaluations for Fall 2012, comments like “Most organized course ever” emerged (Oblinger, 2012).

Learner Interaction

Measured against the Quality Matters rubric (2011), the learning activities within CS50 accurately promote the learning objectives. Each CS50 Teaching Fellow, or teaching assistant, is equipped with a tablet device for rapid grading. Beyond grading, the Teaching Fellows are highly utilized within class transactions. Examining Quality Matters standard 5.2, the final group project promotes learner interaction that supports active learning.

Course Technology

A computer science course is expected to leverage certain technologies. However, the pervasiveness of technological resources for student success here goes well beyond the academic status quo. From tablet-wielding Teaching Fellows, to lectures recorded in both MP3 and MP4 formats, to an after-hours telephone hotline, the use of course technology sets this course apart from traditional undergraduate lecture hall classes. From an online perspective, the course repository is simple but effective: located at cs50.tv, the lectures, sections, problem sets, quizzes, and seminars are easily accessible, complete with an archived version of the CS50 soundtrack available on iTunes.

Beyond the course technology, Dr. Malan has worked with Harvard’s information technology department to open up data sets, such as the bus schedule and cafeteria data, so students can apply real-world data to class projects (Oblinger, 2012). This brings a sense of reality to the problem sets and final projects, contributing toward active engagement by solving real campus issues. As a result of the increase in class enrollment, core changes had to be made to the university’s wireless infrastructure due to the increase in laptop and tablet use.

Although technology is omnipresent in CS50, it is not the chief reason for the course's success. That success stems from Dr. Malan's clear goal of revitalizing the course through strict pedagogical aims, reimagining the course as a whole, and combining it with independent and open support structures.

Learner Support

When measured against the Quality Matters (2011) rubric, CS50's learner support rated very poorly, which paradoxically reflects the reason this course is so highly revered among students and researchers. Judged strictly by the language of Quality Matters standards 7.2, 7.3 and 7.4, CS50 did not score. Given the behavioral reactions and course assessments for CS50, this measurement seems to be an anomaly: the rubric's language does not lend itself to courses that maintain an independent support system separate from university learning support outlets, such as www.cs50.net and the team of Teaching Fellows who assist Dr. Malan in teaching and grading the course. Indications of an adequate support structure are evident on many levels: dedicated Teaching Fellows, a robust website and a network of connected students. Additionally, provisioning open online resources through unrestricted domains (cs50.net and cs50.tv) created a clear pathway for Harvard students to access the material. Beyond that, these open resources extended the course to the world years before it was incorporated into edX's offerings.

Accessibility

In terms of accessibility, the course materials are easy to find, query and read. The course repository website hosts a Google translation tool and offers a text-only feature, satisfying QM standard 8; the text-only mode converts each page into plain text suitable for screen readers. To be clear, cs50.tv is not a learning management system in the traditional sense; it is simply a content repository. Even so, the learning materials and resources are clearly marked, categorized and accessible.
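The mechanics of such a text-only mode are worth a brief illustration. Below is a minimal sketch in Python of the general idea, not CS50's actual implementation: it strips markup from a page so that only readable text remains for a screen reader.

    # Minimal sketch of a text-only conversion pass (not CS50's actual code):
    # strip HTML markup so only readable text remains for a screen reader.
    from html.parser import HTMLParser

    class TextExtractor(HTMLParser):
        def __init__(self):
            super().__init__()
            self.chunks = []

        def handle_data(self, data):
            self.chunks.append(data)

    def to_text(html):
        parser = TextExtractor()
        parser.feed(html)
        return " ".join(chunk.strip() for chunk in parser.chunks if chunk.strip())

    print(to_text("<h1>Week 0</h1><p>Lecture notes and <a href='/psets'>problem sets</a>.</p>"))
    # Week 0 Lecture notes and problem sets .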

Quality Measures and Taxonomy

How did CS50 score when evaluated with the Quality Matters 2012-13 rubric? After carefully measuring CS50 against the course syllabus, website, objectives, materials and academic reviews, it scored 87 of 95 points, an overall score of approximately 92%. The course failed to meet some of the traditional measures in the rubric, primarily accessibility, a standard that failed to realistically capture CS50's expansive level of access. According to Quality Matters, a course is never described as having "failed" a review; it is always "in the process of meeting standards" until it meets the QM rubric standards at a rate of 85% or better (Quality Matters Program, 2011). By that threshold, this course clearly met the standards. The word "failed" is not used when measuring online courses with the Quality Matters rubric; a course either meets or does not yet meet the standards, and the goal with QM is continuous improvement.

The unique feature set of this class could not be accurately captured in the Quality Matters point system. However, per Fink's (2003) taxonomy of Significant Learning, CS50 fulfills every element when considering the course objectives, assessments and resources. Likewise, when aligned with the three core domains of Bloom's Taxonomy (n.d.), CS50 fares well. The cognitive learning structures promote new knowledge in computer science while applying principles to real-world applications. Within these cognitive disciplines, analysis is aided by the Teaching Fellows and evaluation is conducted in the form of quizzes.

Would the quality of instruction be high without technology? No; this class thrives on leveraging technology to deliver its course content. Technology may be less essential in the face-to-face setting, but according to researchers and data from class assessments, it is the resource students like most. The effective parameters of CS50 are accomplished through the interactive nature of the lectures and seminars and through social discussion both on and offline. Because most of the data sets and problem-based scenarios are pragmatic, valuing, organizing and characterization become key growth points for a student in CS50. Problem sets are introduced early in the course, challenging students to craft projects using programming software and languages that require critical reasoning and algorithmic analysis. Through eight tiered problem sets, students acquire psychomotor competencies, developing various technical and analytical skills along the way (Bloom's Taxonomy, n.d.).

Conclusion and Recommendations

As traditional public universities seek new and innovative paths to connect students with content and teachers, and to make large classrooms more relevant, intimate and engaging, we must look at courses like CS50 and instructors like Dr. David Malan. Harvard University and Dr. Malan committed to purposefully designing a course that transcends major, genre and department. The quality is clearly calculable by standards, but at root are the course design, the pedagogical makeup built on project-based learning, and a strong relevance to real-world problems and opportunities on and off campus.

The overall course value is reflected in the Quality Matters (2011) score, but that is not its only benefit. The enthusiasm associated with this course in both face-to-face and online environments has created what people call the "CS50 effect" (Oblinger, 2012). Experiential course design of this kind has been missing for some time; it exists within CS50, yet it cannot be empirically measured by any rubric, model or design standard. This attribute contributes to a subject and class developing a personality with which students can connect, engage and obtain knowledge that will be shared, cultivated and applied. The role of technology in the build of this course is absolute: without leveraging open resources and in-time lectures, the relevance and applications would never be accomplished. Technology aside, however, this is where a new model emerges for education technology enthusiasts. There are many ways in which a course can be revived and produce successful results, but for CS50, a bolstered infrastructure, successful technology integration and vast open online resources are simply elements of a bigger solution. Technology is merely an avenue where great instructional design, passionate support structures, experiential learning and shared enthusiasm lead to greater learning for students in all modalities.

References

Bloom's Taxonomy. (n.d.). In Wikipedia. Retrieved April 1, 2013, from http://en.wikipedia.org/wiki/Bloom%27s_Taxonomy

Fink, L. D. (2003). A self-directed guide to designing courses for significant learning. San Francisco, CA: Jossey-Bass.

Lewin, T. (2012, May 2). Harvard and MIT team up to offer free online courses. The New York Times. Retrieved from http://www.nytimes.com/2012/05/03/education/harvard-and-mit-team-up-to-offer-free-online-courses.html?_r=0

Malan, D. (2013).  CS50 website. Retrieved from https://www.cs50.net/

Oblinger, D. (2012). Case study: CS50 at Harvard. In Game changers: Education and information technologies (pp. 361-367). EDUCAUSE. Retrieved from http://www.educause.edu/research-publications/books/game-changers-education-and-information-technologies

Quality Matters Program. (2011). Quality Matters rubric workbook for higher education (2011-2013 ed.). Annapolis, MD: MarylandOnline, Inc.

The MAIN Model for Determining Technological Affordances and Credibility in Social Media Platforms

Abstract

Online advertising is a billion-dollar industry that continues to grow due to mobile device adoption. Social media is an increasingly significant contributor to the growth of mobile advertising expenditures. A prominent issue facing advertising scholars, marketers and publishers is source credibility in an online environment. The overall ambiguous nature of Internet-based communication has left researchers, marketers and consumers with little to measure in terms of source, platform or feature credibility. Specifically, as social media modalities gain strength in strategic communication efforts, researchers will seek to uncover the affordances, features and credibility within an online environment. Message credibility can lead toward product, service or message consumption. However, with the growing number of sources, platforms and advertisers, credibility is a complex set of items to measure in an online environment. This research explores the utility of the MAIN Model for measuring technological affordances and heuristic cues available in online messaging and how it coalesces with a new breed of credibility measures. The first part of the study explores the MAIN Model for assessing technological affordances available in social media modalities. This assessment will be used to determine and qualify acts of credibility based on two dimensions: expertise and trustworthiness. This is an attempt at creating a credibility scale for social media modalities and technological features and affordances. This research proposes a new conceptual framework for understanding message credibility in a social media environment for the 21st century. The implications of identifying credibility measures in social media modalities affect both personal and commercial applications of social media platforms. This particular study has implications for marketers, publishers and advertisers leveraging modalities across Facebook, Twitter, Instagram and Tumblr.

Veteran MOOC

I've spent a considerable amount of time researching, reviewing and evaluating the advent of Massively Open Online Courses. During my master's degree in Instructional Systems Design, I assisted the University of Kentucky with recruiting and enrolling record numbers for their MOOCs: Intro to Biology, Psychology and College Readiness. I've met with David Malan and the CS50 staff (the world's largest MOOC) and even wrote a review of their delivery and technology, and how the course builds a rich learning culture, even in an online environment.

Now I'm faced with a larger challenge: combining my passion for and interest in service as a veteran with instructional design, technology and policy. We were recently awarded a sizable grant to aid in the development of the Veteran College Transition MOOC.

My first job following graduation was with the Louie B. Nunn Center for Oral History, developing a project that interviewed combat veterans. Throughout my time interviewing, I noticed a theme across all interviewees: they all had a rough and troubled transition from active-duty military into higher education, mostly stemming from their disdain for TAPS (Transition Assistance Program). By my second or third interview, my goal was to help tease out some of the bigger issues with veteran transition and the noticeable problems with the curriculum and preparation strategies used.

Now we're in a serious position to make a difference: equipped with a grant, camera and studio equipment, and a serious love for instructional design and educational technology.

At this point we're still developing modules and are looking for input from all angles, sources and experts. Ideally, I'm searching for veterans who have recently transitioned, and veterans interested in edtech, instructional systems design or higher ed. We have a thread on reddit/r/veteran to facilitate the conversation, input and contributions for material, subjects, resources and curriculum design. We'll start filming in approximately three weeks, and all input and contributions will be heard. So, please, take a moment and sound off. Our goal is to make this project a truly open-sourced class for veterans and military service members across the world, designed and delivered by veterans, for veterans.

Communicating Across Small Teams

AOL chat rooms, circa 1990

Remember when the world was a simpler place and online communication happened in 'rooms' rather than proprietary, clunky individual chat threads? I remember when AOL chat rooms (A/S/L) were the only way to communicate; it wasn't until later in the AOL desktop experience that individual, or private, chats were introduced. Today, I might have three or four persistent, concurrent text chats taking place across all of my devices. For work, I use Microsoft Lync, which is integrated into the Exchange system and works well across those clients. However, the iOS, Mac and handheld products are lacking in serious ways: the system does not archive or categorize chats, and the connection is not persistent when moving across devices, meaning threads and conversations are lost when you close the window or app.


Our team relies heavily on precise communication to accomplish projects in a short amount of time. Most of our projects are not asset- or personnel-heavy; most items are narrative-based and can be accomplished through sharing files, images and links. We explored the typical top-10 list of online project management tools: Basecamp was too robust, and Asana was too bullet- and task-based, providing little narrative exchange. We needed a persistent, narrative, text-based client that handled links, files and mobile access with ease.


HipChat, an Atlassian product, quickly floated to the top; while many similar persistent chat clients provide this kind of service, their product fit our needs perfectly. Now our projects, initiatives and conversations are grouped into rooms, with private chats between team members listed alongside the group chats. The software is slick and has a simple UI. It even renders hex codes when referencing web-safe colors. The Mac client is zippy and reliable, as are the iOS, Android and Windows clients. A great feature is tagging or adding an individual to the chat using '@' and the user's name, which allows for cross-conversational dialogue. The slash commands save time when sharing code and are a great trick for sparing a few mouse clicks. We also love that it renders animated GIFs and that you can upload your own emoticons. The platform has virtually eliminated email for our five-person team, allowing us to message each other when away from the desktop client, which pushes a notification alert email. Additionally, each thread pulls in and indexes the files and links shared, making it easy to search and query past conversations. We'll continue to explore new ways to collaborate from afar, but with HipChat consistently updating its client, most recently with audio and video chat (which works great), this service is perfect for small or large teams that need to communicate on the fly.
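HipChat also exposes a REST API for integrations, which makes it easy to wire automated alerts into the same rooms. Below is a minimal sketch in Python, assuming a v2 room-notification token; the token, room name and message are placeholders, not our actual configuration.

    # Minimal sketch: post an automated alert to a HipChat room via the v2
    # REST API. The token and room name below are placeholders.
    import requests

    HIPCHAT_TOKEN = "your-room-notification-token"  # hypothetical token
    ROOM = "Projects"                               # hypothetical room name

    def notify_room(message, color="green"):
        """Send a simple text notification to the room."""
        url = "https://api.hipchat.com/v2/room/{}/notification".format(ROOM)
        payload = {
            "message": message,
            "color": color,            # green, yellow, red, purple, gray
            "notify": True,            # ping room members
            "message_format": "text",
        }
        headers = {"Authorization": "Bearer " + HIPCHAT_TOKEN}
        requests.post(url, json=payload, headers=headers).raise_for_status()

    notify_room("Nightly build finished -- 0 errors.")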



The Onboarding Experience

What does the perfect onboarding experience look like for handhelds? The user experience is everything when handing over your credentials, and the back-end and front-end design need to reflect a smooth, seamless integration of design, data collection and interaction. We've explored some innovative ideas with prospective-student onboarding experiences, but the new Beats By Dre Listening Experience is an exceptional example of combining user-centered design with device-specific input fields.

Google+ EdTech Community

When Google announced the creation of the community feature on Google+, I knew it would be a special space allowing widespread connection across folks in specific domains. Facebook was too closed in its groups approach. Although Twitter's hashtags create a great feed for domain-specific knowledge and communication, the interactions were too thin in terms of sharing multimedia, telling stories and creating a community around one particular idea. Enter Google+ Communities. Announced in December 2012, the feature is now more than a year old. I sat feverishly pinned to a browser, tapping F5, waiting for the rollout to hit my account. When it became available, I created the EdTech Community. Pulling from the language and logo I had developed for a denied EdTech unconference proposal, the community was minted with fresh marketing and a pointed mission. Within the first hour, Andrew Hill and I watched the community grow to over 300 members, reaching close to 1,000 by morning. The following month there were close to 3,000 members. Fast forward a year, and the community boasts close to 16,000 followers.

Working closely with Todd Hurst, a fellow PhD student, we encouraged new followers to share their stories as educators, technologists and enthusiasts about how technology can positively impact education. The initial layout allowed for the categorization of posts, provided the member tagged the post with the appropriate pre-selected subject. The platform has afforded us the opportunity to interview amazing educators and thought leaders via Google+ Hangouts. Our first hangout was with Nick Provenzano, a high school English teacher and ISTE 2013 Outstanding Teacher of the Year, best known for his work with Evernote in the classroom. This was followed by an interview with John Nash, Director of the Design Lab for Education at the University of Kentucky, and then a great conversation with Vincent Cho, a professor at Boston College in the Department of Education Leadership and Higher Education.

We recently presented at UCEA on how this community could act as a springboard and resource for education technology leaders. The presentation was met with a positive response, as there is no definitive online space for education leaders looking to discuss the integration of technology in education. Although the community does not have the high level of interaction we seek at a human or academic level, we feel confident that the members are learning, sharing and interacting toward the progress of technology in education. As we analyze and review the interactions that take place, we are limited by the closed API for Google+ Communities, which hinders our ability to systematically make changes based on member behavior and postings. If we could measure the interactions and the content members share (sketched below), we could better gauge the level of engagement and shape the conversation beyond what many consider a 'link dumpster'.

As we continue down the path of education leadership in technology, we will always find ways to curate ideas, innovations and technologies to promote education. Google+ Communities stands a chance of making a significant dent in organizing ideas, thoughts and people from across the globe around one idea. With significant user-interaction modifications and moderator controls, Communities could act as a space where users adequately and efficiently share ideas and links and ping the community for feedback, just as they would when entering a room. For now, we are limited by the structural makeup and organizational layout of Communities; without significant modifications, we will continue to see hollow posts, self-promotion and little engagement.
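To make the measurement idea above concrete: even a coarse analysis over a hand-collected export of posts would help separate genuine discussion from link dumping. A minimal sketch follows, assuming a hypothetical CSV with author, category and comment-count columns; the closed Google+ Communities API provides no such export today.

    # Minimal sketch: coarse engagement metrics over a hand-collected export
    # of community posts. The CSV and its columns (author, category,
    # comments) are hypothetical -- no such data is exposed by the API.
    import csv
    from collections import Counter

    def engagement_summary(path):
        posts, commented, by_category = 0, 0, Counter()
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                posts += 1
                by_category[row["category"]] += 1
                if int(row["comments"]) > 0:
                    commented += 1
        share = commented / posts if posts else 0.0
        print("Total posts:", posts)
        print("Share with at least one comment: {:.0%}".format(share))
        print("Posts per category:", dict(by_category))

    engagement_summary("community_posts.csv")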

What can we learn from hashtags as an indexing tool for scholarly research?

Do you feel the Internet is an easy space to navigate when developing a strong literature base for academic research? Or are your inquiries, scholarly or not, pieced together through a variety of sources, channels and strategies? How could this process be improved? Could complex search query strings more comprehensively cover information across academic, social, literary, periodical and journalistic sources?


An omnibus view of all content, regardless of channel, could be made possible by the use of hashtags, a type of metadata. If you're unfamiliar, a hashtag is any string of characters led by the "#" symbol. If standardized across scholarly, literary and journalistic sources, this shift could fundamentally change the way we consume and access content across the Internet. With origins in the C programming language of the late 1970s and in Internet Relay Chat (IRC) networks, the hashtag has now reached common use on modern social networking platforms. Today, hashtags are primarily used to categorize and index discussions, ideas and products, represented by pictures, videos or text-based messages on social platforms. The popularity and use of the hashtag grew concurrently with the rise of Twitter; hashtags can now be used across Google+, Facebook, Vine and Instagram, and are typically standard features on new social platforms to promote sharing and interaction.
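Part of the hashtag's appeal as an index key is how mechanically simple it is to extract from any text-based medium. A minimal sketch in Python; the pattern here is a simplification, since platforms differ on punctuation and Unicode handling:

    # Minimal sketch: extract hashtags from arbitrary text. The pattern is a
    # simplification; real platforms apply extra punctuation/Unicode rules.
    import re

    HASHTAG = re.compile(r"#(\w+)")

    def extract_hashtags(text):
        """Return the lowercased hashtags found in the text."""
        return [tag.lower() for tag in HASHTAG.findall(text)]

    print(extract_hashtags("New preprint on #OpenAccess and #EdTech indexing"))
    # ['openaccess', 'edtech']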

How could the hashtag transcend the platform on which it is used? Will a platform or service be created that indexes hashtags based on one's interests, social or scholarly? Would this use of hashtags lead to the cannibalization of one platform by another? It seems the social media platform market has differentiated itself in terms of specific features, purpose and target audience. Beyond the popular tags used in social media, how could the hashtag shift the way we conduct academic research, catapulting the scholarly integration and collaboration that could increase shared work in healthcare, business, science and education? For now, scholarly research is confined to closely guarded repositories that require subscriptions or affiliation with an academic institution. Regardless of the credentialing or monetary gains from scholarly research, indexing and categorization could be unified across repositories through a hashtag indexing protocol.

Most of what advanced Internet users do today is curate their favorite content, academic or social. Really Simple Syndication (RSS) feed readers, search filters, and content alerts and subscriptions allow us to control the amount of waste we encounter in a given browsing or research session.
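That curation can also be scripted. Here is a short sketch using the feedparser library, with a placeholder feed URL and keyword list, that keeps only the entries matching topics a researcher cares about:

    # Minimal sketch: filter an RSS feed down to entries that match chosen
    # keywords. The feed URL and keyword list are placeholders.
    import feedparser

    FEED_URL = "http://example.com/journal/rss"   # hypothetical feed
    KEYWORDS = {"hashtag", "metadata", "indexing"}

    feed = feedparser.parse(FEED_URL)
    for entry in feed.entries:
        text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
        if any(keyword in text for keyword in KEYWORDS):
            print(entry.title, "->", entry.link)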

Current hashtag curation tools help reduce the noise and narrow the feed to only the social content we want. It is not a stretch to say that, with open APIs and web services, the entire Internet and its contents could be indexed through hashtags. The difference between Google's indexing process and hashtag metadata tagging is that the user (you) has the choice to create a filter for the contents of the inquiry. Using hashtags as a standard protocol for indexing topics, movements, ideas and conversations could categorize content regardless of the hosting or retrieval platform. We can assume that as hashtag use grows among content creators, it will open up as a search filter through Google's search appliances; this could be as simple as a new 'hashtag' filter on the Google search platform alongside 'news' and 'web'.
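In data-structure terms, the proposal amounts to an inverted index keyed by hashtag rather than by tokenized full text. A minimal sketch over a small in-memory corpus of items from mixed sources (all titles and source names here are hypothetical):

    # Minimal sketch: an inverted index keyed by hashtag, built over items
    # from mixed sources. The corpus below is entirely hypothetical.
    from collections import defaultdict

    corpus = [
        {"source": "journal", "title": "Metadata in repositories",
         "tags": ["openaccess", "metadata"]},
        {"source": "blog", "title": "Why tags beat folders",
         "tags": ["metadata", "edtech"]},
        {"source": "news", "title": "Universities open their archives",
         "tags": ["openaccess"]},
    ]

    index = defaultdict(list)
    for item in corpus:
        for tag in item["tags"]:
            index[tag].append(item)

    # Query: everything tagged #openaccess, regardless of hosting platform.
    for item in index["openaccess"]:
        print(item["source"], "-", item["title"])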

With the advent of complex algorithms for finding and querying information on the Internet, Google has developed and improved its search algorithm, Hummingbird. Beyond Hummingbird is Google Scholar, which indexes scholarly literature across a variety of sources. How could this be layered with the indexing power of the hashtag? If scholarly publishing platforms and services adopted a standard tagging convention like hashtags, researchers would gain a broader and more comprehensive view of content and topics across all mediums: scholarly, social, media and journalism.

Serious attention has been given to the Internet of Things (IoT): objects and virtual products that connect to the Internet. Storage, connectivity, bandwidth and hardware are becoming so small and cheap that anything that can be connected will be connected to a network. How will the connectedness of devices and humans change the way we search, aggregate and consume content through indexing practices like hashtags? The hashtag could be the common denominator, the standard by which content, regardless of medium, is searchable and indexed across the Internet.

The implications of using hashtags to index, organize and access research and scholarly work are boundless. Teachers, researchers and students have the potential to source content outside the traditional channels, gaining a wider perspective on a given topic. The game changer comes when hashtags are standardized by academic authors, journals, publishers, repositories and educational resources. This standardization has the potential to reach new audiences in real time, as opposed to historical search methods like Boolean search, RSS feeds, search filters and aggregation platforms. Tools like TagBoard could expand beyond social platforms and provide services to aggregate and sort literature, articles, books and blog posts, taking the hashtag beyond its current use of sorting and indexing social activity. If implemented and adopted across channels by authors and publishers, this could create a fuller picture for administrators, educators, researchers and learners as they sort, index and consume information specific to their domain of learning or research.

 Image Credit: Flickr user Theo La


Research Memo

As I dive deeper into my research, I want to be more descriptive and purposeful in my approach to tackling issues in my areas of interest: marketing, communications and technology in higher education.

This is a great exercise for any researcher or practitioner trying to find their area of interest, target their passions or map out their area of focus.

As a young person considering my path in life, I was always fascinated by the human cognitive condition as it relates to consumer behavior: the reasons, motivations and factors behind why we make the decisions we do to convert and purchase a product, or even make a choice at all. In undergrad, I focused my research and studies on consumer behavior and domestic marketing practices in terms of digital experiential conditions. In graduate school, I wanted a better understanding of the systemic changes in education, so I focused on distance learning and researched how instructional systems design was changing the landscape for higher education institutions. As I transitioned into a full-time staff role in higher education, I quickly applied these skills in an enrollment management unit, a natural fit for exploring the behavior of a prospective student. Taking my passion for and ability in understanding the technical needs and forecasts of a unit seeking to grow and sustain enrollment, I found a space where I could blend my understanding of marketing, consumer behavior and technology in education.

Together, these created an exploratory outlet for looking into the ways new students connect, apply and enroll in terms of an institution's digital presence. If I were to explore and connect with one subject, it would be the practices, disciplines and trends in higher education marketing and the factors that affect institutional decisions related to goals and strategies. This early in my inquiry, I will assume that the empirical and academic knowledge base in this area, as it relates to educational institutions, is very limited. With this assumption comes a fear that there is little interest in this topic from an academic and scholarly perspective. However, my assumption (and apprehension) could be offset by the changing climate of higher education as a tuition-driven enterprise, which hinges on enrollment and matriculation. The knowledge and scholarly base exists in the business and management fields; I would simply need to find an opportunity to intersect these areas.

I suspect my initial goal will be to explore the literature on marketing in higher education publications and research, without limiting my queries to higher education enterprises. Identifying the top scholars in this arena would also facilitate a better understanding of the leaders in the field and their concerns and interdisciplinary foci. Framing this research as it applies to leadership practices will position it in line with my current coursework and prescribed literature.

I feel there is a general disconnect within educational organizations between administrators, program directors and faculty understanding the value of public perception and awareness and the means to improve, shape or change these assumptions. Whether for the general public or prospective students, I would seek to uncover the importance this holds for the future of an academic institution. Helping to alleviate this disconnect by informing leaders and educators about the practices through which they can advance their program, school or institution is an opportunity for research and exploration.

Through this exploration, I have identified three goals that will drive my research:

1. Why is it important that a school, institution or program worry about public perception and awareness?

2. How are online education and technology changing the way students access and find educational institutions or programs?

3. What are the practices, standards and innovative happenings in the field of educational marketing?

I believe I have a unique advantage in that I currently work within an institution, in a department concerned with marketing and communications for the enterprise. I have a specific background and interest in consumer behavior strategies and research, along with the technical acumen to peer into the future of this field. My goals are very broad, but I believe they are in line with the interests of potential educators, researchers and administrators. Additionally, I feel I understand the meaning on behalf of the research subjects in a study; that is, their cognition, affect, intention and overall perspective on the matters about which I would inquire (Maxwell, 2005, p. 22).

Conversely, I recognize the disadvantages inherent in my pursuit of these questions through research. Some participants and constituents might not find overall need or value in uncovering answers about their institution's public perception and awareness. Another possible disadvantage for further research is the thin literature base and foundation for scholarly inquiry into higher education marketing.

Reference

Maxwell, J. A. (2005). Qualitative research design: An interactive approach (2nd ed.). Thousand Oaks, CA: Sage.