
Database Write-Back Tableau Extension: How to Write the Code

The purpose of this series of articles is to help the reader understand each step necessary to implement a Tableau extension that writes to a database, and to come away with a foundation to build on beyond this example. The target reader is a corporate power-user, i.e. a smart person with technical aptitude who isn’t necessarily a professional developer.

There are lots of other great Tableau extension how-to articles out there (tableaumagic, tableaufans, gunningfortableau, Tableau samples), but they each assume prior knowledge of base skills. In contrast, the goal of this series of articles is to explain each technical leap in detail. This series may try your patience if you already understand modern JavaScript web app technologies, so you might want to just grab the code from github and start from there, or hit the other blogs above.

Feedback welcome of course.

– Larry

DB Write-Back Part 6: Epilogue

As I wrote in part 5, the db.js file is the place where you will make changes if you need to write back to a database other than SQL Server. I intend to write code for a few databases, but for now here are some potential starting points:

Using Tableau Prep’s new Python integration to predict Titanic survivors

In the latest release of Tableau Prep Builder (2019.3), you can now run Python scripts from within data prep flows. This article will show how to use this capability to solve a classic machine learning problem.

Kaggle.com, a site focused on data science competitions and practical problem solving, provides a tutorial based on Titanic passenger survival analysis:

The sinking of the RMS Titanic is one of the most infamous shipwrecks in history.  On April 15, 1912, during her maiden voyage, the Titanic sank after colliding with an iceberg, killing 1502 out of 2224 passengers and crew. This sensational tragedy shocked the international community and led to better safety regulations for ships.
One of the reasons that the shipwreck led to such loss of life was that there were not enough lifeboats for the passengers and crew. Although there was some element of luck involved in surviving the sinking, some groups of people were more likely to survive than others, such as women, children, and the upper-class.
In this challenge, we ask you to complete the analysis of what sorts of people were likely to survive. In particular, we ask you to apply the tools of machine learning to predict which passengers survived the tragedy.

https://www.kaggle.com/c/titanic/overview/description

I decided to accomplish the “data grooming” steps for this workflow in the Preppiest (rather than the most Pythonic) way wherever possible, because that seems to be what most Prep users would do in real life. The steps followed in this article closely mirror those in the Kaggle Titanic tutorial. This article is focused on Prep + Python, not on data science / machine learning / Python best practices.

The most famous Titanic passengers, Kate and Leo, don’t seem to be on the passenger list. Still, we can use this exercise to figure out if the movie’s end was statistically predictable. We know a few things about them. She’s rich and a first-class passenger; he’s poor and a third-class passenger. She’s female; he’s male. They’re both relatively young, but not children. In data science these attributes are called features, and part of the process is figuring out which features have the most influence on the outcome.

You can find the finished Prep flow file, along with the Python files, on github here. Assuming you already have Tableau Prep >= 2019.3.1 installed, follow the instructions on the TabPy github site to set up TabPy.

How does Prep work with Python?

The author of a Tableau Prep flow adds a script step, then configures the settings of that step to point to the TabPy server, open the appropriate .py file, and call the appropriate function within that file.

configuration of a Python script step

The Python code within that file defines the called function, which receives a pandas dataframe from Tableau Prep (think of a dataframe as a simple spreadsheet in your computer’s memory), does something with it, and returns a dataframe. If the returned dataframe is different from the received one, the author must write a second function called get_output_schema, which tells Tableau Prep what column names and datatypes to expect. The available datatypes are:

  • prep_string()
  • prep_int()
  • prep_decimal()
  • prep_bool()
  • prep_date()
  • prep_datetime()

For example, the Python file below receives a dataframe from Tableau Prep and uses the pandas groupby function to show the mean Survived score by passenger class. As you can see from the get_output_schema function, this script returns only two columns to Prep: an int called Pclass and a decimal called Survived. Note that lower Pclass numbers mean higher classes: the richest passengers, the people with the highest survival chances, are in Pclass 1.

import pandas as pd  # each Prep script file that references pd needs this import

def pclass_survival(df):
    # keep only the training rows (TestData == 0)
    train_df = df.query('TestData == 0')
    return train_df[['Pclass', 'Survived']].groupby(['Pclass'], as_index=False).mean().sort_values(by='Survived', ascending=False)

def get_output_schema():
    return pd.DataFrame({
        'Pclass': prep_int(),
        'Survived': prep_decimal(),
    })
the output of pclass_survival as seen in Tableau Prep Builder

Yeah, that trailing comma in get_output_schema bothers me too. It doesn’t seem to matter.

An essential step in debugging code is being able to print the value of a variable. We can do that here using the console we used to start TabPy. I’ll insert a print statement into the pclass_survival function above, then observe the output in the TabPy console.

def pclass_survival(df):
    train_df = df.query('TestData == 0')
    print(train_df.tail())  # output for debugging
    return train_df[['Pclass', 'Survived']].groupby(['Pclass'], as_index=False).mean().sort_values(by='Survived', ascending=False)
output of a Python print statement in the console

If you have the console visible while you are working on your Prep flow, you may be alarmed to see Prep calling your Python scripts repeatedly. What Prep is actually doing here is calling the get_output_schema function in your Python scripts to find out if anything has changed. Unless you make a change to your .py file, the Tableau Prep flow won’t repeatedly run your main function. As it was explained to me by Arthur Gyldenege, a Senior Manager of Product Management at Tableau:

We cache intermediate results if nothing has changed in the scripts or in one of the previous steps.
In the console you should be able to see that as TabPy actually outputs what is being executed. The difference between execution and schema computation is the last couple of lines. In the first case, it will make a call to your execution method, in the second one, it’ll call get_output_schema.

Building the Titanic analysis Prep flow

Kaggle breaks the Titanic passenger list into test and train comma-separated value (CSV) files. The training file includes a populated “Survived” column, whereas the testing file does not. Because Python scripts called by Prep can only consume and return a single dataframe, and we’ll need access to both datasets in our final prediction Python script, our first step in Prep is to union the test and train data. To let the Python scripts tell the two apart, we’ll add a “TestData” flag column as part of the union.

To separate the datasets within each of our Python scripts, we use the pandas “query” function to derive a training pandas dataframe, like this:

def predict_survival(df):
   train_df = df.query('TestData == 0')
   ...

We will be using a few machine learning algorithms imported from the scikit-learn library (https://scikit-learn.org/stable/). These algorithms don’t accept strings, and the approach we’re following also converts continuous values into discrete bands, so we have some work to do to transform this data set.

Age may be an important factor in predicting survival, but we have a problem: there are 86 rows in the training data with no age data. Let’s complete that data by filling those nulls with the average age for the passenger’s sex and passenger class. We could do that with a Python script, but we can also do it in two steps with Prep; those steps are labeled “average age by sex and pclass” and “clean age” in the finished Prep flow.
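For the curious, the Python alternative would be a one-step script. Here is a sketch of my own (it isn’t in the repo) using pandas groupby and transform; because the returned dataframe has the same columns as the one received, no get_output_schema would be needed:

import pandas as pd

def fill_missing_ages(df):
    # transform('mean') returns a Series aligned to df's index, so fillna
    # can fill each missing age with its sex/class group's mean age
    mean_ages = df.groupby(['Sex', 'Pclass'])['Age'].transform('mean')
    df['Age'] = df['Age'].fillna(mean_ages)
    return df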

We also need to transform several continuous columns into “buckets” or “bins.” I found this to be inconvenient in Python, because the tutorial’s approach first uses pd.cut to discover the bin thresholds, then hard-codes those threshold values to create the bins. That isn’t viable for any sort of long-term, automated, maintainable solution.

train_df['AgeBand'] = pd.cut(train_df['Age'], 5)

	AgeBand	Survived
0	(-0.08, 16.0]	0.550000
1	(16.0, 32.0]	0.337374
2	(32.0, 48.0]	0.412037
3	(48.0, 64.0]	0.434783
4	(64.0, 80.0]	0.090909

for dataset in combine:    
    dataset.loc[ dataset['Age'] <= 16, 'Age'] = 0
    dataset.loc[(dataset['Age'] > 16) & (dataset['Age'] <= 32), 'Age'] = 1
    dataset.loc[(dataset['Age'] > 32) & (dataset['Age'] <= 48), 'Age'] = 2
    dataset.loc[(dataset['Age'] > 48) & (dataset['Age'] <= 64), 'Age'] = 3
    dataset.loc[ dataset['Age'] > 64, 'Age'] = 4  # the tutorial omits the '= 4'; without it this line does nothing
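If you did want to keep the binning in Python, a more maintainable sketch (my suggestion, not part of the tutorial) lets pandas compute and apply the bins in one call, with no hard-coded thresholds:

import pandas as pd

def add_age_bands(df):
    # labels=False makes pd.cut return the integer band index (0-3)
    # rather than an interval object
    df['AgeBands'] = pd.cut(df['Age'], bins=4, labels=False)
    return df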

The Tableau Prep way to do this uses two steps. First, an aggregate step adds a max_age column to the flow.

Then, a cleaning step adds the “AgeBands” calculated field below. The “/4”, “/4*2”, and “/4*3” operations divide the age range into four equal-width bands in an understandable way:

IF [AgeAugmented] <= ([MaxAge]/4) THEN 0
ELSEIF [AgeAugmented] > ([MaxAge]/4) AND [AgeAugmented] <= ([MaxAge]/4*2) THEN 1
ELSEIF [AgeAugmented] > ([MaxAge]/4*2) AND [AgeAugmented] <= ([MaxAge]/4*3) THEN 2
ELSEIF [AgeAugmented] > ([MaxAge]/4*3) THEN 3
END

On the chance that the passenger’s title is statistically significant, we need to group the many raw titles down to five. In Python, this step looks like:

for dataset in combine:
    dataset['Title'] = dataset['Title'].replace(['Lady', 'Countess', 'Capt', 'Col',
        'Don', 'Dr', 'Major', 'Rev', 'Sir', 'Jonkheer', 'Dona'], 'Rare')

    dataset['Title'] = dataset['Title'].replace('Mlle', 'Miss')
    dataset['Title'] = dataset['Title'].replace('Ms', 'Miss')
    dataset['Title'] = dataset['Title'].replace('Mme', 'Mrs')

… but in Prep, it’s a simple group and replace within a cleaning step.

The title_survival.py script enables analysis of the survival rates by title, again using the pandas groupby mean operation.

import pandas as pd

def title_survival(df):
    train_df = df.query('TestData == 0')
    return train_df[['Title', 'Survived']].groupby(['Title'], as_index=False).mean().sort_values(by='Survived', ascending=False)

def get_output_schema():
    return pd.DataFrame({
        'Title': prep_string(),
        'Survived': prep_decimal(),
    })

… and the result looks like this in Prep:

Optimistic news for Kate, but it’s not looking good for Leo. If Captain Smith had married them, both of their chances would have improved.

One common step when doing this sort of analysis is “feature engineering”, where you combine aspects of multiple features to improve the accuracy of the prediction. In this case we are going to create a new column called “FamilySize” by combining the sibling/spouse (SibSp) and parent/child (Parch) columns. The Python function for this couldn’t be simpler:

def family_size(df):
    df['FamilySize'] = df['SibSp'] + df['Parch'] + 1
    return df

This function adds the “FamilySize” column to the dataframe. Since the dataframe we are returning differs from the one we received, we need to describe it with a get_output_schema() function. Note that the FamilySize column has been added with an int datatype:

def get_output_schema():
    return pd.DataFrame({
        'TestData': prep_int(),
        'AgeBands': prep_int(),
        'Embarked': prep_string(),
        'Pclass': prep_int(),
        'Survived': prep_int(),
        'Parch': prep_int(),
        'SexNum': prep_int(),
        'TitleNum': prep_int(),
        'Title': prep_string(),
        'Name': prep_string(),
        'SibSp': prep_int(),
        'Fare': prep_decimal(),
        'FamilySize': prep_int(),
        'Sex': prep_string(),
        'Age': prep_decimal(),
    })

Evaluating algorithms

The author of the blog post on which I’m basing this article documents eight different algorithms. We’ll try three here: logistic regression, support vector machines, and random forest. In the logistic regression code, we’ll not only run the regression, but output the feature correlations as well:

import pandas as pd

def log_reg(df):
    from sklearn.linear_model import LogisticRegression
    logreg = LogisticRegression()

    train_df = df.query('TestData==0')
    test_df = df.query('TestData==1')

    # move the survived column from the training data over to a separate dataset 
    # because it's the dependent variable we need to pass to the algorithm
    X_train = train_df.drop("Survived", axis=1)
    Y_train = train_df["Survived"]

    # drop the "survived" column from the test data since it's empty
    X_test  = test_df.drop("Survived", axis=1).copy()

    logreg.fit(X_train, Y_train) # tell the logistic regression algorithm to build 
        # a model based on the training data
    Y_pred = logreg.predict(X_test) # predict survival on the test data

    # output the accuracy of the algorithm
    acc_log = round(logreg.score(X_train, Y_train) * 100, 2)
    print(acc_log)
    lr_score = pd.DataFrame(data={'algorithm':['Logistic Regression'],'score':[acc_log]})

    # find coefficient for each feature
    coeff_df = pd.DataFrame(train_df.columns.delete(0))
    coeff_df.columns = ['Feature']
    coeff_df["Correlation"] = pd.Series(logreg.coef_[0])
    coeff_df = coeff_df.sort_values(by='Correlation', ascending=False)  # assign the result so the print below shows the sorted order
    print(coeff_df)

    return lr_score
    

def get_output_schema():
    return pd.DataFrame({
        'algorithm': prep_string(),
        'score': prep_decimal(),
    })

To step through the code above: first we divide the incoming dataframe into test and training dataframes, as discussed toward the top of this article. Then we move the dependent variable, Survived, over to a separate dataframe, because that’s what the algorithm needs. We then call the fit function, which tells the algorithm to build a prediction model based on the training data. Next, we call the predict function, telling the algorithm to predict survival for the test data.

acc_log is the model’s accuracy score against the training data. We’ll ask for one of those from each of the algorithms.

accuracy scores for each algorithm

coeff_df is a dataframe we build containing a row for each feature and an estimate of how influential each feature is on the dependent variable – survival. Sex is most important, followed by wealth (fare) and title.

Correlation of features to survival
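The support vector machines script isn’t reprinted in this article, but it follows exactly the same pattern as log_reg. Here is a sketch of what it might look like; the file in the github repo is the authority:

import pandas as pd

def svm_score(df):
    from sklearn.svm import SVC

    train_df = df.query('TestData==0')
    X_train = train_df.drop("Survived", axis=1)
    Y_train = train_df["Survived"]

    svc = SVC()
    svc.fit(X_train, Y_train)

    # as with log_reg, score the model against the training data
    acc_svc = round(svc.score(X_train, Y_train) * 100, 2)
    return pd.DataFrame(data={'algorithm': ['Support Vector Machines'], 'score': [acc_svc]})

def get_output_schema():
    return pd.DataFrame({
        'algorithm': prep_string(),
        'score': prep_decimal(),
    })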

Predicting Survival

Based on the evaluations of these few algorithms, it makes sense to use random forest. In real life a data scientist might evaluate more algorithms, and combinations of algorithms, to find the most accurate prediction.

def predict_survival(df):
    from sklearn.ensemble import RandomForestClassifier

    train_df = df.query('TestData==0')
    test_df = df.query('TestData==1')

    X_train = train_df.drop("Survived", axis=1)
    Y_train = train_df["Survived"]

    X_test  = test_df.drop("Survived", axis=1).copy()

    random_forest = RandomForestClassifier(n_estimators=100)
    random_forest.fit(X_train, Y_train)
    Y_pred = random_forest.predict(X_test)
    
    X_test['survive_pred'] = Y_pred
    return X_test

def get_output_schema():
    return pd.DataFrame({
        'EmbarkedNum': prep_int(),
        'Pclass': prep_int(),
        'FamilySize': prep_int(),
        'AgeBands': prep_int(),
        'SexNum': prep_int(),
        'TitleNum': prep_int(),
        'IsAlone': prep_int(),
        'Fare': prep_decimal(),
        'FareBin': prep_int(),
        'survive_pred': prep_decimal(),
    })

This code follows exactly the same pattern as the logistic regression code in the example above. Rather than the logistic regression estimator, it calls the random forest classifier. It calls the fit function, then the predict function, saving the result to a new column on the test dataframe. That test dataframe is then returned, and we can output it as the result of our flow:

Summary of Titanic Learnings

Tableau Prep can now make use of Python scripts. Getting it working is pretty straightforward, with a few technical details that we have covered in this article.

Feedback on this article is encouraged and always welcome. Thanks for reading, and for using Tableau!

Acknowledgments

The shoulders on which I stand in writing this article include Joshua Milligan, bashii-Iwahashi Tomohiro (thank you Google/Chrome for translating), and Tom.

DB Write-Back Part 5: Putting it All Together to Write a SQL Azure Write-Back Extension

In the first four parts of this series, we incrementally built up an understanding of the technologies, libraries, and concepts needed to author a Tableau Extension that writes to a database. To wrap it all up, we will now build an extension that will persist a user’s stack-ranking of a list of items to a SQL Azure table. The name of the user, name of the item, and numeric rank will be inserted.

The working code for part 5 is checked in to the TabBlogs repository on github.

First, we’ll generate a new skeleton for our project in the same way we did for part 4:

  • In a command prompt at your root directory, type "express --no-view part5" (two dashes before no-view). This assumes you have Express generator installed from the earlier articles.
  • Navigate to the part5 directory.
  • Type "npm install" to initialize the environment.
  • Confirm that everything is working:
    • type “npm start” to start the web server with the new environment
    • browse to http://localhost:3000. You should see “Welcome to Express”.

We have some work to do on index.html! We will need a place to show the user the items they have selected and a way for them to express their stack-rank of those items.

  • Add a script source tag for Tableau Extensions.
  • Add a script source tag for jQuery.
  • Add script source tags for DataTables. The DataTables library makes attractive, functional table controls very easy to build.
  • Add a reference to the mark selection style sheet.

The head section of index.html:

<head>
  <title>TabBlogs.com DB Write-Back Tableau Extension: Soup To Nuts</title>
  <link rel="stylesheet" href="/stylesheets/style.css">
  
  <!-- Tableau extension -->
  <script src="tableau.extensions.1.2.0.js"></script>
  
  <!-- jQuery -->                                                    
  <script src="https://code.jquery.com/jquery-3.2.1.min.js"></script>

  <!-- Copied from Tableau Extensions Tutorial -->
  <!-- Required meta tags -->
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
  <!-- These two libraries are copied from the download builder here: https://datatables.net/download/ -->
  <link rel="stylesheet" type="text/css" href="https://cdn.datatables.net/v/bs-3.3.7/jq-3.2.1/jq-3.2.1/dt-1.10.16/b-1.4.2/b-colvis-1.4.2/cr-1.4.1/fh-3.1.3/r-2.2.0/sc-1.4.3/datatables.min.css"/>
  <script type="text/javascript" src="https://cdn.datatables.net/v/bs-3.3.7/jq-3.2.1/jq-3.2.1/dt-1.10.16/b-1.4.2/b-colvis-1.4.2/cr-1.4.1/fh-3.1.3/r-2.2.0/sc-1.4.3/datatables.min.js"></script>
  <!-- Include the rowReorder datatables extension. (https://datatables.net/extensions/rowreorder/) -->
  <script type="text/javascript" src="https://cdn.datatables.net/rowreorder/1.2.5/js/dataTables.rowReorder.min.js"></script>
  <!-- Custom styling from Tableau Extensions Tutorial -->
  <link rel="stylesheet" href="./stylesheets/MarksSelection.css">

  <!-- references to our code -->
  <script src="./javascripts/part5.js"></script>
  <script src="./javascripts/appDB.js"></script>
</head>

The body section of index.html (below) contains five div regions. The outermost is just there to apply the “container-fluid” bootstrap format to everything. The div id “selected_marks” is not used. There’s a caption formatted as h4 with an id of “selected_marks_title”. We will use that id in part5.js to write the name of the “Sale Map” worksheet. The Tableau Extensions tutorial on which this html is based allows the user to select which worksheet they want to use. I’ve pulled that functionality out to simplify this page.

The next div has an id of “data_table_wrapper”. This is where we will place the datatable in our JavaScript in part5.js. The next div called “no_data_message” will only show if there is no datatable because the user has not selected any marks on the sale map.

We then have one last div section, where we obtain the user’s name (id userName) to persist to the database and give the user a button to push when they have finished selecting and ranking. This button has an onclick attribute with a value of “postRank()”, meaning the postRank function, over in appDB.js, will be called when the user pushes the button.

<body>
    <div class="container-fluid">
      <div id="selected_marks">
        <h4>
          Favorite states from <span class="sheet_name" id="selected_marks_title">Sheet Name</span>
        </h4>
        <div id="data_table_wrapper"></div>
        <div id="no_data_message">
          <h5>No marks selected. Please select your top five favorite states.</h5>
        </div>
      </div>
      <div>
        <label>What's your name? </label>
        <input type="text" id="userName" placeholder="Your name here"></input>
        <button id="submitRank" onclick="postRank()">Submit My Ranking</button>
      </div>
    </div>
</body>

Add a part5.js file to the javascripts directory. I’ve re-used and simplified some code from the Tableau Extensions Tutorial to pull data from the selected marks in the “Sale Map” worksheet and display it in a datatable control.

All of the code in part5.js is wrapped in an immediately invoked, unnamed function. This common practice isolates the variables and functions defined inside it, avoiding the risk of contaminating the JavaScript global namespace in our browser instance and potentially overwriting something we didn’t know existed. part5.js is just going to load and run when index.html loads, so there is no need for these functions to be callable from outside the file.

The code in part5.js is copied from the Tableau Extensions Tutorial, except for the “rowReorder: true” attribute of the data_table. This attribute activates the dataTables.rowReorder.min.js library that we pulled in in the index.html head section, and enables the user to drag and drop table rows to reorder them. It’s a little fiddly, only allowing a row to be moved up or down one row at a time, but it’s a useful feature.

(function() {
  // load the Tableau Extensions API, then display marks from the Sale Map worksheet
  // (initializeAsync is the dashboard-extension initializer; initializeDialogAsync is only for popup dialogs)
  tableau.extensions.initializeAsync().then(() => {
    loadSelectedMarks("Sale Map");
  });
     
  // This variable will save off the function we can call to unregister listening to marks-selected events
  let unregisterEventHandlerFunction;
  function loadSelectedMarks (worksheetName) {
    // Remove any existing event listeners
    if (unregisterEventHandlerFunction) {
      unregisterEventHandlerFunction();
    }
    
    // Get the worksheet object we want to get the selected marks for
    const worksheet = getSelectedSheet(worksheetName);

    // Set our title to an appropriate value
    $('#selected_marks_title').text(worksheet.name);

    // Call to get the selected marks for our sheet
    worksheet.getSelectedMarksAsync().then( (marks) => {
      // Get the first DataTable for our selected marks (usually there is just one)
      const worksheetData = marks.data[0];

      // Map our data into the format which the data table component expects it
      const data = worksheetData.data.map((row, index)=> {
        const rowData = row.map((cell) => {
          return cell.formattedValue;
        });

        return rowData;
      });

      const columns = worksheetData.columns.map( (column)=> {
        return { title: column.fieldName };
      });

      // Populate the data table with the rows and columns we just pulled out
      populateDataTable(data, columns);
    });

      // Add an event listener for the selection changed event on this sheet.
    unregisterEventHandlerFunction = worksheet.addEventListener(tableau.TableauEventType.MarkSelectionChanged, (selectionEvent) => {
      // When the selection changes, reload the data
      loadSelectedMarks(worksheetName);
    });
  }

  function populateDataTable (data, columns) {
    // Do some UI setup here to change the visible section and reinitialize the table
    $('#data_table_wrapper').empty();

    if (data.length > 0) {
      $('#no_data_message').css('display', 'none');
      $('#data_table_wrapper').append(`<table id='data_table' class='table table-striped table-bordered'></table>`);

      // Do some math to compute the height we want the data table to be
      var top = $('#data_table_wrapper')[0].getBoundingClientRect().top;
      var height = $(document).height() - top - 130;

      // Initialize our data table with what we just gathered
      $('#data_table').DataTable({
        data: data,
        columns: columns,
        autoWidth: true,
        deferRender: true,
        scroller: true,
        scrollY: height,
        scrollX: true,
        dom: "<'row'<'col-sm-6'i><'col-sm-6'f>><'row'<'col-sm-12'tr>>", // Do some custom styling
        rowReorder: true
      });
    } else {
      // If we didn't get any rows back, there must be no marks selected
      $('#no_data_message').css('display', 'inline');
    }
  }

  function getSelectedSheet (worksheetName) {
    // Go through all the worksheets in the dashboard and find the one we want
    return tableau.extensions.dashboardContent.dashboard.worksheets.find((sheet)=> {
      return sheet.name === worksheetName;
    });
  }
})();

AppDB.js is a library containing two functions: postData and postRank. postData is almost identical to the fetchData function from appDB.js of part4. postRank is the function called when the user presses the button on index.html. The first line of postRank takes the list of state names from the first column of the datatable that was created by part5.js and places them neatly into an array. The next line declares an object (called postObject) that we will need to pass our data through the fetch in postData. We also declare an array called insertData to store the data set we are going to build and pass to the database code.

The for loop in postRank iterates through the array of state names, pushing three name/value pairs into insertData for each row of data to be inserted into our database table. We then place that array into postObject, use the built-in JSON.stringify function to prepare it to be sent through a fetch, and call postData with the stringified object.

Copy AppDB.js from part4 and modify it as below, or bring it in from the github part5 folder.

let stateList = [];

postData = (insertData) => {
  return new Promise((resolve, reject) => {
    fetch("/postData", {
      method: "POST",
      headers: {
        "Content-type": "application/json"
      },
      body: insertData
    })
      .then(function(response) {
        return response.json();
      })
      .then(function(myJson) {
        $('#result').text(JSON.stringify(myJson));
        resolve(myJson); // settle the promise so callers can act on the result
      })
      .catch(function(err) {
        reject(err); // settle the promise with the error rather than swallowing it
      });
  });
}

function postRank(){
  var stateNames = $('#data_table').DataTable().columns(0).data().toArray()[0]; //push the first column of the html table into an array
  var postObject = {};
  var insertData = []; 
  const name = $('#userName').val();

  for (let i = 0; i < stateNames.length; i++) {
    const state = stateNames[i];
    const index = i+1;
    insertData.push({userName:name});
    insertData.push({stateName:state});
    insertData.push({stateRank:index});
  }

  postObject.data = insertData;
  postData(JSON.stringify(postObject));
};

db.js is modified from part4 in order to format the data passed from appDB.js into a T-SQL INSERT statement.

const sql = require('mssql');

// Enter your own credentials in credentialsExample.js and rename it to credentials.js
const credentials = require("./credentials.js");

exports.insertData = (req, res) => {
    let values = "";

    let data = req.body.data;

    // Walk the parsed JSON and build up the VALUES clause of the INSERT.
    // NOTE: concatenating user input into SQL like this is open to SQL
    // injection; it's fine for a local demo, but use parameterized queries
    // for anything real.
    let jsondata = JSON.stringify(data);
    JSON.parse(jsondata, (key, value) => {
        switch (key) {
            case "userName":
                values += "('" + value + "',";
                break;
            case "stateName":
                values += "'" + value + "',";
                break;
            case "stateRank":
                values += "'" + value + "'),";
                break;
        }
    });

    // trim the trailing comma
    const valueArray = values.substr(0, values.length - 1);

    let insertQuery = `INSERT INTO ranking.dbo.states (userName,stateName,stateRank) VALUES ${valueArray};`;

    //console.log(insertQuery);

    //send the insertQuery
    new sql.ConnectionPool(credentials.config)
        .connect()
        .then((pool) => {
            return pool.request().query(insertQuery);
        })
        .then((result) => {
            console.log(result);
            sql.close();
        })
        .catch((err) => {
            sql.close(); // close the connection before surfacing the error
            if (err) throw err;
        });
    res.end();
};

// unused helper left over from an earlier iteration
function addQuote(val) {
    return val.length ? "'" + val.join("','") + "'" : "";
}

module.exports = exports;

You will need a credentials.js file containing your SQL Azure credentials. Rename the credentialsExample.js file and edit it with your information. The state.sql file contains a CREATE TABLE script to build the simple schema you will need.
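If you want to sanity-check the shape of that file, the mssql package expects a config object roughly like the sketch below. Every value here is a placeholder; credentialsExample.js in the repo is the authority.

// credentials.js -- placeholder values; keep this file out of source control
exports.config = {
    user: 'your_sql_login',
    password: 'your_password',
    server: 'yourserver.database.windows.net',
    database: 'ranking', // the database containing the dbo.states table
    options: {
        encrypt: true // required when connecting to SQL Azure
    }
};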

With index.html creating the UI of our extension; app.js establishing the Express routes that we need; part5.js reading the selected marks from the dashboard and creating a datatable in our extension UI; appDB.js reading the sorted table and building the data set with which to call SQL Azure; credentials.js supplying your credentials; and db.js making the database call, everything is in place.

Select a few of your favorite US states, sort your table by something other than the state name, then you will be able to drag and drop the rows of the table according to your preference of US states. Enter your name, then hit the “Submit My Ranking” button. Go query your SQL Azure table and you should see:

DB Write-Back Part 4: Build a Standalone JavaScript Program to Query SQL Azure

Querying a database from JavaScript requires us to employ a few new concepts and a library or two. JavaScript was initially designed to render HTML in browsers, but the language has been augmented over time to be capable of doing work on servers as well. Connectivity to a database server is asynchronous and must be executed on the server side.

Before we build a write-back extension, we will build a stand-alone JavaScript web app that queries a SQL Server. This should provide the reader with a simplified code base to build understanding. Once we have that nailed down, we will bring it into our extension.

Configuring Local Machine Infrastructure

To set up the infrastructure specific to this article on your local machine, take the following steps:

  • Create a new part4 directory under the project directory from part1-3.
  • Open a command prompt (terminal) in the new directory.
  • From the command prompt, run npm install express-generator -g

    The -g flag means to do a global rather than local installation. Express generator will quickly create an application skeleton for us, saving a lot of time building files and directories from scratch.

  • Type "express --no-view dbjs" (those are two dashes preceding no-view) at the command prompt. Express will create a directory called dbjs and a skeleton project within the new directory.

  • Stop the node.js server process that you were running for part 3. We will start a new web server below, and it will source its web content from a different directory structure.

  • Follow the instructions that Express provides (this is for Windows; other platforms will be slightly different):

    change directory:

    > cd dbjs

    install dependencies:

    > npm install

    run the app:

    > npm start

  • Now open a browser to http://localhost:3000 . You should see a simple “Welcome to Express” page.

  • Close the browser page and CTRL+C in the command line to stop the node web server you’ve just tested.

  • We’ll run our node server with nodemon so that we don’t have to restart it manually every time we make a change to our server side code, so in the command line, run:

    npm install nodemon -g

  • We’ll need it in a minute, so install the Microsoft SQL Server client for Node.js by running "npm install mssql" from the command prompt. npm (node package manager) is the “world’s largest software library”, and this command-line utility will install or update whatever libraries you need for your project.

We will use SQL Azure as our database. If you do not already have one, follow these instructions to set up a sample database. Don’t forget to create a firewall rule to enable connectivity to the Azure SQL instance from your current network. Also, don’t forget to save that firewall rule once you have it configured. Of course you can use a local or on-premise instance of SQL Server if you would prefer.

Adding Database Query Code

The necessary files can be copied from github to the directory we created in the infrastructure steps above. Place db.js and appDB.js in the javascripts subfolder of the public folder.

Edit the index.html file, adding the following appDB.js script source line to the head and the following new div to the body.

<!--add this to the head section of index.html-->
<script src="./javascripts/appDB.js"></script> 
<!-- Add this to the bottom of the body of the index.html file-->
<div>
  <span id="result">This text should be replaced with data.</span>
</div>

So index.html will look like this:

<html>

<head>
  <title>Express</title>
  <link rel="stylesheet" href="/stylesheets/style.css">

  <!-- jQuery -->                                                    
  <script src="https://code.jquery.com/jquery-3.3.1.min.js"></script>
  
  <script src="./javascripts/appDB.js"></script>  <!-- NEW  -->
</head>

<body>
  <h1>Express</h1>
  <p>Welcome to Express</p>

  <!-- NEW  -->
  <div>
    <span id="result">This text should be replaced with data.</span>
  </div>

</body>

</html>

Either replace app.js with the one from github or add these lines of JavaScript above the export (last) line:

const db = require("./public/javascripts/db");
app.post("/getData", db.query);

App.js will look like this. All but two lines were automatically created by Express Generator:

var express = require('express');
var path = require('path');
var cookieParser = require('cookie-parser');
var logger = require('morgan');

var indexRouter = require('./routes/index');
var usersRouter = require('./routes/users');

var app = express();

app.use(logger('dev'));
app.use(express.json());
app.use(express.urlencoded({ extended: false }));
app.use(cookieParser());
app.use(express.static(path.join(__dirname, 'public')));

app.use('/', indexRouter);
app.use('/users', usersRouter);

const db = require("./public/javascripts/db");
app.post("/getData", db.query);

module.exports = app;

Rename the credentialsExample.js file to credentials.js, and edit the file to add your credentials and Azure SQL database name.

Now start the app with "nodemon app.js" at your command line in the directory of our new web app.

Browse to http://localhost:3000, and if all is configured correctly you should see the Welcome to Express page, with the placeholder text replaced by data from the database:

What have we done here?

The app.js file is used to configure our little web app. A route is used by Express to direct requests. App.js already had routes configured for the root (‘/’, when you browse to localhost:3000) and /users (as in localhost:3000/users) from our initial setup. We don’t have any code implemented to handle a request to /users, so nothing will happen at that url. We added a route for /getData, specifying that when it is called, the code in our db.js file is to be run.

When we browse to http://localhost:3000, our Node server responds by looking up the route for ‘/’, the root of our site, and passing that request to ./routes/index as configured in app.js. This causes the server to load and run ./routes/index.js, which renders index.html to the browser.

The browser receives the index.html document from the server and parses it. From the head section, it invokes appDB.js. The browser retrieves that file from the server and executes it. In the fetchData function of appDB.js, we see fetch(“/getData”… which tells the browser to call back (using a POST request as opposed to a GET request) to the server at the localhost:3000/getData path.

Back on the server, line 21 of app.js declares what to do when a POST to /getData is received: it calls the query function in the db.js file. Our connection to the database is made from code running on the web server, not from our browser. As you get used to this “running on the server” vs. “running in the browser” idea, you might insert some console.log(“I’m doing this now”) statements throughout these JavaScript files. If the code is running on the server, the text will appear on the command line where you started the web server with nodemon app.js. If it is running in the browser, it will appear in the browser’s debug console (CTRL+SHIFT+I in Chrome on Windows), or in the extension debugger if you’re working with an extension.

The db.js file contains our database vendor specific functionality. The code comes from a sample on the mssql npm page. This is where you would swap out the SQL Server specific code for AWS, Google, Oracle, MySQL or other database connectivity code. In this file we are using the mssql package to create a connection pool, send a query, receive the result, and close the connection.
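The part 4 db.js isn’t reprinted here, but based on that mssql sample, its query function looks roughly like this sketch. The SELECT statement is a placeholder assuming the AdventureWorksLT sample database; the file in the part4 folder on github is the authority:

// db.js (sketch) -- see the part4 folder on github for the real file
const sql = require('mssql');
const credentials = require('./credentials.js');

exports.query = (req, res) => {
    new sql.ConnectionPool(credentials.config)
        .connect()
        .then((pool) => {
            // placeholder query against the AdventureWorksLT sample schema
            return pool.request().query('SELECT TOP 5 FirstName, LastName FROM SalesLT.Customer;');
        })
        .then((result) => {
            res.json(result.recordset); // send the rows back to the browser
            sql.close();
        })
        .catch((err) => {
            sql.close();
            res.status(500).send(err.message); // surface the error to the caller
        });
};

module.exports = exports;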

The query result from SQL Server is sent from our Node web server back to the appDB.js code still running in our browser. The “.then” syntax below the fetch call is how JavaScript handles asynchronous processes like querying a database. The fetchData function parses the JSON result set, stringifies it, and updates the value of the “result” HTML element. On my machine, I can watch the page render and, a second later, see “This text should be replaced with data” replaced with our data. We are not worrying about formatting this data at this point, because lazy.
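For reference, here is a minimal sketch of the fetchData function just described; the appDB.js in the part4 folder on github is the authority, so treat any differences there as canonical:

// appDB.js (sketch)
fetchData = () => {
  fetch("/getData", {
    method: "POST",
    headers: { "Content-type": "application/json" }
  })
    .then((response) => response.json()) // parse the JSON body of the reply
    .then((myJson) => {
      // replace the placeholder text with the stringified result set
      $('#result').text(JSON.stringify(myJson));
    });
};

$(document).ready(fetchData); // run the query once the page has loaded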

Summary

Writing to a database requires a JavaScript developer to learn a new set of server side skills. In this article we created a connection to and queried a SQL Azure table, outputting the results to a web page.

In part 5, we will take these concepts and add them to the extension concepts from part 3 to make an extension that writes back to a database.

DB Write-Back Part 3: Add Tableau Dashboard and Worksheet Name List to Extension UI

Let’s show the name of the dashboard and the names of the worksheets within the dashboard on our extension’s page.

Output the Dashboard Name

Create a part3 directory, and create part3a.html and part3a.js files within it.

part3a.html:

<!DOCTYPE html>
<html lang="en">
  <head>
    <script src="../lib/tableau.extensions.1.latest.js" defer></script>
        
    <!-- jQuery -->                                                    
    <script src="https://code.jquery.com/jquery-3.2.1.min.js"></script>
    
    <!-- Reference our JavaScript file -->
    <script src="./part3a.js" defer></script>  
    
    <title>TabBlogs.com DB Write-Back Tableau Extension: Soup To Nuts</title>
  </head>
  <body>
    <p id="dashboardName"></p>
  </body>
</html>

In part3a.html, we’ve changed the body of the html page to contain just one element, an empty paragraph with the element id “dashboardName”. If we were to just run this html page without calling part3a.js, it would give us a blank page. We’ll use jQuery in our js file to assign a value to the dashboardName html element.

part3a.js:

tableau.extensions.initializeAsync().then(() => {
    const dashboard = tableau.extensions.dashboardContent.dashboard;
    
    //Use jQuery to select the html element #dashboardName and assign it the name of the Tableau dashboard.
    $('#dashboardName').text('Dashboard name: '+dashboard.name);
});

The constant declaration is simply creating a shorthand variable for the Tableau extension dashboard object.

The “$(‘#dashboardName’)…” line calls the jQuery function (abbreviated “$”) in order to get access to the “dashboardName’ html element in the part3a.html document object. Once we have access to the element, we assign it a string value by concatenating ‘Dashboard name: ‘ and the value of the name property of the dashboard.

Edit your trex file to refer to the part3a.html file.

<source-location>
  <url>http://localhost:3000/part3/part3a.html</url>
</source-location> 

Re-load the extension, and you should see this:


Output the Workbook Names in an HTML Table

part3b.html:

<!DOCTYPE html>
<html lang="en">
  <head>
    <script src="../lib/tableau.extensions.1.latest.js" defer></script>
        
    <!-- jQuery -->                                                    
    <script src="https://code.jquery.com/jquery-3.2.1.min.js"></script>
    
    <!-- Reference our JavaScript file -->
    <script src="/part3/part3b.js" defer></script>  
    
    <title>TabBlogs.com DB Write-Back Tableau Extension: Soup To Nuts</title>
  </head>
  <body>
    <p id="dashboardName"></p>
    <table id="worksheets">
      <tbody></tbody>
    </table>
  </body>
</html>

In part3b.html, we’ve added a table with an element id of #worksheets and an empty body.

part3b.js:

tableau.extensions.initializeAsync().then(() => {
    const dashboard = tableau.extensions.dashboardContent.dashboard;
    
    //Use jQuery to select the html element #dashboardName and assign it the name of the Tableau dashboard.
    $('#dashboardName').text('Dashboard name: '+dashboard.name);
    
    dashboard.worksheets.forEach( worksheet => {
      $('#worksheets > tbody').append(`<tr><td>Worksheet: ${worksheet.name}</td></tr>`);
    });
    
});

In part3b.js, we have added a forEach loop that iterates through each worksheet in the dashboard. For each worksheet, we call the jQuery function to get access to the #worksheets table’s body, and append a new row containing the worksheet name. It’s easy to miss, but in order to use JavaScript ES6 standard template literals (that ${worksheet.name} placeholder in the string we’re appending), we need to wrap the string in backticks rather than single quotes. Absolutely maddening to replicate / debug if you don’t know that.
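To make the backtick trap concrete, a quick comparison (hypothetical values):

const name = 'Sale Map';
console.log('Worksheet: ${name}'); // single quotes: prints the literal text ${name}
console.log(`Worksheet: ${name}`); // backticks: prints Worksheet: Sale Map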

Edit your trex file to refer to the part3b.html file.

<source-location>
  <url>http://localhost:3000/part3/part3b.html</url>
</source-location> 

Re-load the extension, and you should see this:

Formatting

Since the 20th century if not before, web developers have been using cascading style sheets (CSS) to format web pages. The 21st century has brought us Bootstrap, an open source toolkit that shortcuts many CSS formatting tasks.

part3c.html:

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <script src="../lib/tableau.extensions.1.latest.js" defer></script>
        
    <!-- jQuery -->                                                    
    <script src="https://code.jquery.com/jquery-3.3.1.min.js"></script>
    
    <!-- Bootstrap -->
    <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.4.0/css/bootstrap.min.css" >
    <script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.4.0/js/bootstrap.min.js" ></script>
    
    <!-- Reference our JavaScript file -->
    <script src="./part3c.js" defer></script>  
    
    <title>TabBlogs.com DB Write-Back Tableau Extension: Soup To Nuts</title>
  </head>
  <body>
    <p id="dashboardName" class="lead"></p>
    <table id="worksheets" class="table-bordered">
      <tbody></tbody>
    </table>
  </body>
</html>

Part3c.html includes two additional references, a Bootstrap CSS file and a Bootstrap JavaScript file. Within the body of part3c.html, we have added class="lead" to the dashboardName paragraph tag in order to have Bootstrap format it for us. We have also added the Bootstrap class table-bordered to the worksheets table.

There are no changes in part3c.js from part3b.js.

Edit your trex file to point to part3c and reload the extension to see the subtle effect that Bootstrap has on the format of the extension.

Summary

The Tableau extensions dashboard object gives us programmatic access to the dashboard, worksheets and other aspects of the Tableau environment. This article has walked through the process of accessing some of those attributes and writing them to a web page.

In Part 4, we will step back from extension development and explore SQL Server connectivity from a JavaScript app.

DB Write-Back Part 2: Tableau Debug Mode & Debugger Command Line

Debugging is an essential part of writing any program. Tableau Desktop has a debug mode to enable us to understand what is happening with our extension code. To debug a Tableau extension, the first step is to start Tableau Desktop (first close it if you still have it open from Part 1) in debug mode. It is practical to create a desktop shortcut with this target (if you are using Windows):

"C:\Program Files\Tableau\Tableau 2019.2\bin\tableau.exe" --remote-debugging-port=8696

Obviously, you should replace my path with the path to your current version of Tableau desktop. You can also replace the port (8696) with any other unused port number on your machine.

Now, use your new shortcut to open Tableau Desktop in debug mode. Open the Superstore sample and the overview dashboard again, then open the extension from part 1. Now, open Chrome (or Chromium if you’re not yet using at least Tableau 2019.1, details here) and browse to http://localhost:8696.

Click on the inspectable page you see to start the debugger. Click into the “Console” tab. Any errors that have been raised from your extension will show up here. This console is also interactive. Type

console.log("Hi!")

… and the debugger will respond with an enthusiastic greeting. Now let’s try to explore the tableau API. To get a list of the worksheets in this dashboard, type:

tableau.extensions.dashboardContent.dashboard.worksheets

… and the response will be “tableau is not defined”. Whoops. We haven’t yet created and initialized a reference to the Tableau Extensions library in our extension web app, so our web app cannot yet use the Tableau Extensions API. Tableau doesn’t know what we’re talking about.

Let’s fix that.

Environment Setup

Example Code

The example code for this whole series of articles is posted to github at https://github.com/tabblogs/TableauDBWBExtensionsStN. Feel free to pull that whole thing down to your machine or just look there occasionally for clarity.

Code Editing

You will need an editor. I use Microsoft Visual Studio Code and recommend it. You can add extensions to customize and enhance your experience. You can open a terminal window to run your web server right there in the editor. Great stuff.

Setup Steps

In order to further explore Tableau extensions, you will need to set up a developer environment on your machine.

  • Either create a new directory on your machine, or use the directory you created for part 1. I put mine (on Windows) in …/documents/dev/extensions/.
  • Open a command prompt (terminal) in the new directory.
  • Install NodeJS and NPM. (Windows, Mac, Linux).
  • To verify your installation, type node -v and then npm -v at the command prompt. Node and npm should respond with their respective versions.
  • Install the express JavaScript library: type npm install express at the command prompt.

Create a file called app.js in the same directory. Paste the following JavaScript into app.js:

const express = require("express");
const app = express();
const port = 3000;

app.use(express.static('.'));

//start the server
app.listen(port, function() {
    console.log("Our server is running on " + port);
});

Express “is a minimal and flexible Node.js web application framework that provides a robust set of features for web and mobile applications.” We are using it now to configure a simple web server for our local use, and we will use Express in future articles to route requests within our web app.

Node.js is a JavaScript run-time environment built on Chrome’s V8 JavaScript engine. This article is a great introduction, and throws in some Express intro content as well.

The code in app.js creates an instance of Express, then configures a static route at the root directory of our project (the app.use… line). This means that it will serve anything that we ask for in a browser in the directory structure under our project root directory. The app.listen line starts our web server listening on the port we specified.

  • Start the web server by typing “node app.js” at the command prompt. It should respond with “Our server is running on 3000”
  • If you would like to test your configuration, copy part1.html into the root directory (or in a part1 directory if you would like). Browse to http://localhost:3000/part1.html, and you should see the web page from part 1.
  • Create a directory under the project root directory called part2.
  • Create a file in the part2 directory called part2.html.
<!-- part2.html -->
<!DOCTYPE html>
<html lang="en">
  <head>
    <!-- Tableau API library -->
    <script src="../lib/tableau-extensions-1.latest.js" defer></script>
    
    <!-- jQuery -->                                                    
    <script src="https://code.jquery.com/jquery-3.2.1.min.js"></script>

    <!-- reference our Javascript functions -->
    <script src="/part2/Part2.js" defer></script>  
        
    <title>Extension Quark</title>
  </head>
  <body>
    <p>Referencing the Tableau Extension JavaScript library.</p>
  </body>
</html>

The part2.html file includes a reference to the Tableau extensions JavaScript library (the first script src=… line). The two periods in ../lib indicate that the lib directory sits one level up from the page’s own directory, directly under the project root. Download the Tableau Extension library files from https://github.com/tableau/extensions-api/tree/master/lib and place them in a new directory called lib under your root directory. (I have had better luck downloading the whole 337MB zip file and pulling the libraries from there than downloading them individually.)

The “defer” attribute in the script source tag specifies that the script executes only after the page has finished parsing. Without that attribute, our script would run before the objects it refers to exist in the HTML document object model (DOM), and nothing would happen. Frustrating.

Our latest html file also contains a reference to jQuery: “jQuery is a fast, small, and feature-rich JavaScript library. It makes things like HTML document traversal and manipulation, event handling, animation, and Ajax much simpler with an easy-to-use API that works across a multitude of browsers.” part2.html also contains a reference to part2.js, our first JavaScript file.

Create part2.js in the part2 directory. Part2.js contains just a few lines of code:

tableau.extensions.initializeAsync().then(
  () => {
    $('body').append('<p>Extension library initialized!</p>');
  }
);

Okay, buckle up. The first line initializes the Tableau Extensions API by calling its initializeAsync function; we need to do that before we can interact with the API in the debugger console or anywhere else. The “then” function uses the JavaScript Promises API to let our code wait until the extension library is initialized before performing an action. Everything we do in JavaScript having to do with extensions will happen within the set of curly braces {} after this “then” function, and any Extension API function with a name ending in “Async” must be handled with a “then” function. The ()=> is JavaScript’s arrow shorthand for an anonymous function(). The line of code within the {} calls the jQuery function (the $ is an abbreviation for jQuery) to get access to the ‘body’ element of our html page and append an html text paragraph announcing our success.
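If the arrow syntax is new to you, here is the same part2.js written with a traditional anonymous function; the two behave identically in this case:

tableau.extensions.initializeAsync().then(
  function () {
    $('body').append('<p>Extension library initialized!</p>');
  }
);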

In order to use part2.html, change your trex file to point to part2.html rather than part1.html.

<source-location>
  <url>http://localhost:3000/part2/part2.html</url>
</source-location>

Here’s a tip to avoid some frustration. Anytime you make a change in a trex file, you need to close and reopen the extension in Tableau Desktop. If you find yourself making changes to your code that aren’t appearing in your extension, you might have changed your trex file and forgotten to close and reopen the extension. In cases where you’ve made code changes, but haven’t needed to change your trex, simply select “reload” from the “more options” down arrow in the controls of your extension window.

Choose reload to check on changes in code within your extension. If you have changed the manifest file (.trex), you will have to close (with the X) and reopen the extension.

Interacting with the Tableau Debug Console

Now, reload the tabblogs_writeback.trex extension. Back in Chrome, you’ll need to re-open localhost:8696 because the extension we were debugging was closed.

Let’s ask the API how many worksheets are in the dashboard. Go back to the console at localhost:8696 in your browser and type:

tableau.extensions.dashboardContent.dashboard.worksheets.length

The Tableau extension API will reply (if you’re on the SuperStore Overview dashboard) with “4” because there are four worksheets in that dashboard.

Let’s find out the name of the first worksheet. Type:

tableau.extensions.dashboardContent.dashboard.worksheets[0].name

The Tableau extension API will reply (if you’re on the SuperStore Overview dashboard) with “Sale Map”.

Let’s find out how many individual objects Tableau counts in the dashboard. In the console, type:

tableau.extensions.dashboardContent.dashboard.objects.length

The Tableau extension API will reply (if you’re on the SuperStore Overview dashboard) with “16”.

What size is this dashboard? In the console, type:

tableau.extensions.dashboardContent.dashboard.size

What size is the “Sale Map” worksheet? In the console, type:

tableau.extensions.dashboardContent.dashboard.worksheets[0].size

This is just a peek into the information the Tableau extensions API can provide. Of course it also provides a set of functions (documented here) that enable us to manipulate the objects in the Tableau dashboard.

In part 3 we’ll start looking at the JavaScript that will make our extension a web app.

DB Write-Back Part 1: The Minimal Extension & Manifest File

Let’s start by building the minimal extension – the least possible code that we can call a Tableau extension.

Glitch.com is a site that enables JavaScript development with automated hosting of web pages and JavaScript apps. By using glitch to host part1.html, we’ll avoid setting up a desktop environment for this first baby step. Unfortunately, because the Tableau Extensions JavaScript library has grown past a key file size threshold, and because there is no official CDN for the Tableau extensions library at this point, this is as far as we can go with glitch.

Here’s part1.html:

<!DOCTYPE html>
<html lang="en">
  <head>
    <title>TabBlogs.com DB Write-Back Tableau Extension: How to Write the Code</title>
  </head>
  <body>
    <p>
      <b>Congratulations! </b>
      You've loaded your first extension!
    </p>
    <p>    </p>
    <p>The minimal extension is just a simple html page like this, but we are going well beyond this.</p>
    <p>This extension window is just a web browser using chromium version 61 (as of Tableau Desktop 2019.1).</p>
  </body>
</html> 

The Manifest

To instruct Tableau to open this extension, we will need a manifest file. This is a text file with a specific XML format and a “trex” filename extension. The required element details are:

  • For <dashboard-extension id=” “> use reverse domain name notation to uniquely identify the extension (com.example.extension.hello.demo)
  • For <source-location> make sure that this specifies the URL of your web application. You must use the HTTPS protocol. The exception to this requirement is localhost, where you can use HTTP. For example, if you created a HelloDemo folder and want to host the file locally on your computer using port 8765, you might use: http://localhost:8765/HelloDemo/HelloExtension.html
  • The <min-api-version> element specifies the minimum version of the Extensions API library that is required to run the extension.
  • For <icon> you must use a Base64-encoded icon. To use the default icon, copy and paste the <icon> example here, or copy one of the manifest files (.trex) from the samples.
  • Provide the name for your extension (Hello Extensions!). The manifest file can be localized, so provide the name (or names) in the appropriate <text> elements in the <resources> section.

Given those details, create a directory on your local machine, and within that directory create an empty file called tabblogs_writeback.trex. Paste the following XML into that file. The file is also available at https://tabblogs-dbwriteback.glitch.me.

<?xml version="1.0" encoding="utf-8"?> 
<manifest manifest-version="0.1" xmlns="http://www.tableau.com/xml/extension_manifest">
  <dashboard-extension id="com.example.extensions.name" extension-version="0.1.0">
    <default-locale>en_US</default-locale>
    <name resource-id="name"/>
    <description>TabBlogs.com extension article series.</description>
    <author name="USERNAME" email="USER@example.com" organization="My Company" website="https://www.example.com"/>
    <min-api-version>1.0</min-api-version>
    <source-location>
      <url>https://tabblogs-dbext-part1.glitch.me/</url> 
    </source-location>
    <icon></icon>
    <permissions>
      <permission>full data</permission>
    </permissions>
  </dashboard-extension>
  <resources>
    <resource id="name">
      <text locale="en_US">name in English</text>
      <text locale="fr_BE">name in French</text>
      <text locale="de_DE">name in German</text>
    </resource>
  </resources>
</manifest>

Now to implement our first extension.

  1. Open Tableau desktop; open the superstore sample to the overview dashboard.
  2. Left click and drag the extension icon from the “Objects” area onto the dashboard. Notice that you can place it on any border between dashboard tiles, under, over or to the side of the whole dashboard, or in a floating tile.
  3. In the “choose an extension” dialog, choose “My Extensions”, navigate to your project directory and choose the tabblogs_writeback.trex file that you created above.
  4. The “Allow Extension” dialog is the only place where you have a chance to (or not to) grant execution permissions to an extension. Notice the four fields from the trex file that are shown here. Technically, our trex file doesn’t need the “permissions” tag, because we’re not accessing any of the APIs that require that tag… yet.
  5. If all has gone right, your extension will load, and you’ll see a simple html page within the extension tile:

Summary

The simplest possible extension is just a plain html web page. The manifest file containing the URL to that page is how Tableau knows what to load. In the next article we will look at the Tableau extension debugging environment and the API that we will use.