Publishing .NET Core Web API (FIDO2 WebAuthn Passwordless) to Azure

In the previous post, we created a FIDO2 and WebAuthn .NET Core Web API as a separate back-end project, and a separate Next.js project as the front-end app, in order to log in with no password. Both projects ran on localhost. They can be found in this blog post:

In this blog post, we will prepare the .NET Core part of the project and publish it to Azure.

First of all, we have to store our MongoDB connection string in Azure Key Vault. We will need to create the following resources in Azure first, in the following sequence. The Key Vault will store the secret, and the rest are needed to publish the API and host it in Azure:

  • Resource Group
  • AppService Plan
  • Web App
  • KeyVault

Create Resource Group:

az group create --name yourchosennamerg --location "EastUs"

Create an App Service plan

az appservice plan create --name yourchosenserviceplanname --resource-group yourchosennamerg  --sku FREE

Create a Web App

az webapp create --resource-group yourchosennamerg --plan yourchosenserviceplanname --name webappname

Create an Azure Key Vault

az keyvault create --name "yourchosenkeyvaultname" --resource-group "yourchosennamerg" --location "EastUs"

Create a managed identity

az webapp identity assign --name "yourwebappname" --resource-group "yourchosennamerg"

From the above command, you will see output like the following returned:

{
  "principalId": "990e548b-a928-4f51-8c7e-9k75648h",
  "tenantId": "3a91548b-8c7e-a928-9ca5-3a91548b",
  "type": "SystemAssigned",
  "userAssignedIdentities": null
}

Grab the principalId and use it to give your web app permission to perform get and list operations on your key vault, by passing it to the az keyvault set-policy command:

Assign Managed Identity

az keyvault set-policy --name "yourkeyvaultname" --object-id "990e548b-a928-4f51-8c7e-9k75648h" --secret-permissions get list

Now, either from code or inside the Azure Portal, navigate to the newly created Key Vault and add the MongoDB connection string as a key-value pair.
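
For example, the same step can be done with the Azure CLI; the secret name below matches the one used in the code later in this post:

az keyvault secret set --vault-name "yourchosenkeyvaultname" --name "MongoDbConnString2" --value "YOURMONGODBCONNECTIONSTRING"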

Alternatively, we can use PowerShell to create all the needed resources above right from the command line. Copy all of the code below into PowerShell, and all the needed resources will be created:

#Connect to your Azure subscription if you are not already connected
Connect-AzAccount

# Create a resource group
New-AzResourceGroup -Name "rgpasswordless" -Location "EastUs"

# Create an App Service plan
New-AzAppServicePlan -Name "passwordlessserviceplan" -ResourceGroupName "rgpasswordless" -Location "EastUs" -Tier Free

# Create a web app
New-AzWebApp -ResourceGroupName "rgpasswordless" -AppServicePlan "passwordlessserviceplan" -Name "passwordlessapicorenext"

# Create a key vault
New-AzKeyVault -Name "PasswordlessKeyVault" -ResourceGroupName "rgpasswordless" -Location "EastUs"

# Assign a managed identity to the web app
Set-AzWebApp -AssignIdentity $true -Name "passwordlessapicorenext" -ResourceGroupName "rgpasswordless"

#Get PrincipalID
$principalId = (az webapp identity show --name "passwordlessapicorenext" --resource-group "rgpasswordless" | ConvertFrom-Json).principalId

#Set key vault access policy
Set-AzKeyVaultAccessPolicy -VaultName "PasswordlessKeyVault" -ObjectId $principalId -PermissionsToSecrets get,list

#Set the required secret in keyvault
$secretValue = ConvertTo-SecureString -String "YOURMONGODBCONNECTIONSTRING" -AsPlainText -Force
Set-AzKeyVaultSecret -VaultName "PasswordlessKeyVault" -Name "MongoDbConnString2" -SecretValue $secretValue

In the solution itself, in a terminal (bash or PowerShell), run the following commands to add the packages the API project needs in order to use Key Vault:

dotnet add package Azure.Identity
dotnet add package Azure.Security.KeyVault.Secrets

Then, at the top of RegisterController, add the following using statements:

using Azure.Identity;
using Azure.Security.KeyVault.Secrets;
using Azure.Core;

After this, we can use the following code block to replace the hard-coded MongoDB connection string with the Azure Key Vault secret, which now acts as a single, centrally managed value in Azure. We could also write a helper method or database access layer in our API and add this code once. For the purpose of this demo, I put all the code in a single file.

SecretClientOptions options = new SecretClientOptions()
{
    Retry =
    {
        Delay = TimeSpan.FromSeconds(2),
        MaxDelay = TimeSpan.FromSeconds(16),
        MaxRetries = 5,
        Mode = RetryMode.Exponential
    }
};
var client = new SecretClient(new Uri("https://PasswordlessKeyVault.vault.azure.net/"), new DefaultAzureCredential(), options);

KeyVaultSecret secret = client.GetSecret("MongoDbConnString2");

string secretValue = secret.Value;
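
From here, the secret value can be plugged straight into the existing MongoDB setup. A minimal sketch, assuming the MongoDB.Driver package is already referenced; the database name is a hypothetical placeholder:

using MongoDB.Driver;

// secretValue comes from the Key Vault call above
var mongoClient = new MongoClient(secretValue);
// "passwordlessdb" is a hypothetical database name for illustration
var database = mongoClient.GetDatabase("passwordlessdb");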

The next step is to switch to our terminal inside VSCode and publish our API locally:

dotnet publish -c Release -o ./publish

This will create a publish directory inside our API project. Then, right-click on the publish folder and choose Deploy to Web App.

In the Output tab of the bottom panel in VSCode, you will see these lines at the end of the deployment:

9:44:08 PM passwordlessapicorenext: Copying file: 'publish\publish\web.config'
9:44:08 PM passwordlessapicorenext: Deleting app_offline.htm
9:44:08 PM passwordlessapicorenext: Finished successfully.
9:44:08 PM passwordlessapicorenext: Running post deployment command(s)...
9:44:08 PM passwordlessapicorenext: Triggering recycle (preview mode disabled).
9:44:09 PM passwordlessapicorenext: Deployment successful.
9:44:17 PM: Deployment to "passwordlessapicorenext" completed.

Then navigate to your Azure Portal, to your subscription of course, click on your Web App and copy the Default Domain value. In my case it is: passwordlessapicorenext.azurewebsites.net

Add https:// at the start, and add /swagger at the end of the URL, so it becomes: https://passwordlessapicorenext.azurewebsites.net/swagger

Press Enter and you should be able to see the Swagger page with the available API endpoints:

If you get a 500.30 In-Process Startup Failure error, open the console inside your Web App as below:

When the console opens, type:

dotnet dotnetcore-passwordless.dll

You can find the name of the DLL in the local publish directory from when we published the API locally:

If everything is OK, you should see a spinning circle that never stops. Most startup errors will appear here in the console, in great detail:

Another way to deploy this API would be to package everything into Azure Container Registry and add our image to Azure Container Instances, with the help of Docker.

Thank you.

Accessing and manipulating your raw DataTable data with LINQ

As the heading suggests, I want to discuss a little bit how to manipulate raw DataTable data that you get from some source. In many cases that source is a database, perhaps a SQL database, where you fill your DataTable with rows from one or more SQL tables.

In the end, when you fill your DataSet object or, for the sake of this example, your DataTable object, you can use that data in your business logic or front-end Razor logic. You can use it wherever you want.
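
As a reminder, filling the DataTable usually looks something like the sketch below; the connection string, query, and table name are assumptions for illustration:

using System.Data;
using System.Data.SqlClient;

DataTable dt = new DataTable();
using (var connection = new SqlConnection("YOURCONNECTIONSTRING"))
using (var adapter = new SqlDataAdapter("SELECT * FROM YOURTABLE", connection))
{
    // Fill opens and closes the connection for us
    adapter.Fill(dt);
}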

Let’s jump directly to code:

var DistinctGroupedColumns = dt.AsEnumerable()
    .GroupBy(x => x.Field<DateTime>("COLUMNONE").Date)
    .ToList();
foreach (var date in DistinctGroupedColumns)
{
    //For each distinct date, get the associated list of records
    var associatedListOfRecords = (from alofrec in dt.AsEnumerable()
                                   where alofrec.Field<DateTime>("COLUMNONE").Date == date.Key
                                   select alofrec).ToList();
    int i = 0;
    //The foreach below walks the associated records of the current group
    foreach (var timePart in associatedListOfRecords)
    {
        var COLUMNONE = timePart.Field<DateTime>("CDateTime").Date.ToString("dd/MM/yyyy");
        var COLUMNTWO = timePart.Field<DateTime>("CDateTime").ToString("HH:mm");
        var COLUMNTHREE = timePart.Field<string>("PhoneNumber");
        var COLUMNFOUR = timePart.Field<string>("AccountNumber");
        var COLUMNFIVE = timePart.Field<string>("HouseNumber");
        // Do something with the column values, like print an HTML grid in a Razor view, or send an SMTP email with those values.
        // i is an additional variable, just in case, to keep the record count for each group
        i = i + 1;
    }
}

In the first statement above, we get the distinct list of groups based on COLUMNONE. This tells us how many groups (sections) we have. As you can see, we already know the data type of our column; that has to be discovered manually, by physically going to the database or reading the back-end logic associated with your DataTable data.

In the second query, we again run a query over our DataTable, but filter it with the group's Key property. The Key is the value of the column on which we did the grouping before.

Then, in the inner foreach, we iterate the list and get the column values out of each record.

This code, as you can see in the comments, can run on the server side, or on the client side in a Razor view between @{ }.

The i is there to hold the count of total items in each group.

Creating a hyperlink [<a href="...">Click Here</a>]

There are many different kinds of hyperlinks in HTML. I call the first one static, and it looks like so:

<a href="https://www.domain.com">Click Here</a>

There is nothing special about this hyperlink. It is very basic, does what it is supposed to do, which is navigating to the address that you provided in href.

Next one, I call it semi-static, which navigates to exactly the page you want, like so:

<a href="https://www.domain.com/page.html">Click Here</a>

The next one I call a dynamic link, because it has the ability to navigate to the same page but with different options on that page. It depends on the programmer what kind of options he or she wants to show:

<a href="https://www.domain.com/page.html?param1=123&param2=456&param3=789">Click Here</a>

The final hyperlink that I wanted to mention is super dynamic, and it looks like this:

<a href="https://www.domain.com/api/345676/676545/876756799991/11232">Click Here</a>

The reason I called it super dynamic is that it uses RESTful architecture and HTTP programming with GET, POST and PUT methods. But that is beyond the scope of this blog post.

Now, let's say you already have a WEB API written in C# and ASP.NET MVC. It has its own controller, with GET, POST and PUT methods in place. Meanwhile, an old website, written in classic ASP, with .asp pages, needs to call our API to get a full-blown HTML page and replace the old page with a newly designed cshtml view, in one of two ways:

  1. Either by clicking a link on an old ASP page and opening the new cshtml view in a new tab or window
  2. Or by embedding the external cshtml view that our WEB API returns in an IFRAME or object in the old classic ASP page.

Now, for either of the two cases, we need an HREF. A link. A hyperlink. Besides, the classic ASP page has different numeric product IDs, in a drop-down list, that need to be passed to the WEB API to get the cshtml view, based on the user's selection.

So what do you think? Which kind of hyperlink will we need?

Obviously not static, and not semi-static. We could use dynamic, but since we have the C# WEB API in place, we need the super dynamic hyperlink.

Now, as you know, passing raw product IDs in query parameters is not a good idea, because someone very curious might try a different ID instead of the one in our hyperlink, and end up getting a product that does not belong to him or her.

So, if your parameters are numerical, like 34523 or 88890889, and not strings, you could do something like this from inside your classic ASP page:


<a target="myIframeInClassicASPPage" href="http://10.10.10.10:9988/api/values/<%=Server.URLEncode(Session("UserName"))%>/<%=678957 * CLng(Session("product_id"))%>">Click Here</a>

And in the C# WEB API controller, you will have a GET method that looks something like this:


public HttpResponseMessage Get(string u, string p)
{
    Uri referrer = HttpContext.Current.Request.UrlReferrer;
    dynamic myModel = new Models.MyModel();
    CirculationMonitor.Controllers.HomeController hc = new HomeController();
    string sd = DateTime.Now.ToShortDateString();
    string ed = DateTime.Now.ToShortDateString();
    var response = new HttpResponseMessage(HttpStatusCode.OK);
    if (referrer != null)
    {
        hc.FillMysModel(myModel, (Convert.ToInt32(p) / 678957));
        string myviewPath = HttpContext.Current.Server.MapPath(@"~/Views/Home/MyViewToReturn.cshtml");
        var template = System.IO.File.ReadAllText(myviewPath);
        string parsedView = RazorEngine.Razor.Parse(template, myModel);
        response.Content = new StringContent(parsedView);
        response.Content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("text/html");
    }
    else
    {
        response.Content = new StringContent("<html><body><h5>Not authorized. Please use the link on the ASP page to get your product</h5></body></html>");
        response.Content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("text/html");
    }
    return response;
}


If you noticed, 678957 is used on the classic ASP page to multiply the product ID. It is like a secret, or a salt in MD5, something similar. And the same number is used in the WEB API C# code to divide by, to get the original ID back.
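
As a tiny worked example of the scheme (the numbers are just for illustration):

long productId = 12345;
long obfuscated = productId * 678957; // 8381724165, what the ASP page puts in the link
long original = obfuscated / 678957;  // 12345 again, what the WEB API recovers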

So building a hyperlink turns out to be not so easy after all, however simple it looks.

Also, the first code block has an attribute like this: target="myIframeInClassicASPPage". It means the link targets another tag with the ID myIframeInClassicASPPage, which we decided to be an iFrame. So we want the external cshtml view to load in the iFrame, from the ASP.NET MVC WEB API C# project in IIS.

Now, just in case, because I have stumbled across such a thing: if your cshtml view does not load in classic ASP, or the iFrame displays weirdly, check your IIS HTTP headers and explicitly set the X-UA-Compatible property in the HTTP Headers section (you will have to define a new HTTP header for your website) to IE=edge.

This makes it cross-browser and renders properly in IE. I am writing this because I am assuming you are using scripts, Bootstrap, Font Awesome, and all the other libraries that need to load with the cshtml page as well. So this helps, and works brilliantly!

The end result is that we have built a super dynamic hyperlink that renders ASP.NET MVC views in a classic ASP site page's iFrame, with all of the views' scripts, references, etc., using the WEB API.

You never know when you will need this!

Thank you.

How to securely access a remote machine or server from your ASP.NET MVC website.

We will party later on. But for now, we need to get a small word out. It is nothing special until you are confronted with it in a production business environment, and not on your couch! So all I have is a user name and password, which is splendid! I have the user name and password for the remote machine, literally. The remote machine is Windows Server 2008 R2. Now, with all the threats flying around remote access, with password managers (I wrote one for my own indulgence here https://thoughtsonprogramming.wordpress.com/2018/08/26/password-manager/ )

and with the Facebook "impersonation" hacking, which happened just days ago, I had a challenge: to upload a file to a remote server, very far away, having only a user name and password.

Relying on experience, and not listening to anyone around, I took the SHARE path. Meaning I will access a shared folder on that remote computer, a share that I will have to ask the sysadmin to enable, or create myself on the parent folder.

I remind you, this is not a childish game at my home. It is a production environment. Things fly by fast.

First, on IIS, where my ASP.NET MVC app resides, we need to set the application pool identity to the user name and password of the remote machine. That way, our ASP.NET app will be allowed to access the remote server, and IIS itself will let you play around and access the network classes and namespaces.
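
For reference, a minimal sketch of that step with IIS's appcmd tool; the app pool name and credentials are placeholders:

%windir%\system32\inetsrv\appcmd.exe set apppool "YourAppPool" /processModel.identityType:SpecificUser /processModel.userName:"remoteuser" /processModel.password:"remotepassword"

The same setting is also available in IIS Manager, under the application pool's Advanced Settings, Identity.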

So, after we have done that, we need Windows Share access. It works much like OAuth API access, where you first obtain a token, then use that token to do the authorized things you want to do. Here, instead, you get access to the share, and IIS handles the rest, meaning it lets you copy and transfer files.

Besides, by using the "Share" way to transfer files, it is convenient for the sysadmin on the other side to see all current and live sessions that are open against this particular share. So security-wise, very convenient.

And we, on the other hand, open the share connection, and then close it once done.

Now, I am using C#, my best language so far.

So, from C#, in ASP.NET MVC, we need to first obtain access rights to the remote folder, then copy the file.

I found a network share class for my dear C# language on the internet. I use it to access the remote share. Here it is:

 


using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.InteropServices;
using System.Web;

namespace YourApp.Helpers
{
    public static class NetworkShare
    {
        /// <summary>
        /// Connects to the remote share
        /// </summary>
        /// <returns>Null if successful, otherwise error message.</returns>
        public static string ConnectToShare(string uri, string username, string password)
        {
            //Create netresource and point it at the share
            NETRESOURCE nr = new NETRESOURCE();
            nr.dwType = RESOURCETYPE_DISK;
            nr.lpRemoteName = uri;
            //Create the share
            int ret = WNetUseConnection(IntPtr.Zero, nr, password, username, 0, null, null, null);
            //Check for errors
            if (ret == NO_ERROR)
                return null;
            else
                return GetError(ret);
        }

        /// <summary>
        /// Remove the share from cache.
        /// </summary>
        /// <returns>Null if successful, otherwise error message.</returns>
        public static string DisconnectFromShare(string uri, bool force)
        {
            //remove the share
            int ret = WNetCancelConnection(uri, force);
            //Check for errors
            if (ret == NO_ERROR)
                return null;
            else
                return GetError(ret);
        }

        #region P/Invoke Stuff
        [DllImport("Mpr.dll")]
        private static extern int WNetUseConnection(
            IntPtr hwndOwner,
            NETRESOURCE lpNetResource,
            string lpPassword,
            string lpUserID,
            int dwFlags,
            string lpAccessName,
            string lpBufferSize,
            string lpResult
        );

        [DllImport("Mpr.dll")]
        private static extern int WNetCancelConnection(
            string lpName,
            bool fForce
        );

        [StructLayout(LayoutKind.Sequential)]
        private class NETRESOURCE
        {
            public int dwScope = 0;
            public int dwType = 0;
            public int dwDisplayType = 0;
            public int dwUsage = 0;
            public string lpLocalName = "";
            public string lpRemoteName = "";
            public string lpComment = "";
            public string lpProvider = "";
        }

        #region Consts
        const int RESOURCETYPE_DISK = 0x00000001;
        const int CONNECT_UPDATE_PROFILE = 0x00000001;
        #endregion

        #region Errors
        const int NO_ERROR = 0;
        const int ERROR_ACCESS_DENIED = 5;
        const int ERROR_ALREADY_ASSIGNED = 85;
        const int ERROR_BAD_DEVICE = 1200;
        const int ERROR_BAD_NET_NAME = 67;
        const int ERROR_BAD_PROVIDER = 1204;
        const int ERROR_CANCELLED = 1223;
        const int ERROR_EXTENDED_ERROR = 1208;
        const int ERROR_INVALID_ADDRESS = 487;
        const int ERROR_INVALID_PARAMETER = 87;
        const int ERROR_INVALID_PASSWORD = 1216;
        const int ERROR_MORE_DATA = 234;
        const int ERROR_NO_MORE_ITEMS = 259;
        const int ERROR_NO_NET_OR_BAD_PATH = 1203;
        const int ERROR_NO_NETWORK = 1222;
        const int ERROR_SESSION_CREDENTIAL_CONFLICT = 1219;
        const int ERROR_BAD_PROFILE = 1206;
        const int ERROR_CANNOT_OPEN_PROFILE = 1205;
        const int ERROR_DEVICE_IN_USE = 2404;
        const int ERROR_NOT_CONNECTED = 2250;
        const int ERROR_OPEN_FILES = 2401;

        private struct ErrorClass
        {
            public int num;
            public string message;
            public ErrorClass(int num, string message)
            {
                this.num = num;
                this.message = message;
            }
        }

        private static ErrorClass[] ERROR_LIST = new ErrorClass[] {
            new ErrorClass(ERROR_ACCESS_DENIED, "Error: Access Denied"),
            new ErrorClass(ERROR_ALREADY_ASSIGNED, "Error: Already Assigned"),
            new ErrorClass(ERROR_BAD_DEVICE, "Error: Bad Device"),
            new ErrorClass(ERROR_BAD_NET_NAME, "Error: Bad Net Name"),
            new ErrorClass(ERROR_BAD_PROVIDER, "Error: Bad Provider"),
            new ErrorClass(ERROR_CANCELLED, "Error: Cancelled"),
            new ErrorClass(ERROR_EXTENDED_ERROR, "Error: Extended Error"),
            new ErrorClass(ERROR_INVALID_ADDRESS, "Error: Invalid Address"),
            new ErrorClass(ERROR_INVALID_PARAMETER, "Error: Invalid Parameter"),
            new ErrorClass(ERROR_INVALID_PASSWORD, "Error: Invalid Password"),
            new ErrorClass(ERROR_MORE_DATA, "Error: More Data"),
            new ErrorClass(ERROR_NO_MORE_ITEMS, "Error: No More Items"),
            new ErrorClass(ERROR_NO_NET_OR_BAD_PATH, "Error: No Net Or Bad Path"),
            new ErrorClass(ERROR_NO_NETWORK, "Error: No Network"),
            new ErrorClass(ERROR_BAD_PROFILE, "Error: Bad Profile"),
            new ErrorClass(ERROR_CANNOT_OPEN_PROFILE, "Error: Cannot Open Profile"),
            new ErrorClass(ERROR_DEVICE_IN_USE, "Error: Device In Use"),
            new ErrorClass(ERROR_NOT_CONNECTED, "Error: Not Connected"),
            new ErrorClass(ERROR_OPEN_FILES, "Error: Open Files"),
            new ErrorClass(ERROR_SESSION_CREDENTIAL_CONFLICT, "Error: Credential Conflict"),
        };

        private static string GetError(int errNum)
        {
            foreach (ErrorClass er in ERROR_LIST)
            {
                if (er.num == errNum) return er.message;
            }
            return "Error: Unknown, " + errNum;
        }
        #endregion
        #endregion
    }
}


As you can see from the code above, we use DllImport to get access to a native Windows DLL, Mpr.dll.

We could have taken the WMI route, with all of its convenient operations that allow you to control the remote machine. But we do not want to control it. We want to copy and delete a file when we need to.

So the code above only opens and closes the connection to our remote machine. It does not copy files. Later we will simply use the standard System.IO.File.Copy(source, destination) method from C# to do the copying, once the share connection is open.

The idea is similar to OAuth when obtaining a token and then doing something with it, only OAuth is stateless, meaning no session is kept alive, while our method here is real-time and synchronous: the connection stays open until closed.

So, after we add this to our project, in our ASP.NET MVC code we call the methods as shown below. Of course it depends on your particular needs, and the needs of the app you are building, but for me, it worked like a charm:


// Suppose the server where you want to copy your file to is: 10.10.10.10
string unme = "administrator";
string password = "Y0Ou)rS0Ec)R0Ec)Y";
//Connect with the new credentials
NetworkShare.ConnectToShare(@"\\10.10.10.10\YourSharedFolder", unme, password);
//Check for the file's existence first
if (System.IO.File.Exists(@"\\10.10.10.10\YourSharedFolder\Subfolder\" + fileName))
{
    //Set attributes to normal, because by default the files are read-only, and we need to delete the old file first
    System.IO.File.SetAttributes(@"\\10.10.10.10\YourSharedFolder\Subfolder\" + fileName, System.IO.FileAttributes.Normal);
    //If it exists, delete it
    System.IO.File.Delete(@"\\10.10.10.10\YourSharedFolder\Subfolder\" + fileName);
}
//Copy the new file
System.IO.File.Copy(currentEmgMessagePath, @"\\10.10.10.10\YourSharedFolder\Subfolder\" + fileName);
//Close the network share
NetworkShare.DisconnectFromShare(@"\\10.10.10.10\YourSharedFolder", true);


The bottom line here is that I am not using IMPERSONATION!

First, we authenticate ourselves, the thing that I like the most, almost like in the OAuth protocol, only better! :)

Then, once o"Authenticated" on our "Share", we do whatever we want.

With so much money spent on security these days, I am looking forward to hearing some responses on whether accessing remote shares this way is safe or not, real or not, practical or not.

If you have any other clue on pulling this stunt a different way, please comment.

Thank you.

 

 

SignalR: selecting which online user, logged in or not, to send a message to. A practical how-to guide.

Source code is at the end of this article, on Github.

One of the reasons I like SignalR is its ability to identify users simply by them opening your web page. I would also highly recommend it for push notifications.

Unlike web services, it allows two different objects, residing on two different machines, and in two different applications, to talk to each other.

There are many complex applications of SignalR, like scaling it on your own local server, self-hosting the HUB, or using it as a background worker behind the scenes.

Sometimes, SignalR looks to me like an API that does the mapping between your client-side and server-side functions. But that is just me, no?

On the other hand, if NuGet shows close to five million downloads of SignalR, as shown in the picture below (as of June 28, 2018), then one must consider this technology when the need arises to work with real-time data. There is no need to reinvent the wheel, in my opinion.

But let's start with simple things; once we can do those, we will be able to jump to the next step.

What we are going to do is exactly the following:

- Create a web page for everybody to be able to visit. Also, the user will have the option to log in with Google, in order to identify him or her. This option is optional in SignalR; there is no obligation to identify oneself at all, because SignalR knows you without needing to know your name.

What I mean, in simple words, is that the Admin, in our example, can send a message to an anonymous user as well as to an authenticated user with his or her name. For SignalR it does not matter, but for human logic it matters. Consider a case where a small number of people are on the network, logged in to the site, and the Admin wants to reach a particular person with a message; then of course he needs to know the other person's name, that's all!

To do this, and to give the user an option to identify himself or herself, we are going to use the Google sign-in button:

(Image: the Google sign-in button)

When the user clicks this button, he is authenticated with Google, and we get the following information from him, as described by Google itself:


<html lang="en">
<head>
    <meta name="google-signin-scope" content="profile email">
    <meta name="google-signin-client_id" content="YOUR_CLIENT_ID.apps.googleusercontent.com">
    <script src="https://apis.google.com/js/platform.js" async defer></script>
</head>
<body>
    <div class="g-signin2" data-onsuccess="onSignIn" data-theme="dark"></div>
    <script>
        function onSignIn(googleUser) {
            // Useful data for your client-side scripts:
            var profile = googleUser.getBasicProfile();
            console.log("ID: " + profile.getId()); // Don't send this directly to your server!
            console.log('Full Name: ' + profile.getName());
            console.log('Given Name: ' + profile.getGivenName());
            console.log('Family Name: ' + profile.getFamilyName());
            console.log("Image URL: " + profile.getImageUrl());
            console.log("Email: " + profile.getEmail());
            // The ID token you need to pass to your backend:
            var id_token = googleUser.getAuthResponse().id_token;
            console.log("ID Token: " + id_token);
        };
    </script>
</body>
</html>

As you can see from the code, which we will use, after a user signs in with Google, provided of course that he has a Google account, the admin, or rather the web page, gets info such as Full Name, Given Name, Family Name, and Email. All this info is provided by Google. We can use it, for the convenience and trustworthiness of both parties, the website and the users, to store their information in our SQL database table below, and do the MAPPING.

Please replace this client ID with your own when doing Google sign-in.

The mapping will be done between the user's name above and the SignalR ConnectionID! That way, when SignalR sends a message to a particular ConnectionID, we can refer in the message to the actual name the user provided.

SignalR gives you freedom in this situation. It is nice.

An important note to remember: when testing on Firefox, everything works fine except that Firefox takes longer to close the connection once the user closes the browser or refreshes the page, so the OnDisconnected SignalR method takes a little while to be invoked. On Chrome it is fast.

Let's start by creating the SQL table that will hold the connection ID for each new connection to our HUB, plus the user's information if he or she wants to identify himself or herself, with no obligation whatsoever. First create the database MySignalRUsers, then create the following table:


USE [MySignalRUsers]
GO
/****** Object: Table [dbo].[UsersTemp] ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[UsersTemp](
[ID] [bigint] IDENTITY(1,1) NOT NULL,
[UniqueUserID] [nvarchar](max) NOT NULL,
[FullName] [nvarchar](max) NULL,
[GivenName] [nvarchar](max) NULL,
[FamilyName] [nvarchar](max) NULL,
[Email] [nvarchar](max) NULL,
[GoogleToken] [nvarchar](max) NULL,
CONSTRAINT [PK_UsersTemp] PRIMARY KEY CLUSTERED
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO

Now for the application logic. Let's start with the SignalR server, which will accept all connections, clients' and Admin's, on HTTP port 8080 (pick any unused port, it does not matter). We will make a console app, as described in the SignalR examples on their site, but tweak it a bit. We will add two new methods:

public override Task OnConnected()

and

public override Task OnDisconnected

In OnConnected we get the connection ID of the new user and insert it into our database; later we update it with the mapping information for each user if he or she decides to log in and authenticate. In OnDisconnected we remove the whole row for a user that closed the browser and ended the session, or disconnected.

The code below is the Program.cs file, with its starting Main method for console apps:


using System;
using Microsoft.AspNet.SignalR;
using Microsoft.Owin.Hosting;
using Owin;
using Microsoft.Owin.Cors;
using Microsoft.Owin;
using System.Threading.Tasks;
using System.Collections.Generic;
using System.Linq;
using System.Data;
using Microsoft.AspNet.SignalR.Hubs;
using Microsoft.AspNet.SignalR.Transports;

[assembly: OwinStartup(typeof(SignalRSelfHost.Startup))]
namespace SignalRSelfHost
{
    class Program
    {
        static void Main(string[] args)
        {
            string url = "http://localhost:8080";
            using (WebApp.Start(url))
            {
                Console.WriteLine("Server running on {0}", url);
                Console.ReadLine();
            }
        }
    }

    class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            app.UseCors(CorsOptions.AllowAll);
            app.MapSignalR();
        }
    }

    public class MyHub : Hub
    {
        // database
        SRServer.EntityFramework.MySignalRUsersEntities db = new SRServer.EntityFramework.MySignalRUsersEntities();

        public override Task OnConnected()
        {
            //add to the database
            string ClientconnecetionID = Context.ConnectionId;
            Console.WriteLine("ConnectionId: " + ClientconnecetionID);
            using (var db = new SRServer.EntityFramework.MySignalRUsersEntities())
            {
                SRServer.EntityFramework.UsersTemp usersTemp = new SRServer.EntityFramework.UsersTemp
                {
                    UniqueUserID = ClientconnecetionID
                };
                db.UsersTemp.Add(usersTemp);
                db.SaveChanges();
            }
            return base.OnConnected();
        }

        public override Task OnDisconnected(bool stopCalled)
        {
            string ClientconnecetionID = Context.ConnectionId;
            using (var db = new SRServer.EntityFramework.MySignalRUsersEntities())
            {
                var rToDel = db.UsersTemp.SingleOrDefault(x => x.UniqueUserID == ClientconnecetionID);
                if (rToDel != null)
                {
                    db.UsersTemp.Remove(rToDel);
                    db.SaveChanges();
                }
            }
            return base.OnDisconnected(stopCalled);
        }

        public void Send(string name, string message)
        {
            string ClientconnecetionID = this.Context.ConnectionId;
            LoggedInUsers();
            Clients.Client(name).addMessage(name, message);
        }

        public void LoggedInUsers()
        {
            string ClientconnecetionID = Context.ConnectionId;
            Console.WriteLine(ClientconnecetionID);
        }
    }
}
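
As a side note, the Send method above receives the connection ID directly from the Admin page. A minimal sketch of a variant that instead looks up a logged-in user by the name Google gave us (assuming the UsersTemp table and entity model above) could look like this:

public void SendToUserByName(string fullName, string message)
{
    using (var db = new SRServer.EntityFramework.MySignalRUsersEntities())
    {
        // Find the mapped connection ID of the first user with that name
        var user = db.UsersTemp.FirstOrDefault(x => x.FullName == fullName);
        if (user != null)
        {
            Clients.Client(user.UniqueUserID).addMessage(fullName, message);
        }
    }
}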

Now we can build this server in Release mode, take the executable and put it on a remote machine or server. It does not need to be localhost:8080. But if we do that, we will need to change the endpoint to match the remote server's endpoint.

Now for the client part. We will create a standard ASP.NET web application with two pages: Default.aspx, which will be for all users of our website, and Admin.aspx, which will be only for the administrator who will ping or send messages selectively to the current online users.

The code for Default.aspx and Default.aspx.cs is below:


<%@ Page Title="Home Page" Language="C#" MasterPageFile="~/Site.Master" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="WebApplication2._Default" %>
<asp:Content ID="BodyContent" ContentPlaceHolderID="MainContent" runat="server">
    <meta name="google-signin-scope" content="profile email">
    <meta name="google-signin-client_id" content="583675786996-cvrblhj6p7n4rc7t4pjb7t5jragdnh6g.apps.googleusercontent.com">
    <script src="https://apis.google.com/js/platform.js" async defer></script>
    <!--Reference the SignalR library. -->
    <script src="Scripts/jquery.signalR-2.2.3.min.js"></script>
    <!--Reference the autogenerated SignalR hub script. -->
    <script src="http://localhost:8080/signalr/hubs"></script>
    <!--Add script to update the page and send messages.-->
    <script type="text/javascript">
        $(function () {
            //Set the hubs URL for the connection
            $.connection.hub.url = "http://localhost:8080/signalr";
            // Declare a proxy to reference the hub.
            var chat = $.connection.myHub;
            // Create a function that the hub can call to broadcast messages.
            chat.client.addMessage = function (name, message) {
                callNotify(message);
            };
            // Start the connection.
            $.connection.hub.start().done(function () {
                $('#connectionidfield').val($.connection.hub.id);
                //console.log('Now connected, connection ID=' + $.connection.hub.id);
            });
        });
        function callNotify(m) {
            $.notify(m, { type: "success", blur: 0.2, delay: 0 });
        }
        function onSignIn(googleUser) {
            var profile = googleUser.getBasicProfile();
            console.log("ID: " + profile.getId()); // Don't send this directly to your server!
            console.log('Full Name: ' + profile.getName());
            console.log('Given Name: ' + profile.getGivenName());
            console.log('Family Name: ' + profile.getFamilyName());
            console.log("Image URL: " + profile.getImageUrl());
            console.log("Email: " + profile.getEmail());
            // The ID token you need to pass to your backend:
            var id_token = googleUser.getAuthResponse().id_token;
            console.log("ID Token: " + id_token);
            var ID = profile.getId();
            var FullName = profile.getName();
            var GivenName = profile.getGivenName();
            var FamilyName = profile.getFamilyName();
            var Email = profile.getEmail();
            var GoogleTokenID = id_token;
            var UserConnectionID = $('#connectionidfield').val();
            //Ajax call to the server-side update
            $.ajax({
                type: "POST",
                contentType: "application/json; charset=utf-8",
                url: '<%= ResolveUrl("Default.aspx/BindDatatable") %>',
                data: '{uniqueUserID:"' + UserConnectionID + '",fullName:"' + FullName + '",givenName:"' + GivenName + '",familyName:"' + FamilyName + '",email:"' + Email + '",googleToken:"' + GoogleTokenID + '"}',
                dataType: "json",
                success: function (data) {
                    //alert(data.d);
                },
                failure: function (r) {
                    alert(r.d);
                },
                error: function (response) {
                    alert(response.d);
                }
            });
        };
    </script>
    <header class="bg-primary text-white">
        <div class="container text-left">
            <h1>SignalR example</h1>
            <p class="lead">
                If you want, you can log in with Google to proceed<br />
                <sub><a href="https://thoughtsonprogramming.wordpress.com/">thoughtsonprogramming.wordpress.com</a></sub>
                <div class="g-signin2" data-onsuccess="onSignIn" data-theme="dark"></div>
            </p>
            <input type="hidden" id="connectionidfield" />
        </div>
    </header>
</asp:Content>


And the server-side code for Default.aspx, which is Default.aspx.cs:


using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Services;
using System.Web.UI;
using System.Web.UI.WebControls;

namespace WebApplication2
{
    public partial class _Default : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
        }

        [WebMethod]
        public static string BindDatatable(string uniqueUserID, string fullName, string givenName, string familyName, string email, string googleToken)
        {
            string ConnectionIDUpdated = "false";
            using (var db = new WebApplication2.MyEntityFramework.MySignalRUsersEntities())
            {
                var userDataRecord = (from c in db.UsersTemp
                                      where (c.UniqueUserID == uniqueUserID)
                                      select c).SingleOrDefault();
                if (userDataRecord != null)
                {
                    userDataRecord.FullName = fullName;
                    userDataRecord.GivenName = givenName;
                    userDataRecord.FamilyName = familyName;
                    userDataRecord.Email = email;
                    userDataRecord.GoogleToken = googleToken;
                    db.SaveChanges();
                    ConnectionIDUpdated = "true";
                }
            }
            return ConnectionIDUpdated;
        }
    }
}


In the code above, we mark BindDatatable as a WebMethod because we call it from the front-end page with an Ajax POST, in order to add the user mapping for each connection ID and bind logically which connection ID belongs to which user:

url: '<%= ResolveUrl("Default.aspx/BindDatatable") %>',

Now the Admin page, Admin.aspx, which will be only for administrators:


<%@ Page Language="C#" MasterPageFile="~/Site.Master" AutoEventWireup="true" CodeBehind="Admin.aspx.cs" Inherits="WebApplication2.Admin" %>
<asp:Content ID="BodyContent" ContentPlaceHolderID="MainContent" runat="server">
    <!--Reference the SignalR library. -->
    <script src="Scripts/jquery.signalR-2.2.3.min.js"></script>
    <!--Reference the autogenerated SignalR hub script. -->
    <script src="http://localhost:8080/signalr/hubs"></script>
    <script type="text/javascript">
        $(function () {
            //Set the hubs URL for the connection
            $.connection.hub.url = "http://localhost:8080/signalr";
            // Declare a proxy to reference the hub.
            var chat = $.connection.myHub;
            // Create a function that the hub can call to broadcast messages.
            chat.client.addMessage = function (name, message) {};
            //Start the connection
            $.connection.hub.start().done(function () {
                $('#sendmessage').click(function () {
                    chat.server.send($(".allLoggedInUsers option:selected").text().split('|')[0].toString(), $('#message').val());
                });
            });
        });
    </script>
    <div class="row">
        <h1>Admin Page<sup><a href="https://thoughtsonprogramming.wordpress.com/">thoughtsonprogramming.wordpress.com</a></sup></h1>
    </div>
    <div class="row">
        <div class="col-md-6">
            <select runat="server" multiple id="allLoggedInUsers" class="allLoggedInUsers" style="width:100%;max-width:100%; border: none; background-color: lightgray">
            </select>
        </div>
        <div class="col-md-6">
            <input type="text" id="message" style="width: 100%; height: 67px" />
            <input type="button" id="sendmessage" value="Send" style="width: 250px; height: 67px" />
        </div>
    </div>
</asp:Content>


And of course the code-behind file for Admin.aspx:


using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Services;
using System.Web.UI;
using System.Web.UI.WebControls;

namespace WebApplication2
{
    public partial class Admin : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            WebApplication2.Models.UsersData uData = new WebApplication2.Models.UsersData();
            using (var db = new WebApplication2.MyEntityFramework.MySignalRUsersEntities())
            {
                var userIDsList = (from c in db.UsersTemp
                                   select new Models.UsersData() { userID = c.UniqueUserID, FullName = c.FullName, GivenName = c.GivenName, FamilyName = c.FamilyName, Email = c.Email, GoogleToken = c.GoogleToken }).ToList();
                foreach (var id in userIDsList)
                {
                    if (id.FullName != null)
                    {
                        uData.userIDS.Add(id.userID.ToString() + "|" + id.FullName.ToString());
                    }
                    else
                    {
                        uData.userIDS.Add(id.userID.ToString());
                    }
                }
            }
            allLoggedInUsers.Items.Clear();
            for (int i = 0; i < uData.userIDS.Count; i++)
            {
                allLoggedInUsers.Items.Add(new ListItem(uData.userIDS[i].ToString()));
            }
        }
    }
}


In the code above, we simply load all currently connected users and bind them to our multi-select drop-down list.

To test this, start the SignalR server, then launch your website; the default page will look as below:

(Screenshot: the client Default.aspx page)

The server will look like the picture below, with several connections having already happened or happening:

(Screenshot: the SignalR server console)

Now open the Admin page, or click the "Admin Page" link, in a new tab or browser window:

(Screenshot: the Admin page)

The first multi-select drop-down holds the connection IDs, along with their names in case someone signed in with the "Sign In With Google" button. If you refresh the page, it will update all the connection IDs with their details respectively. For example, if you take a careful look at the picture above, the second ID has no name. This is because that ID actually belongs to the Admin page, since the Admin page also initiated a SignalR connection.

You can play with it depending on your needs, of course.

Now, while on this page, select any ID except the Admin's ID, write some text in the text box, and hit Send. Then switch to the client Default.aspx window, and you should see the picture below, with a notification popping up on the client side with the text that we sent:

(Screenshot: the client-side notification)

In summary, we have implemented a solution where the administrator of a website can ping or send messages to currently live users, logged in or not. The logging in is performed with the Google Sign-In Button; to implement it, you first need a Google account, then add a new project in the Developers Console and get the client ID and secret to use, as shown in the code above.

The code for the server part of this project is on GitHub:

https://github.com/thoughtsonprogramming/SRServer

Client side part is also on GitHub:

https://github.com/thoughtsonprogramming/WebApplication2

Moving from Classic ASP to ASP.NET Mobile with Credit Card Payment API.

Sometimes a client asks to update his current web solution, with a credit card payment page, to a mobile and web solution.

The functionality should stay intact, but the user interface must change to accommodate mobile device viewports.

And not only that. On the old classic ASP website, the client has a credit card payment page linked directly to the payment provider's API, embedded through an iFrame via its src property.

I prepared an example sketch to illustrate, to some extent, the old ASP page's architecture:

As you can see here, the page looks a little similar to the old-style "Secure Cross-Domain Communication in the Browser" iFrame URL technique, except that when the user presses the submit button in the pop-up iFrame box, we call the Submit Page function, which triggers an AJAX POST request; on success, we set the content of the whole 'htmlContentDiv' with the TOKEN plus the client details.

The pop-up iFrame was just there to generate the token. Do not get confused here: we are not using OAuth, nor the iFrame URL technique. What we are doing is simply generating a secret token (from the provider's API) and returning it to us via Ajax and JavaScript, after first generating a secure link with the correct POST parameters. That's all.

Now, when moving this slightly dated functionality to new ASP.NET pages, we can do the following:

As you can see, on page load we also generate our secure secret link with the correct parameters, and we assign it directly to the src attribute of our visible iFrame, without needing to click a separate button to spawn the pop-up iFrame. The src of the iFrame is generated directly when we load the page.
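
A minimal sketch of that page-load step is below; the provider URL, parameter names, and signing scheme are all assumptions for illustration, since the real ones are dictated by the payment provider:

using System;
using System.Security.Cryptography;
using System.Text;

public partial class Payment : System.Web.UI.Page
{
    // Assumes an <iframe id="paymentFrame" runat="server"> on the .aspx page
    protected void Page_Load(object sender, EventArgs e)
    {
        string merchantId = "YOURMERCHANTID";
        string amount = "100.00";
        string signature = Sign(merchantId + amount, "YOURSHAREDSECRET");
        paymentFrame.Attributes["src"] =
            "https://payments.example.com/tokenize?merchant=" + merchantId +
            "&amount=" + amount + "&sig=" + signature;
    }

    // Hypothetical HMAC-SHA256 signing, hex-encoded
    private static string Sign(string data, string secret)
    {
        using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(secret)))
        {
            byte[] hash = hmac.ComputeHash(Encoding.UTF8.GetBytes(data));
            return BitConverter.ToString(hash).Replace("-", "").ToLowerInvariant();
        }
    }
}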

Then, when we hit the actual submit button, we call the JavaScript function SubmitMyForm, which takes only the credit card number, sends it via a JavaScript Ajax POST, receives a response with the generated token, then creates a new form programmatically from JavaScript, and finally submits our on-the-fly generated form with the token.

As you can see, all this functionality was needed just to generate a token, because the client does not want to submit a form with raw credit card numbers; compliance works that way.

We first generate the token via AJAX, and then also submit the form via AJAX, with the token and not the bare credit card number.

Read all your emails anywhere with C#

The good thing about technology is that it allows us to transcend, by our own will and choice, the need to cut more and more trees to produce everyday paper.

Going eco-friendly is always a good and healthy choice: instead of sending regular mail with paper inside it, you send e-mail. And instead of printing your email out on a sheet of paper, you can read it on your preferred tablet device or phone.

So what if you want to read all your emails from Gmail, for example, without using a browser at all, but instead with a special kind of program that you can embed in a device of your choice and read all your emails electronically, like telling Amazon's Alexa: "Alexa, read all my Gmail emails!"

Sometimes you need to read all, some, or just the latest received emails from your email provider, and store only the needed parts and data within them in a database as well.

There are two possible scenarios: either you will be reading structured email data, or unstructured email data. In most cases it is the latter (unstructured).

The question arises: why would one want to read this data? The answer is: to store part of it in our database for clients to use.

Suppose a robot sends automated emails with configuration and updated information about all printers on different networks, like their start time, shut down time, ink levels, number of papers printed, and a lot more information.

Our program should grab new emails, read them, take the necessary data from them, and store that data in our database for other people to see through a front-end web-based interface.

Let's say that the printer was programmed to send the following information, without spaces or new lines at the beginning:

PRINTER ID:2233QWERTY
PRINTER NAME: SPACER
PRINTERTYPE:SUPERTYPE
TIMEOFDAY:SUNSHINE
PAPERSPRINTED:1000000
DOUBLESIDED:NO
USERS:ALL
with the following subject line:

MY PRINTER STATS

I will use an open source library to read the email, and I am going to use Gmail as the test server, particularly the Gmail POP3 server.

We are also going to authenticate ourselves in a slightly different way. Gmail allows authentication via OAuth, but since I have blogged a lot about OAuth, I will use another way.

We will prepend the word "recent" to our user name, like so: "recent:sendtestsms@gmail.com". The recent prefix in Gmail lets us get ALL emails from the past 30 days, downloaded or not, because in Gmail, once you download your messages, you cannot download them again.

And again, we could have created an app, added the Gmail API and used OAuth, but this way is different. It will also teach me and you how to use a free, open source library for POP and IMAP called OpenPop.NET.

First, let's create a database with a single table in our SQL Server:


USE [master]
GO
/****** Object: Database [PRINTERSTATSFROMEMAIL] Script Date: 2018-06-18 20:41:14 ******/
CREATE DATABASE [PRINTERSTATSFROMEMAIL]
CONTAINMENT = NONE
ON PRIMARY
( NAME = N'PRINTERSTATSFROMEMAIL', FILENAME = N'C:\Users\JINAN\PRINTERSTATSFROMEMAIL.mdf' , SIZE = 8192KB , MAXSIZE = UNLIMITED, FILEGROWTH = 65536KB )
LOG ON
( NAME = N'PRINTERSTATSFROMEMAIL_log', FILENAME = N'C:\Users\JINAN\PRINTERSTATSFROMEMAIL_log.ldf' , SIZE = 8192KB , MAXSIZE = 2048GB , FILEGROWTH = 65536KB )
GO
IF (1 = FULLTEXTSERVICEPROPERTY('IsFullTextInstalled'))
begin
EXEC [PRINTERSTATSFROMEMAIL].[dbo].[sp_fulltext_database] @action = 'enable'
end
GO
ALTER DATABASE [PRINTERSTATSFROMEMAIL] SET ANSI_NULL_DEFAULT OFF
GO
ALTER DATABASE [PRINTERSTATSFROMEMAIL] SET ANSI_NULLS OFF
GO
ALTER DATABASE [PRINTERSTATSFROMEMAIL] SET ANSI_PADDING OFF
GO
ALTER DATABASE [PRINTERSTATSFROMEMAIL] SET ANSI_WARNINGS OFF
GO
ALTER DATABASE [PRINTERSTATSFROMEMAIL] SET ARITHABORT OFF
GO
ALTER DATABASE [PRINTERSTATSFROMEMAIL] SET AUTO_CLOSE OFF
GO
ALTER DATABASE [PRINTERSTATSFROMEMAIL] SET AUTO_SHRINK OFF
GO
ALTER DATABASE [PRINTERSTATSFROMEMAIL] SET AUTO_UPDATE_STATISTICS ON
GO
ALTER DATABASE [PRINTERSTATSFROMEMAIL] SET CURSOR_CLOSE_ON_COMMIT OFF
GO
ALTER DATABASE [PRINTERSTATSFROMEMAIL] SET CURSOR_DEFAULT GLOBAL
GO
ALTER DATABASE [PRINTERSTATSFROMEMAIL] SET CONCAT_NULL_YIELDS_NULL OFF
GO
ALTER DATABASE [PRINTERSTATSFROMEMAIL] SET NUMERIC_ROUNDABORT OFF
GO
ALTER DATABASE [PRINTERSTATSFROMEMAIL] SET QUOTED_IDENTIFIER OFF
GO
ALTER DATABASE [PRINTERSTATSFROMEMAIL] SET RECURSIVE_TRIGGERS OFF
GO
ALTER DATABASE [PRINTERSTATSFROMEMAIL] SET DISABLE_BROKER
GO
ALTER DATABASE [PRINTERSTATSFROMEMAIL] SET AUTO_UPDATE_STATISTICS_ASYNC OFF
GO
ALTER DATABASE [PRINTERSTATSFROMEMAIL] SET DATE_CORRELATION_OPTIMIZATION OFF
GO
ALTER DATABASE [PRINTERSTATSFROMEMAIL] SET TRUSTWORTHY OFF
GO
ALTER DATABASE [PRINTERSTATSFROMEMAIL] SET ALLOW_SNAPSHOT_ISOLATION OFF
GO
ALTER DATABASE [PRINTERSTATSFROMEMAIL] SET PARAMETERIZATION SIMPLE
GO
ALTER DATABASE [PRINTERSTATSFROMEMAIL] SET READ_COMMITTED_SNAPSHOT OFF
GO
ALTER DATABASE [PRINTERSTATSFROMEMAIL] SET HONOR_BROKER_PRIORITY OFF
GO
ALTER DATABASE [PRINTERSTATSFROMEMAIL] SET RECOVERY SIMPLE
GO
ALTER DATABASE [PRINTERSTATSFROMEMAIL] SET MULTI_USER
GO
ALTER DATABASE [PRINTERSTATSFROMEMAIL] SET PAGE_VERIFY CHECKSUM
GO
ALTER DATABASE [PRINTERSTATSFROMEMAIL] SET DB_CHAINING OFF
GO
ALTER DATABASE [PRINTERSTATSFROMEMAIL] SET FILESTREAM( NON_TRANSACTED_ACCESS = OFF )
GO
ALTER DATABASE [PRINTERSTATSFROMEMAIL] SET TARGET_RECOVERY_TIME = 60 SECONDS
GO
ALTER DATABASE [PRINTERSTATSFROMEMAIL] SET DELAYED_DURABILITY = DISABLED
GO
ALTER DATABASE [PRINTERSTATSFROMEMAIL] SET QUERY_STORE = OFF
GO
USE [PRINTERSTATSFROMEMAIL]
GO
ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 0;
GO
ALTER DATABASE SCOPED CONFIGURATION FOR SECONDARY SET MAXDOP = PRIMARY;
GO
ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = OFF;
GO
ALTER DATABASE SCOPED CONFIGURATION FOR SECONDARY SET LEGACY_CARDINALITY_ESTIMATION = PRIMARY;
GO
ALTER DATABASE SCOPED CONFIGURATION SET PARAMETER_SNIFFING = ON;
GO
ALTER DATABASE SCOPED CONFIGURATION FOR SECONDARY SET PARAMETER_SNIFFING = PRIMARY;
GO
ALTER DATABASE SCOPED CONFIGURATION SET QUERY_OPTIMIZER_HOTFIXES = OFF;
GO
ALTER DATABASE SCOPED CONFIGURATION FOR SECONDARY SET QUERY_OPTIMIZER_HOTFIXES = PRIMARY;
GO
ALTER DATABASE [PRINTERSTATSFROMEMAIL] SET READ_WRITE
GO

Now let’s create our table called PRINTERSUNSHINE.


USE [PRINTERSTATSFROMEMAIL]
GO
/****** Object: Table [dbo].[PRINTERSUNSHINE] Script Date: 2018-06-18 20:44:17 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[PRINTERSUNSHINE](
[ID] [bigint] IDENTITY(1,1) NOT NULL,
[PRINTERID] [nvarchar](50) NULL,
[PRINTERNAME] [nvarchar](50) NULL,
[PRINTERTYPE] [nvarchar](50) NULL,
[TIMEOFDAY] [nvarchar](50) NULL,
[PAPERSPRINTED] [nvarchar](50) NULL,
[DOUBLESIDED] [nvarchar](50) NULL,
[USERS] [nvarchar](50) NULL,
CONSTRAINT [PK_PRINTERSUNSHINE] PRIMARY KEY CLUSTERED
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO

Now head to Visual Studio. I am using VS 2015; you can use any version you like. I like the Community versions. Create a new console app or solution, and issue this command in Tools > NuGet Package Manager > Package Manager Console:

Install-Package OpenPop.NET -Version 2.0.6.1120

Right-click on the project and create a new folder called EF. We are going to add our ADO.NET Entity Data Model there. Call it PRINTERSTATSFROMEMAIL, and follow the step-by-step screens to add the entity model from our newly created table to the console app.

After that, in Program.cs, add two using statements at the top:

using OpenPop.Mime;
using OpenPop.Pop3;

This lets us use the OpenPop classes with POP3 functionality.

And below is the actual code for Program.cs, with the Main method that is the entry point for console apps:


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using OpenPop.Mime;
using OpenPop.Pop3;

namespace ReadEmailsFromPrinterAndStoreThem
{
    class Program
    {
        static void Main(string[] args)
        {
            bool f = FetchAllMessages();
        }

        public static bool FetchAllMessages()
        {
            bool done = false;
            EF.PRINTERSTATSFROMEMAILEntities dbps = new EF.PRINTERSTATSFROMEMAILEntities();
            // The client disconnects from the server when being disposed
            using (Pop3Client client = new Pop3Client())
            {
                client.Connect("pop.gmail.com", 995, true);
                client.Authenticate("recent:yournickname@gmail.com", "PASSWORD");
                int messageCount = client.GetMessageCount();
                for (int i = messageCount; i > 0; i--)
                {
                    Message eMsg = client.GetMessage(i);
                    string result = System.Text.Encoding.UTF8.GetString(eMsg.RawMessage);
                    string[] lines = result.Split(new[] { "\r\n", "\r", "\n" }, StringSplitOptions.None);
                    for (int ij = 0; ij < lines.Length; ij++)
                    {
                        if (lines[ij].ToString().Contains("Content-Type: text/plain"))
                        {
                            string PRINTER_ID = lines[ij + 2].ToString().Split(':').Skip(1).FirstOrDefault();
                            string PRINTER_NAME = lines[ij + 3].ToString().Split(':').Skip(1).FirstOrDefault();
                            string PRINTERTYPE = lines[ij + 4].ToString().Split(':').Skip(1).FirstOrDefault();
                            string TIMEOFDAY = lines[ij + 5].ToString().Split(':').Skip(1).FirstOrDefault();
                            string PAPERSPRINTED = lines[ij + 6].ToString().Split(':').Skip(1).FirstOrDefault();
                            string DOUBLESIDED = lines[ij + 7].ToString().Split(':').Skip(1).FirstOrDefault();
                            string USERS = lines[ij + 8].ToString().Split(':').Skip(1).FirstOrDefault();
                            EF.PRINTERSUNSHINE psunshine = new EF.PRINTERSUNSHINE
                            {
                                PRINTERID = PRINTER_ID,
                                PRINTERNAME = PRINTER_NAME,
                                PRINTERTYPE = PRINTERTYPE,
                                TIMEOFDAY = TIMEOFDAY,
                                PAPERSPRINTED = PAPERSPRINTED,
                                DOUBLESIDED = DOUBLESIDED,
                                USERS = USERS
                            };
                            dbps.PRINTERSUNSHINE.Add(psunshine);
                            dbps.SaveChanges();
                        }
                    }
                    client.DeleteMessage(i);
                }
                client.Disconnect();
                client.Dispose();
                done = true;
                return done;
            }
        }
    }
}

We can use our imagination, or the requirements at hand, and put this code in a Windows Service that repeats every couple of seconds. Or we can create a JavaScript function with a timer and setTimeout, and with the help of the XMLHttpRequest AJAX object, call our function in the Home controller or at some REST endpoint. In short, we can create our own API and then call it from anywhere.
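
For instance, here is a minimal sketch of a polling loop around the FetchAllMessages method above; the 30-second interval is an arbitrary choice:

using System;
using System.Threading;

static void Main(string[] args)
{
    while (true)
    {
        try
        {
            // Grab and store any new printer emails
            FetchAllMessages();
        }
        catch (Exception ex)
        {
            // Log and keep polling instead of crashing the process
            Console.WriteLine("Fetch failed: " + ex.Message);
        }
        Thread.Sleep(TimeSpan.FromSeconds(30));
    }
}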

Please do not forget to replace the email and password with your actual email and password.

Moreover, and as I mentioned before, the data inside an email can be structured or unstructured, HTML-based or TEXT-based, with different encodings. It happened to me once with structured data, so to say. A printer was sending structured TEXT-based emails, hundreds per day, with fixed line positions, meaning the heading for the printer name would always be on line one. Then one day, I asked my system administrator to resend me a new email from a new printer he had added, in order for me to add code to match its name. And I, without thinking, replied to his email, saying "Thanks", that's it.

The next day, we stopped receiving any data in the database; if you queried for that day's data, there was nothing, meaning the program did not insert anything.

I opened the original program that was fetching the email, which I wrote myself. Everything seemed fine, including the new additions, but somehow it was throwing an index-out-of-range exception. I also opened the temp text file which I use to store each new email in text format after I read it, and there it was! The email that threw the exception was there, because the file had not been overwritten with a newer email. What is interesting is that it was not a regular email with printer statistics; it started with <!DOCTYPE html PUBLIC, meaning it was an HTML document, while all emails from the printer come as plain TEXT only, for better readability and clearer handling of their content. Also, remember that I was dealing with structured data, so the lines I hard-coded did the following mapping, if you wish:

Line one contains Printer Name and Printer Model

Line five contains three different printer variables that I need

And so on…

Their order never changes.

Now, this email was, first of all, an HTML email, meaning that the HTML tags were there in text form, like <br> <hr> <p>. They were all there. And it was not word-wrapped, nor laid out per line at all. In short: unstructured. And this is because it came from my email client, which sends emails in HTML format.

I then added a global variable to hold which line we are on while traversing the file, and replaced each hard-coded line index with that global constant. Also, I deleted the reply, because it contained only duplicate data, and in both cases I would not need it.
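
A minimal sketch of that refactor, with hypothetical offsets and names matching the parsing loop above:

// One place to maintain the line layout of the printer emails
const int PrinterIdOffset = 2;
const int PrinterNameOffset = 3;
const int PrinterTypeOffset = 4;

// ...inside the parsing loop, instead of bare numbers:
string PRINTER_ID = lines[ij + PrinterIdOffset].Split(':').Skip(1).FirstOrDefault();
string PRINTER_NAME = lines[ij + PrinterNameOffset].Split(':').Skip(1).FirstOrDefault();
string PRINTERTYPE = lines[ij + PrinterTypeOffset].Split(':').Skip(1).FirstOrDefault();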

The program then worked like a charm: with structured data, grabbing all the emails' content, inserting it into the database, then deleting the actual email.

 

Add Chart to ASP.NET MVC project.

Adding charts to your project usually takes a little bit of thinking. There are many open source plugins available, but choosing the one that fits your needs, and the needs of your client, can sometimes be tricky.

I chose Chart.js, a lightweight open source JavaScript plugin available on GitHub, because it has all the basic functionality that I need, with a lot of easy customization options, including, for example, percentage display of your data, multiple data sets in a single chart, an optional labelling plugin, coloring choices, and nice hover style options.

It is also very simple to set its size to fit desktop, tablet, and mobile devices.

And as you might have guessed, mobile browsing is growing day by day, with smartphones becoming the preferred choice for busy everyday users, and Chart.js does an amazing job of displaying charts on mobile devices, so users can see our data clearly and quickly.

In this project, I am also going to use Bootstrap.css, for presentation purposes and for mobile and tablet visibility. So we are only going to reference two libraries: Bootstrap.css and Chart.js.

First, we will create a new ASP.NET MVC project, and in the Models folder we will add a new class called ChartModel.cs. The class is below:


using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;

namespace AddingChartToASPNetMVC.Models
{
    public class ChartModel
    {
        //First chart points
        public string firstchart_point_a;
        public string firstchart_point_b;
        public string firstchart_point_c;
        //Second chart points - data set one
        public string secondchart_ds1_point_a;
        public string secondchart_ds1_point_b;
        public string secondchart_ds1_point_c;
        public string secondchart_ds1_point_e;
        public string secondchart_ds1_point_f;
        //Second chart points - data set two
        public string secondchart_ds2_point_a;
        public string secondchart_ds2_point_b;
        public string secondchart_ds2_point_c;
        public string secondchart_ds2_point_e;
        public string secondchart_ds2_point_f;
    }
}


As you can see from the above, our first chart has only one data set, with three data points. Our second chart has two data sets with five data points each.

Now open _Layout.cshtml, which is the layout page for our Index view, and add the code below:


<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Add chart in ASP.NET MVC</title>
    <link href="~/Content/bootstrap.css" rel="stylesheet" />
    <script src="~/Content/Chart.js"></script>
</head>
<body>
    <div class="jumbotron jumbotron-fluid">
        <div class="container">
            <h1 class="display-4">Adding charts to ASP.NET MVC project</h1>
            <p class="lead">
                This page shows how to add two charts, PIE and BAR, with percentages display from actual values.
                Bootstrap adds cross device capability, and Chart.js adds the actual charts. Labelling plugin for Chart.js from github is also used
                to draw percent labels directly on the chart itself.
            </p>
            <h5><a href="https://thoughtsonprogramming.wordpress.com">Thoughts on Programming</a></h5>
        </div>
    </div>
    @RenderBody()
</body>
</html>


As you can see from the code above, we reference bootstrap.css and Chart.js inside our project. Then we add a jumbotron for presentation purposes, with a title and description.

Next comes our core functionality, the Index.cshtml view:


@model AddingChartToASPNetMVC.Models.ChartModel
@{
    ViewBag.Title = "Home Page";
}
<div class="container">
    <div class="row">
        <div class="col-sm-6">
            <div style="width:360px; height:300px;">
                <canvas id="pieChart" style="width:360px; height:300px;"></canvas>
                <script>
                    var canvas = document.getElementById("pieChart");
                    var ctx = canvas.getContext('2d');
                    Chart.defaults.global.defaultFontColor = 'black';
                    Chart.defaults.global.defaultFontSize = 12;
                    var theHelp = Chart.helpers;
                    var data = {
                        labels: ["VALUE A", "VALUE B", "VALUE C"],
                        datasets: [{
                            fill: true,
                            backgroundColor: [
                                '#ff6384',
                                '#9bbb59',
                                '#36a2eb'
                            ],
                            data: [@Model.firstchart_point_a, @Model.firstchart_point_b, @Model.firstchart_point_c],
                            borderColor: ['#ff6384', '#9bbb59', '#36a2eb'],
                            borderWidth: [1, 1, 1]
                        }]
                    };
                    var options = {
                        title: {
                            display: true,
                            text: 'First Chart',
                            position: 'top'
                        }
                    };
                    // Chart declaration:
                    var myPieChart = new Chart(ctx, {
                        type: 'pie',
                        data: data,
                        options: options
                    });
                    //Plugin from github. Example:
                    //https://github.com/chartjs/Chart.js/blob/master/samples/data_labelling.html
                    Chart.plugins.register({
                        afterDatasetsDraw: function (chartInstance, easing) {
                            // To only draw at the end of animation, check for easing === 1
                            var ctx = chartInstance.chart.ctx;
                            var numberOfDatasets = 0;
                            chartInstance.data.datasets.forEach(function (dataset, i) {
                                numberOfDatasets = numberOfDatasets + 1;
                            });
                            if (numberOfDatasets == 1) {
                                chartInstance.data.datasets.forEach(function (dataset, i) {
                                    var meta = chartInstance.getDatasetMeta(i);
                                    if (!meta.hidden) {
                                        meta.data.forEach(function (element, index) {
                                            // Draw the text in black, with the specified font
                                            ctx.fillStyle = 'black';
                                            var fontSize = 15;
                                            var fontStyle = 'normal';
                                            var fontFamily = 'Helvetica Neue';
                                            ctx.font = Chart.helpers.fontString(fontSize, fontStyle, fontFamily);
                                            // Just naively convert to string for now
                                            var dataString;
                                            if ((dataset.data[0] + dataset.data[1] + dataset.data[2]) == 0) {
                                                dataString = 0;
                                            }
                                            else {
                                                dataString = Math.round((dataset.data[index] * 100) / (dataset.data[0] + dataset.data[1] + dataset.data[2]));
                                            }
                                            // Make sure alignment settings are correct
                                            ctx.textAlign = 'center';
                                            ctx.textBaseline = 'middle';
                                            var padding = 5;
                                            var position = element.tooltipPosition();
                                            ctx.fillText(dataString + '%', position.x, position.y - (fontSize / 2) - padding);
                                        });
                                    }
                                });
                            }
                        }
                    });
                </script>
            </div>
        </div>
        <div class="col-sm-6">
            <div style="width:365px; height:300px; text-align:center">
                <canvas id="myChart2" style="width:365px; height:300px;"></canvas>
                <script>
                    var ctx = document.getElementById("myChart2").getContext("2d");
                    var data = {
                        labels: ["LABEL 1", "LABEL 2", "LABEL 3", "LABEL 4", "LABEL 5"],
                        datasets: [{
                            label: "VALUE ONE",
                            backgroundColor: "#9bbb59",
                            data: [@Model.secondchart_ds1_point_a, @Model.secondchart_ds1_point_b, @Model.secondchart_ds1_point_c, @Model.secondchart_ds1_point_e, @Model.secondchart_ds1_point_f]
                        }, {
                            label: "VALUE 2",
                            backgroundColor: "#36a2eb",
                            data: [@Model.secondchart_ds2_point_a, @Model.secondchart_ds2_point_b, @Model.secondchart_ds2_point_c, @Model.secondchart_ds2_point_e, @Model.secondchart_ds2_point_f]
                        }]
                    };
                    var myBarChart = new Chart(ctx, {
                        type: 'bar',
                        data: data,
                        options: {
                            responsive: true,
                            animation: false,
                            legend: {
                                display: true
                            },
                            title: {
                                display: true,
                                text: 'Second Chart'
                            },
                            barValueSpacing: 20,
                            scales: {
                                yAxes: [{
                                    ticks: {
                                        min: 0,
                                    }
                                }]
                            }
                        }
                    });
                </script>
            </div>
        </div>
    </div>
</div>

In the code above, we put our canvas tag inside an outer div so that it fits properly when viewed on small devices:

<div style="width:360px; height:300px;">

The next important JavaScript variable inside the script tags is data. It holds the actual points that we want to present:

data: [@Model.secondchart_ds1_point_a, @Model.secondchart_ds1_point_b, @Model.secondchart_ds1_point_c, @Model.secondchart_ds1_point_e, @Model.secondchart_ds1_point_f]

Those are model values that we set in the Home controller. And this is our Home controller:


using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;

namespace AddingChartToASPNetMVC.Controllers
{
    public class HomeController : Controller
    {
        public ActionResult Index()
        {
            AddingChartToASPNetMVC.Models.ChartModel myChartModel = new Models.ChartModel();
            //Data points to feed our first chart
            myChartModel.firstchart_point_a = "60";
            myChartModel.firstchart_point_b = "500";
            myChartModel.firstchart_point_c = "30";
            //Data points to feed First Dataset of our Second Chart
            myChartModel.secondchart_ds1_point_a = "10";
            myChartModel.secondchart_ds1_point_b = "20";
            myChartModel.secondchart_ds1_point_c = "30";
            myChartModel.secondchart_ds1_point_e = "40";
            myChartModel.secondchart_ds1_point_f = "50";
            //Data points to feed Second Dataset of our Second Chart
            myChartModel.secondchart_ds2_point_a = "15";
            myChartModel.secondchart_ds2_point_b = "25";
            myChartModel.secondchart_ds2_point_c = "37";
            myChartModel.secondchart_ds2_point_e = "48";
            myChartModel.secondchart_ds2_point_f = "53";
            return View("Index", myChartModel);
        }
        public ActionResult About()
        {
            ViewBag.Message = "Your application description page.";
            return View();
        }
        public ActionResult Contact()
        {
            ViewBag.Message = "Your contact page.";
            return View();
        }
    }
}

As you can see from the code above, we set our points in the Index action and return our model to the view that we created earlier:

return View("Index", myChartModel);

Also, in order to draw our data points on the chart itself, each percentage in its place on the Pie chart, we need to use the plugin shown above. We want percentages only on the first chart, which is the only one with a single data set, so we check for that condition inside our plugin.

First, we initialize a variable to hold the number of data sets. Because this plugin runs for every chart on the page, it lets us know how many data sets each chart has:

var numberOfDatasets = 0;

And here we get the actual number of data sets:

chartInstance.data.datasets.forEach(function (dataset, i) {numberOfDatasets = numberOfDatasets + 1;});

Since we know that our Pie chart has only one data set, we can check for its condition here:

if (numberOfDatasets == 1) {

And, if the condition is true, we continue and draw the percentages. We need to generate the percentages ourselves too; they are not included. To do this, we compute each percentage and put it in our dataString variable as such:

dataString = Math.round((dataset.data[index] * 100) / (dataset.data[0] + dataset.data[1] + dataset.data[2]));

And then display the dataString on the chart itself with a % sign appended (for example, with the values 60, 500 and 30, the first slice is labelled Math.round(60 * 100 / 590) = 10%):

ctx.fillText(dataString + '%', position.x, position.y - (fontSize / 2) - padding);

And in the end we get the following Pie chart, with percentages:

PIE-CHART

And, because of the data set condition we specified, the second chart is drawn without percentages, as such:

BAR-CHART

Also, here is a working video of the project:

 

The source code of the project is on GitHub:

https://github.com/thoughtsonprogramming/AddingChartToASPNetMVC

How to create a Salesforce Case in REST and WSDL using the Salesforce API

Source code on Github:

REST

https://github.com/thoughtsonprogramming/SALESFORCE_REST_create_case

WSDL

https://github.com/thoughtsonprogramming/SALESFORCE_WSDL_create_case

We can create Cases in Salesforce in two different ways. One is using the REST API: call the POST method at the endpoint that Salesforce provides, pass the required parameters, and create a Case.

The other is using WSDL and SOAP, as web services, and their exposed methods and API. But to use the WSDL API, we need to physically download the WSDL file from Salesforce to our machine, store it on disk, and add a local reference to it in our Visual Studio solution. There is no online link to the web service where you can simply point your discovery tool and see all the exposed methods.

1

When making the REST call, the endpoint depends on which kind of Salesforce account you have. For authentication, if you are on your company's sandbox account, you need to use:

https://test.salesforce.com/services/oauth2/token

On the other hand, if you have a developer Salesforce account, then to authenticate successfully with OAuth you need to use:

https://login.salesforce.com/services/oauth2/token

Also, and this is a necessity: if you are using the REST API to authenticate yourself and then create a case, you need to create a Connected App.

If you are on a sandbox account under your company, click on Setup next to your logged-in name. Under Quick Links, then under App, click on Manage Apps. Scroll to the bottom and you will see the heading Connected Apps; to the right of it there is a New button. Click on it to create a new connected app for OAuth REST authentication purposes.

You will be presented with a form; fields marked with red horizontal bars are required. Filling in only the required ones is enough.

-Connected App Name: any name you want for your app.

-API Name: you can make it the same as the Connected App Name.

-Contact Email: same as your login email.

The next section is API (Enable OAuth Settings).

Check the checkbox labelled "Enable OAuth Settings".

For Callback URL, put any dummy web address starting with https. Ex: https://mydummysite.com

In the Selected OAuth Scopes multi-select, choose Access and manage your data (api), and add it to the empty Selected OAuth Scopes list by pressing the Add button.

Leave checked (or check) the checkbox labeled "Require Secret for Web Server Flow".

Hit the Save button below.

You will see the message: "Allow from 2-10 minutes for your changes to take effect on the server before using the connected app." Press the Continue button, and you will be presented with the settings page for the newly created app. Note down the two fields under API (Enable OAuth Settings): Consumer Key and Consumer Secret.

Next we need to reset our security token and note it down. Head to Your Name at the top right, then My Settings, then the Personal item on the left, then click on Reset My Security Token, and click the button named "Reset Security Token".

The new security token will be sent to your email. Write it down too; when authenticating through the API, this token is appended to your password.

At this point, we have everything we need to write the REST call in C# that creates a new case.

The response after a successful OAuth authentication and authorization at Salesforce should look something like this:

REST_user_authenticated

Our REST app is divided into two parts. The first part calls one Salesforce endpoint to authenticate and authorize ourselves. The second part calls another REST endpoint's POST method to create the new case.
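As a rough sketch of those two parts using plain HttpClient (the API version, Case fields, and placeholder credentials below are illustrative; adjust them to your org, and use the test.salesforce.com host for a sandbox):

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

public class SalesforceRestSketch
{
    public static async Task Main()
    {
        using (var http = new HttpClient())
        {
            // Part 1: authenticate. Note that the password field is your
            // Salesforce password with the security token appended to it.
            var authBody = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                ["grant_type"] = "password",
                ["client_id"] = "YOUR_CONSUMER_KEY",
                ["client_secret"] = "YOUR_CONSUMER_SECRET",
                ["username"] = "YOUR_LOGIN_EMAIL",
                ["password"] = "YOUR_PASSWORD" + "YOUR_SECURITY_TOKEN"
            });
            var authResponse = await http.PostAsync(
                "https://login.salesforce.com/services/oauth2/token", authBody);
            var auth = JObject.Parse(await authResponse.Content.ReadAsStringAsync());
            string accessToken = (string)auth["access_token"];
            string instanceUrl = (string)auth["instance_url"];

            // Part 2: POST to the sObject REST endpoint to create the Case.
            var caseJson = new JObject
            {
                ["Subject"] = "Printer is broken",
                ["Description"] = "Case created from C# via REST"
            };
            var request = new HttpRequestMessage(HttpMethod.Post,
                instanceUrl + "/services/data/v39.0/sobjects/Case")
            {
                Content = new StringContent(caseJson.ToString(), Encoding.UTF8, "application/json")
            };
            request.Headers.Add("Authorization", "Bearer " + accessToken);
            var createResponse = await http.SendAsync(request);
            Console.WriteLine(await createResponse.Content.ReadAsStringAsync());
        }
    }
}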

After you create a new Case in Salesforce using REST, the response should look something like this:

case_created_response

If you look closely, there is the word "Created".

Please note: when using REST HTTP calls to create a new case in Salesforce, we are not using any downloadable libraries from Salesforce, any SDKs, or anything else. It is only HTTP, C#, and the https endpoints on Salesforce servers, and that's it. This is very important to remember, because the second way, as this post's title says, deals with WSDL and SOAP.

You will also have to install the DeveloperForce.Force and Newtonsoft.Json packages from NuGet in order to create a new Case:

DeveloperForce_NewtonsoftJson

After you create a case and log in to your (for example, developer) Salesforce account, you should see the new cases listed as such:

salesforce_new_cases

2

The second way to create a case in Salesforce is by using traditional web services. The only difference is that Salesforce requires you to download a particular WSDL, depending on which API you want to work with, and integrate it into your development environment, which in our case is Visual Studio and C#. We need the enterprise WSDL.

In our sandbox environment, under our company, let's click on Setup beside our login name, and in Quick Find/Search, type api. Below, under the Build section, there is a Develop section, and beneath it an API link. Click on it, and a page with the title API WSDL opens to the right with the following WSDLs:

api-selections-list

-Enterprise WSDL

-Partner WSDL

-Apex WSDL

-Metadata WSDL

-Tooling WSDL

-Delegated Authentication WSDL

-Client Certificate

-Enterprise Package Version Settings

-Partner Package Version Settings

Select Enterprise WSDL and beneath it click on Generate Enterprise WSDL.

On the next screen you can choose the API versions that you want. We will keep the default, which is the latest. Click Generate.

API-SF-SELECT

This will download or open an XML file. Save it in an accessible directory; we will add it as a service reference to our WSDLCreateSfCase project.

After you add this reference, you will be able to create new cases on a developer Salesforce account as well.
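As a hedged sketch of how the generated proxy is typically used (SforceService, LoginResult, SessionHeader, Case, sObject and SaveResult are the names the enterprise WSDL usually generates; treat them, and the Case fields, as illustrative):

// Log in through the enterprise WSDL proxy and create a Case.
var binding = new SforceService();

// Again, the password is your Salesforce password with the security token appended.
LoginResult loginResult = binding.login("YOUR_LOGIN_EMAIL", "YOUR_PASSWORD" + "YOUR_SECURITY_TOKEN");

// Point the proxy at the server returned by login and attach the session id.
binding.Url = loginResult.serverUrl;
binding.SessionHeaderValue = new SessionHeader { sessionId = loginResult.sessionId };

// Create the case and inspect the result.
var newCase = new Case { Subject = "Printer is broken", Description = "Case created via WSDL" };
SaveResult[] results = binding.create(new sObject[] { newCase });
Console.WriteLine(results[0].success ? "Case created: " + results[0].id : "Creation failed");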

After successful login you should see something like this:

wsdl_login_success

And after creating a case using WSDL, you should see something like this:

wsdl_case_creation_success

Thank you

Distribution Outlet App using Database First Approach

This project is also available on GitHub:

https://github.com/thoughtsonprogramming/MyDistributionOutlet

I enjoy creating databases; it trains the analytical part of my mind. Besides, when designing one while working with someone else, there is a lot of room for improvement from hearing another opinion, because that is where most of the business rules are set.

I also used to work in a distribution warehouse center, where big trucks would come in, products in boxes would be unloaded by workers and put on wooden pallets, and those pallets would be put on jigger lifts or a forklift and moved to their temporary storage location in the warehouse.

Jigger Lift

JiggerLift

Forklift

forklift-truck

Then, workers would pack new carton boxes for new orders to be delivered to different shops and outlets, label them based on the client's name, and put them in a line.

Carton Boxes

cartonbox

Elsewhere in the same warehouse, a special space is dedicated to ORDERS. Orders are printed on paper by the warehouse manager, with the different product names and quantities, per order and perhaps per client, on a single sheet. Workers would then gather all the boxes needed for a single order from different sections of the warehouse and bring them there.

Then those boxes, per order, would be distributed into the boxes standing in a long line.

Once a box in that line is full, it is closed, sealed with regular packing tape, and again put on a wooden pallet for order sending. Note that I am intentionally not using the word shipment here, so as not to confuse you, the developer and programmer, with something else. I will tell you why later.

As you might have guessed, there are many inventory management systems out there, and almost every store with a warehouse has some kind of tracking mechanism for its merchandise.

This particular example is not an IMS (inventory management system). It is more of a general distribution warehouse outlet program that warehouse managers and their co-managers use to track, label, and instantly count products that come in from a supplier and, perhaps the same day, leave the warehouse.

I am going to use SHIPPING only in relation to the SUPPLIER's products coming into our warehouse, because some may confuse it with shipments to the client. In fact, some use shipping to mean shipping orders to clients, but we are not going to do that, because it gets very confusing later what SHIPPING is and what SHIPMENT is, in terms of definitions.

We will use the Database First approach, because this design is at our disposal, it is convenient for our situation, and it is actually necessary for our warehouse distribution outlet: in this environment, there are many nuances that are better handled from the beginning in the database logic.

We will do the heavy thinking in the database layer: design our logical and physical database, clarify all definitions, set our own definitions and standards, and then do the simple, light front end design.

So we are going to separate our app into three layers: the back end, which is the database design and implementation; the front end, which is a web page built with Bootstrap from the beginning to accommodate mobiles of all sorts, as well as browsers of all sorts; and then we are going to jump a little bit into our middle layer implementation, which handles logic such as copying a product, adding a new product, and so on.

In short this is what is going to happen:

The SUPPLIER is going to send us a SHIPMENT of the products that we need.

Our distribution warehouse app is going to store all our products, label them, name them, order them, and copy them, because copying helps us later to create new or similar products by simply selecting an existing copied one and configuring it a bit.

Our clients are going to make ORDERS. We will assemble each order and send it to the client, making revenue and selling products.

To start designing our database, I think it is a good idea to keep up with the chain. At least this helps me not to forget where our database objects are.

If we keep up with the chain, then we can save a lot of time designing this, finish quickly, and hand it to the client ASAP.

Ok. Chain !

Let’s go !

We will begin with SUPPLIER. I will not bore you with details; as you might have guessed, all tables have IDs, primary keys, and essential columns that describe and relate ONLY to the SUPPLIER object and definition.

So, we will need those columns:


supplierID bigint
supplierName nvarchar(MAX)
countryID bigint
supplierProvince nvarchar(MAX)
supplierCity nvarchar(MAX)
supplierPostalCode nvarchar(50)
supplierTelephone nvarchar(MAX)
supplierFax nvarchar(MAX)
supplierEmail nvarchar(MAX)
supplierStreet nvarchar(MAX)
supplierUnitNumber nvarchar(MAX)

As you can see, we put only nouns that describe the SUPPLIER: its country, fax number, telephone number, and address. countryID is a foreign key to the COUNTRY table, which is next in our chain.

So, the COUNTRY table is the object we will use to refer our supplier to; in other words, one country can have many suppliers, or there can be many suppliers from one single country. Therefore, we need a one to many relationship from the COUNTRY table to the SUPPLIER table, so our SUPPLIER table will have a foreign key countryID relating to the countryID primary key in the COUNTRY table. Here is the COUNTRY table:


countryID bigint
countryName nvarchar(MAX)
countryAbbreviation nvarchar(MAX)
countryFlag nvarchar(MAX)

So below are the two tables that we have just created: SUPPLIER and COUNTRY. We need this to see how to create the one to many relationship from COUNTRY to SUPPLIER:

SUPPLIER-TBL

COUNTRY-TBL

To create a one to many relationship in SSMS from COUNTRY to SUPPLIER, always start with the foreign key table, which is SUPPLIER in our case. Right-click on it, go to Design, right-click in the white space to the right, and you will see the options below; click on Relationships, as shown:

RelationshipsWindow01

Once you click on Relationships, you will see the following window in SSMS:

RelationshipsWindow02

Click Add on the left; a new relationship is added. Then click the ellipsis next to Tables And Columns Specification to open the window where you configure the foreign key column (many) and the primary key column (one) of the one to many relationship.

RelationshipsWindow03

Click OK. As you can see, we have configured the one to many relationship from the COUNTRY table to the SUPPLIER table. And so on; I am not going to repeat this, because we will do the same for the other tables in our chain.

It is also very important to normalize our database to first normal form, or at least close to it, meaning that we need to eliminate repeating values in rows completely. It is not impossible if you try hard enough.

Then we create another table called SHIPMENT. It will hold the products that our SUPPLIER sends us. Remember, as mentioned earlier, SHIPMENT in our case is NOT an order to our clients; it is the shipment our supplier sends us with the products we asked for. We will refer to shipments to our clients as ORDERs, inside our middle tier layer later on.

Shipment Table:


shipmentID bigint
countryID bigint
shipmentCity nvarchar(MAX)
shipmentProvince nvarchar(MAX)

Now, to achieve first normal form between SHIPMENT and SUPPLIER, we need to create a new, intermediate table between SHIPMENT and SUPPLIER called SHIPMENT_SUPPLIER. And, of course, as you may have guessed, we will add relations joining those two tables through this third table. So we will add a one to many relationship from SUPPLIER to SHIPMENT_SUPPLIER and from SHIPMENT to SHIPMENT_SUPPLIER, where:

  1. SUPPLIER  PK  -> SHIPMENT_SUPPLIER FK
  2. SHIPMENT PK -> SHIPMENT_SUPPLIER FK

Now we are done joining shipment to supplier.

Next, we need details about our shipment, like the shipment date, the shipment arrival date, and so on. So we will add a new table that describes our shipment in more detail. We will call this table SHIPMENT_DETAIL:


shipmentDetailsID bigint
shipmentID bigint
shipmentDate datetime
shipmentEstimatedArrivalDate datetime
shipmentArrivalDate datetime
shipmentMethod nvarchar(MAX)
numberOfProducts bigint

And we need to add a one to many relationship from the SHIPMENT table to the SHIPMENT_DETAIL table:

  1. SHIPMENT PK -> SHIPMENT_DETAIL FK

This is the first part of our database, or the first general section if you wish; it deals with suppliers' shipments and their details, and with keeping track of them.

The second part of our database design has to do, of course, with the PRODUCT that is stored on shelves in our distribution warehouse, rotated around here and there, assembled and packed in cardboard boxes, and then sent to clients for their ORDERS.

Let's create the PRODUCT table, a simple table with three columns. The description of a PRODUCT we will separate into another table called PRODUCT_DESCRIPTION, and the details of each product, like size, price, and weight, go into a third table called PRODUCT_DESCRIPTION_DETAIL. As you can see, this is also a small CHAIN of three tables.

So, to summarize our second part of our database design, we need three tables that describe PRODUCT:

  1. PRODUCT
  2. PRODUCT_DESCRIPTION
  3. PRODUCT_DESCRIPTION_DETAIL

PRODUCT:


productID bigint
productName nvarchar(MAX)
productSerialNumber nvarchar(MAX)

PRODUCT_DESCRIPTION:


productDescriptionID bigint
productID bigint
productDescLong nvarchar(MAX)
prodctCopied bit

Now add a one to many relationship from PRODUCT to PRODUCT_DESCRIPTION on the corresponding productID fields in both tables. They do not need to be named the same, as long as they have exactly the same data type.

  1. PRODUCT  PK on productID  (one)  -> PRODUCT_DESCRIPTION  FK on productID  (many)

Now add the third table:

PRODUCT_DESCRIPTION_DETAIL:


productDescriptionDetailsID bigint
productDescriptionID bigint
productColor nvarchar(MAX)
productWeight nvarchar(MAX)
productSize nvarchar(MAX)
productPrice nvarchar(MAX)

And add one to many relationship in our CHAIN from PRODUCT_DESCRIPTION to PRODUCT_DESCRIPTION_DETAIL:

  1. PRODUCT_DESCRIPTION on productDescriptionID PK (one) -> PRODUCT_DESCRIPTION_DETAIL on productDescriptionID FK (many)

So the second part is done. But we are missing something. These two parts, the SUPPLIER and SHIPMENT part and the PRODUCT part, are not linked. They are separate clouds in the sky, floating, not related in any way.

So we will relate them by creating a new table called SHIPMENT_PRODUCT_DETAIL, which will join the two sections using one table from each corresponding section:

  1. SHIPMENT_DETAIL (SECTION ONE)
  2. PRODUCT (SECTION TWO)

So here is the table SHIPMENT_PRODUCT_DETAIL:


shipmentProductDetailsID bigint
shipmentDetailsID bigint
productID bigint

And now we create two relationships to those two tables:

  1. SHIPMENT_DETAIL on PK shipmentDetailsID (one) -> SHIPMENT_PRODUCT_DETAIL on FK shipmentDetailsID (many)
  2. PRODUCT on PK productID (one) -> SHIPMENT_PRODUCT_DETAIL on FK productID (many)

After finishing our third part, we review everything, all database logic, and we see that something is missing.

What is missing is BOXING DETAILS: how many boxes of each product have arrived, how many smaller packs are inside the big boxes, and how many pieces are in each of these sub-boxes.

Now, to add all those details to the PRODUCT table would not be fair, because logically it is a shipment-related matter, but at the same time it is also a product-related matter.

So to settle this dispute, we create a new table called PRODUCT_BOXING_DETAILS, and create a one to many relationship to it from our last created table, SHIPMENT_PRODUCT_DETAIL, which joins our two so-called theoretical database design sections.

PRODUCT_BOXING_DETAILS:


productBoxingDetailsID bigint
shipmentProductDetailsID bigint
numberOfBoxes bigint
numberOfPacks bigint
numberOfProducts bigint
numberOfProductsPerPack bigint
numberOfPacksPerBox bigint

And create a one to many relationship from SHIPMENT_PRODUCT_DETAIL to PRODUCT_BOXING_DETAILS:

  1. SHIPMENT_PRODUCT_DETAIL on shipmentProductDetailsID PK (one) -> PRODUCT_BOXING_DETAILS on shipmentProductDetailsID FK (many)

As you can see, all those design disputes exist only in theory; the database itself does not mind either way. Of course, one can design a database in whatever way one wants.

Also, it is a good idea to set NONCLUSTERED indexes on columns that we will query a lot. This will help speed up our SELECT queries and bulk selects. You can also combine the CLUSTERED index with NONCLUSTERED indexes, depending on your needs. There are also third-party open source plugins for SSMS that analyze execution plans and literally suggest which index, with which columns and options, to add; the resulting speedup can be visible even to the naked eye, which is amazing, because everyone is obsessed with speeding up their SQL queries. Tell me otherwise.

Below is a diagram of our created Distribution Outlet database.

Distribution-DB-Diagram

And the SQL for this database is below; I named the database WH01:


USE [WH01]
GO
/****** Object: Table [dbo].[COUNTRY] Script Date: 2017-02-05 16:51:04 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[COUNTRY](
[countryID] [bigint] IDENTITY(1,10) NOT NULL,
[countryName] [nvarchar](max) NOT NULL,
[countryAbbreviation] [nvarchar](max) NOT NULL,
[countryFlag] [nvarchar](max) NOT NULL,
CONSTRAINT [PK_COUNTRY] PRIMARY KEY CLUSTERED
(
[countryID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
/****** Object: Table [dbo].[PRODUCT] Script Date: 2017-02-05 16:51:04 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[PRODUCT](
[productID] [bigint] IDENTITY(1,10) NOT NULL,
[productName] [nvarchar](max) NOT NULL,
[productSerialNumber] [nvarchar](max) NOT NULL,
CONSTRAINT [PK_PRODUCT] PRIMARY KEY CLUSTERED
(
[productID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
/****** Object: Table [dbo].[PRODUCT_BOXING_DETAILS] Script Date: 2017-02-05 16:51:04 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[PRODUCT_BOXING_DETAILS](
[productBoxingDetailsID] [bigint] IDENTITY(1,10) NOT NULL,
[shipmentProductDetailsID] [bigint] NOT NULL,
[numberOfBoxes] [bigint] NOT NULL,
[numberOfPacks] [bigint] NOT NULL,
[numberOfProducts] [bigint] NOT NULL,
[numberOfProductsPerPack] [bigint] NOT NULL,
[numberOfPacksPerBox] [bigint] NOT NULL,
CONSTRAINT [PK_PRODUCT_BOXING_DETAILS] PRIMARY KEY CLUSTERED
(
[productBoxingDetailsID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
/****** Object: Table [dbo].[PRODUCT_DESCRIPTION] Script Date: 2017-02-05 16:51:04 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[PRODUCT_DESCRIPTION](
[productDescriptionID] [bigint] IDENTITY(1,10) NOT NULL,
[productID] [bigint] NOT NULL,
[productDescLong] [nvarchar](max) NOT NULL,
[prodctCopied] [bit] NOT NULL,
CONSTRAINT [PK_PRODUCT_DESCRIPTION] PRIMARY KEY CLUSTERED
(
[productDescriptionID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
/****** Object: Table [dbo].[PRODUCT_DESCRIPTION_DETAIL] Script Date: 2017-02-05 16:51:04 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[PRODUCT_DESCRIPTION_DETAIL](
[productDescriptionDetailsID] [bigint] IDENTITY(1,10) NOT NULL,
[productDescriptionID] [bigint] NOT NULL,
[productColor] [nvarchar](max) NOT NULL,
[productWeight] [nvarchar](max) NOT NULL,
[productSize] [nvarchar](max) NOT NULL,
[productPrice] [nvarchar](max) NOT NULL,
CONSTRAINT [PK_PRODUCT_DESCRIPTION_DETAILS] PRIMARY KEY CLUSTERED
(
[productDescriptionDetailsID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
/****** Object: Table [dbo].[SHIPMENT] Script Date: 2017-02-05 16:51:04 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[SHIPMENT](
[shipmentID] [bigint] IDENTITY(1,10) NOT NULL,
[countryID] [bigint] NULL,
[shipmentCity] [nvarchar](max) NULL,
[shipmentProvince] [nvarchar](max) NULL,
CONSTRAINT [PK_SHIPMENT] PRIMARY KEY CLUSTERED
(
[shipmentID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
/****** Object: Table [dbo].[SHIPMENT_DETAIL] Script Date: 2017-02-05 16:51:04 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[SHIPMENT_DETAIL](
[shipmentDetailsID] [bigint] IDENTITY(1,10) NOT NULL,
[shipmentID] [bigint] NOT NULL,
[shipmentDate] [datetime] NULL,
[shipmentEstimatedArrivalDate] [datetime] NULL,
[shipmentArrivalDate] [datetime] NULL,
[shipmentMethod] [nvarchar](max) NULL,
[numberOfProducts] [bigint] NULL,
CONSTRAINT [PK_SHIPMENT_DETAILS] PRIMARY KEY CLUSTERED
(
[shipmentDetailsID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
/****** Object: Table [dbo].[SHIPMENT_PRODUCT_DETAIL] Script Date: 2017-02-05 16:51:04 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[SHIPMENT_PRODUCT_DETAIL](
[shipmentProductDetailsID] [bigint] IDENTITY(1,10) NOT NULL,
[shipmentDetailsID] [bigint] NULL,
[productID] [bigint] NULL,
CONSTRAINT [PK_SHIPMENT_PRODUCT_DETAILS] PRIMARY KEY CLUSTERED
(
[shipmentProductDetailsID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
/****** Object: Table [dbo].[SHIPMENT_SUPPLIER] Script Date: 2017-02-05 16:51:04 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[SHIPMENT_SUPPLIER](
[shipmentSupplierID] [bigint] IDENTITY(1,10) NOT NULL,
[shipmentID] [bigint] NOT NULL,
[supplierID] [bigint] NOT NULL,
CONSTRAINT [PK_SHIPMENT_SUPPLIER] PRIMARY KEY CLUSTERED
(
[shipmentSupplierID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
/****** Object: Table [dbo].[SUPPLIER] Script Date: 2017-02-05 16:51:04 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[SUPPLIER](
[supplierID] [bigint] IDENTITY(1,10) NOT NULL,
[supplierName] [nvarchar](max) NOT NULL,
[countryID] [bigint] NOT NULL,
[supplierProvince] [nvarchar](max) NOT NULL,
[supplierCity] [nvarchar](max) NOT NULL,
[supplierPostalCode] [nvarchar](50) NOT NULL,
[supplierTelephone] [nvarchar](max) NOT NULL,
[supplierFax] [nvarchar](max) NOT NULL,
[supplierEmail] [nvarchar](max) NOT NULL,
[supplierStreet] [nvarchar](max) NOT NULL,
[supplierUnitNumber] [nvarchar](max) NOT NULL,
CONSTRAINT [PK_SUPPLIER] PRIMARY KEY CLUSTERED
(
[supplierID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
ALTER TABLE [dbo].[PRODUCT_BOXING_DETAILS] WITH CHECK ADD CONSTRAINT [FK_PRODUCT_BOXING_DETAILS_SHIPMENT_PRODUCT_DETAIL] FOREIGN KEY([shipmentProductDetailsID])
REFERENCES [dbo].[SHIPMENT_PRODUCT_DETAIL] ([shipmentProductDetailsID])
ON UPDATE CASCADE
ON DELETE CASCADE
GO
ALTER TABLE [dbo].[PRODUCT_BOXING_DETAILS] CHECK CONSTRAINT [FK_PRODUCT_BOXING_DETAILS_SHIPMENT_PRODUCT_DETAIL]
GO
ALTER TABLE [dbo].[PRODUCT_DESCRIPTION] WITH CHECK ADD CONSTRAINT [FK_PRODUCT_DESCRIPTION_PRODUCT] FOREIGN KEY([productID])
REFERENCES [dbo].[PRODUCT] ([productID])
GO
ALTER TABLE [dbo].[PRODUCT_DESCRIPTION] CHECK CONSTRAINT [FK_PRODUCT_DESCRIPTION_PRODUCT]
GO
ALTER TABLE [dbo].[PRODUCT_DESCRIPTION_DETAIL] WITH CHECK ADD CONSTRAINT [FK_PRODUCT_DESCRIPTION_DETAIL_PRODUCT_DESCRIPTION] FOREIGN KEY([productDescriptionID])
REFERENCES [dbo].[PRODUCT_DESCRIPTION] ([productDescriptionID])
GO
ALTER TABLE [dbo].[PRODUCT_DESCRIPTION_DETAIL] CHECK CONSTRAINT [FK_PRODUCT_DESCRIPTION_DETAIL_PRODUCT_DESCRIPTION]
GO
ALTER TABLE [dbo].[SHIPMENT] WITH CHECK ADD CONSTRAINT [FK_SHIPMENT_COUNTRY] FOREIGN KEY([countryID])
REFERENCES [dbo].[COUNTRY] ([countryID])
GO
ALTER TABLE [dbo].[SHIPMENT] CHECK CONSTRAINT [FK_SHIPMENT_COUNTRY]
GO
ALTER TABLE [dbo].[SHIPMENT] WITH CHECK ADD CONSTRAINT [FK_SHIPMENT_SHIPMENT] FOREIGN KEY([shipmentID])
REFERENCES [dbo].[SHIPMENT] ([shipmentID])
GO
ALTER TABLE [dbo].[SHIPMENT] CHECK CONSTRAINT [FK_SHIPMENT_SHIPMENT]
GO
ALTER TABLE [dbo].[SHIPMENT_DETAIL] WITH CHECK ADD CONSTRAINT [FK_SHIPMENT_DETAIL_SHIPMENT] FOREIGN KEY([shipmentID])
REFERENCES [dbo].[SHIPMENT] ([shipmentID])
ON UPDATE CASCADE
ON DELETE CASCADE
GO
ALTER TABLE [dbo].[SHIPMENT_DETAIL] CHECK CONSTRAINT [FK_SHIPMENT_DETAIL_SHIPMENT]
GO
ALTER TABLE [dbo].[SHIPMENT_PRODUCT_DETAIL] WITH CHECK ADD CONSTRAINT [FK_SHIPMENT_PRODUCT_DETAIL_PRODUCT] FOREIGN KEY([productID])
REFERENCES [dbo].[PRODUCT] ([productID])
ON UPDATE CASCADE
ON DELETE CASCADE
GO
ALTER TABLE [dbo].[SHIPMENT_PRODUCT_DETAIL] CHECK CONSTRAINT [FK_SHIPMENT_PRODUCT_DETAIL_PRODUCT]
GO
ALTER TABLE [dbo].[SHIPMENT_PRODUCT_DETAIL] WITH CHECK ADD CONSTRAINT [FK_SHIPMENT_PRODUCT_DETAIL_SHIPMENT_DETAIL] FOREIGN KEY([shipmentDetailsID])
REFERENCES [dbo].[SHIPMENT_DETAIL] ([shipmentDetailsID])
ON UPDATE CASCADE
ON DELETE CASCADE
GO
ALTER TABLE [dbo].[SHIPMENT_PRODUCT_DETAIL] CHECK CONSTRAINT [FK_SHIPMENT_PRODUCT_DETAIL_SHIPMENT_DETAIL]
GO
ALTER TABLE [dbo].[SHIPMENT_SUPPLIER] WITH CHECK ADD CONSTRAINT [FK_SHIPMENT_SUPPLIER_SHIPMENT] FOREIGN KEY([shipmentID])
REFERENCES [dbo].[SHIPMENT] ([shipmentID])
ON UPDATE CASCADE
ON DELETE CASCADE
GO
ALTER TABLE [dbo].[SHIPMENT_SUPPLIER] CHECK CONSTRAINT [FK_SHIPMENT_SUPPLIER_SHIPMENT]
GO
ALTER TABLE [dbo].[SHIPMENT_SUPPLIER] WITH CHECK ADD CONSTRAINT [FK_SHIPMENT_SUPPLIER_SUPPLIER] FOREIGN KEY([supplierID])
REFERENCES [dbo].[SUPPLIER] ([supplierID])
ON UPDATE CASCADE
ON DELETE CASCADE
GO
ALTER TABLE [dbo].[SHIPMENT_SUPPLIER] CHECK CONSTRAINT [FK_SHIPMENT_SUPPLIER_SUPPLIER]
GO
ALTER TABLE [dbo].[SUPPLIER] WITH CHECK ADD CONSTRAINT [FK_SUPPLIER_COUNTRY] FOREIGN KEY([countryID])
REFERENCES [dbo].[COUNTRY] ([countryID])
ON UPDATE CASCADE
ON DELETE CASCADE
GO
ALTER TABLE [dbo].[SUPPLIER] CHECK CONSTRAINT [FK_SUPPLIER_COUNTRY]
GO
ALTER TABLE [dbo].[SUPPLIER] WITH CHECK ADD CONSTRAINT [FK_SUPPLIER_SUPPLIER] FOREIGN KEY([supplierID])
REFERENCES [dbo].[SUPPLIER] ([supplierID])
GO
ALTER TABLE [dbo].[SUPPLIER] CHECK CONSTRAINT [FK_SUPPLIER_SUPPLIER]
GO


Once the database is done, we will move to the front end design, as mentioned at the beginning of this article: create the ASP.NET MVC project, add Entity Framework for our database, generate some test data, create a nice cross-device and cross-browser design, and run our project's first page.

We will use Bootstrap Cards, since cards fit related information on one sheet, called a Card. Below is a picture of a Bootstrap card, captured and edited from my mobile phone.

I would not be surprised if we start programming on our phones in the future as well.

But first, let's populate our database with some test data:

Test Data:

At the end of this post, a link to download the Visual Studio solution will be provided. A file named WH01testdata.sql inside the project has the SQL to add data to our database.

Using Visual Studio 2015, let's create a new ASP.NET MVC project with Web API support and name it MyDistributionOutlet, as shown in the image below:

DistributionCreateProjectMVC

The next step is to add Entity Framework for our newly created WH01 database.

Now, of course you can build the data access layer however you like within the client app or program that will use your database; it does not need to be Entity Framework. But for me it is easier to use LINQ through Entity Framework, because it saves time, has protection against SQL injection built in (parameterized queries), gives direct conversion to IEnumerable and an easy, direct way of manipulating and working with IQueryable, and offers a transaction-like approach to objects, with the ability to queue multiple ADD operations and commit them together with a single commit.

So it is convenient, as you can see; that's why I use it.

To be honest with you, I am only going to write the first page, which searches for products, the logic behind it, and the copy-product functionality, in case you want to add a similar product later on.

Now remember, all of this takes place in the context of our distribution outlet and the people, or “actors”, who are going to be using it there.

The rest of the code I will let you write on your own, like making a new ORDER object, saving it, and preparing to print it for the end user.

You can use this code as you wish; it is open source, with no license at all.

I will put the download link for the code and a working video at the end, as with my other blog posts.

We will use Visual Studio 2012, just for a change. You can use any version, but the process of creating an ASP.NET MVC Web API project is a little bit different.

Please create a new ASP.NET Web API project and name it MyDistributionOutlet.

In the Views -> Shared directory, in the _Layout.cshtml page, add the following references in the header:

  1. <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css">
  2. <!-- HTML5 shim and Respond.js for IE8 support of HTML5 elements and media queries -->
    <!-- WARNING: Respond.js doesn't work if you view the page via file:// -->
    <!--[if lt IE 9]>
    <script src="https://oss.maxcdn.com/html5shiv/3.7.3/html5shiv.min.js"></script>
    <script src="https://oss.maxcdn.com/respond/1.4.2/respond.min.js"></script>
    <![endif]-->
  3. <script src="https://code.jquery.com/jquery-3.1.1.js"></script>
  4. <script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js"></script>

And inside the <body> tag, right after it and before @RenderBody(), add our newly designed SEARCH and NAVIGATION menu, cross-device for mobile and desktop. By the way, I spent quite a while designing this menu, because I did not want anything extra there, and I also needed the text box to be the same size on desktop and when zoomed in a mobile viewport:

 


<nav role="navigation" class="navbar navbar-default">
    <div class="navbar-header">
        <button type="button" data-target="#navbarCollapse" data-toggle="collapse" class="navbar-toggle">
            <span class="sr-only">Toggle navigation</span>
            <span class="icon-bar"></span>
            <span class="icon-bar"></span>
            <span class="icon-bar"></span>
        </button>
        <a href="/DOutlet/Index" class="navbar-brand">Outlet D1</a>
    </div>
    <form role="search" class="navbar-form navbar-left" action="/Home/Search" name="searchForm" method="post">
        <div class="form-group" style="margin-left: 15px; margin-right: 15px;">
            <div class="input-group">
                <input type="text" class="form-control" id="searchDialog" name="searchDialog" placeholder="Search" style="height: 36px" />
                <div class="input-group-addon">
                    <button type="submit">Search</button>
                </div>
            </div>
            <div class="well well-sm">
                <div class="checkbox">
                    <label>
                        <input type="checkbox" name="cbProduct">
                        PRODUCT
                    </label>
                </div>
                <div class="checkbox">
                    <label>
                        <input type="checkbox" name="cbShipment">
                        SHIPMENT
                    </label>
                </div>
                <div class="checkbox">
                    <label>
                        <input type="checkbox" name="cbGridview">
                        Grid view
                    </label>
                </div>
            </div>
        </div>
    </form>
    <div id="navbarCollapse" class="collapse navbar-collapse">
        <ul class="nav navbar-nav">
            <li class="dropdown">
                <a data-toggle="dropdown" class="dropdown-toggle" href="#">Options<b class="caret"></b></a>
                <ul role="menu" class="dropdown-menu">
                    <li><a href="/Home/NewProduct">Add new product</a></li>
                    <li><a href="/Home/NewShipment">Make new order</a></li>
                    <li><a href="#">Change product status</a></li>
                    <li class="divider"></li>
                    <li><a href="#">Trash</a></li>
                </ul>
            </li>
        </ul>
        <ul class="nav navbar-nav navbar-right">
            <li><a href="#">Login</a></li>
        </ul>
    </div>
</nav>


Run the project in debug or release mode, and you will see the following views:

DESKTOP VIEW:

doutletdesktopview

DESKTOP VIEW

MOBILE VIEW:

doutletmobileview

MOBILE IPHONE 8 VIEW

ZOOMED AND MINIMIZED BROWSER VIEWPORT VIEW:

doutletzoomedbrowserviewportview

BROWSER ZOOMED VIEW

As you can see, HTML5 rocks! Anyway, this is just the front page when the app starts. You can also add login capability, so that the start screen is your desired login screen, but it is up to you whether to add it or not.

This menu contains a search box where you can type a product name and get all products with that or a similar name. You can also select Grid view, and instead of the default Cards view you will get a regular grid view with a supplier sub-menu for each product.

Search by shipment is intentionally left for you to implement, or not, as you choose.

We will continue with our plan: if you scroll back to the beginning of this post, I said that we would first do the database design, which is the back end, and then switch completely to the front end design (what we are doing now). So we will keep thinking in those terms and not jump around here and there.

Next in our front end design, we need a search page that will display our search results. Let's design it! Once the user clicks the search button, the request goes to the Search action (ActionResult Search), which in turn returns either the GridSearch view or the SearchProduct view.

The GridSearch view will display our search results in a grid, and the SearchProduct view will display them in a Cards layout. You can also mix in views built with traditional Web Forms.

Since both of these views will contain our model data, as in Model View Controller, we need a model. We will create two classes in the Models folder:

  1. SearchProductModel
  2. SearchProductModelList

doutlettwoclassesModels
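The image above shows the two classes; since it may be hard to read, here is an abridged sketch of their shape, inferred from how the Search action below populates them (the full classes are in the downloadable solution):

using System;
using System.Collections.Generic;

namespace MyDistributionOutlet.Models
{
    // Abridged sketch - the member names are taken from the Search action below.
    public class SearchProductModel
    {
        public long m_productId;
        public string m_productName;
        public string m_productSerialNumber;
        public string m_productDescriptionLong;
        public string m_productPrice;
        public string m_productSize;
        public string m_productColor;
        public string m_productWeight;
        // ...plus the supplier and shipment members used below, such as
        // m_supplierName (string), m_shipmentDate (DateTime?) and
        // m_shipmentNumberOfBoxes (long?).
    }

    // Wrapper so each view receives the whole result set as its model.
    public class SearchProductModelList
    {
        public List<SearchProductModel> listofsearchproducts;
    }
}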

And at the top of the GridSearch view and at the top of the SearchProduct.aspx page we will add the following:

<%@ Page Language="C#" MasterPageFile="~/Views/Shared/GeneralMaster.master" Inherits="System.Web.Mvc.ViewPage<MyDistributionOutlet.Models.SearchProductModelList>" %>

Notice the Inherits directive. This is how we pass the model type to an ASP.NET web page.

Also, we used a master page, so that we do not have to put the search box on every page.

OK. Then we add a new folder called EntityFramework, and add to it an EDMX, or ADO.NET Entity Data Model, and call it WH01Entities.
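To illustrate the earlier point about queueing several Add operations and committing them with a single call, here is a minimal sketch against the WH01Entities context we just created (assuming the EDMX generates DbContext-style classes matching the WH01 tables, as the code below does):

// Sketch: two pending inserts, one commit.
using (var db = new WH01Entities())
{
    db.COUNTRY.Add(new COUNTRY { countryName = "Canada", countryAbbreviation = "CA", countryFlag = "ca.png" });
    db.COUNTRY.Add(new COUNTRY { countryName = "Germany", countryAbbreviation = "DE", countryFlag = "de.png" });

    // Nothing is written until SaveChanges, which sends both inserts
    // to SQL Server together.
    db.SaveChanges();
}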

Now switch to the Controllers folder, as per Model View Controller, and in our HomeController add the Search ActionResult, as shown below:


[HttpPost]
public ActionResult Search(FormCollection w_formCollection)
{
    string searchTerm = w_formCollection["searchDialog"].ToString();
    string byProduct = w_formCollection["cbProduct"] == null ? "off" : "on";
    string byShipment = w_formCollection["cbShipment"] == null ? "off" : "on";
    string gView = w_formCollection["cbGridview"] == null ? "off" : "on";
    SearchProductModelList spml = new SearchProductModelList();
    List<SearchProductModel> sp = new List<SearchProductModel>();
    using (var searchPhrase = new WH01Entities())
    {
        if (byProduct == "on" && byShipment == "on")
        {
        }
        //only search products
        else if (byProduct == "on" && byShipment == "off")
        {
            var searchProductresults = from i in searchPhrase.PRODUCT_DESCRIPTION_DETAIL
                                       where i.PRODUCT_DESCRIPTION.PRODUCT.productName.Contains(searchTerm)
                                       select new SearchProductModel()
                                       {
                                           m_productId = i.PRODUCT_DESCRIPTION.PRODUCT.productID,
                                           m_productName = i.PRODUCT_DESCRIPTION.PRODUCT.productName,
                                           m_productSerialNumber = i.PRODUCT_DESCRIPTION.PRODUCT.productSerialNumber,
                                           m_productDescriptionLong = i.PRODUCT_DESCRIPTION.productDescLong,
                                           m_productPrice = i.productPrice,
                                           m_productSize = i.productSize,
                                           m_productColor = i.productColor,
                                           m_productWeight = i.productWeight,
                                       };
            List<SearchProductModel> pmltemp = new List<SearchProductModel>();
            pmltemp = searchProductresults.ToList();
            foreach (var searchProduct in pmltemp)
            {
                var supplierForProductQuery = from s in searchPhrase.SHIPMENT_PRODUCT_DETAIL
                                              where (s.productID == searchProduct.m_productId)
                                              select new SearchProductModel()
                                              {
                                                  m_supplierId = s.SHIPMENT_DETAIL.SHIPMENT.SHIPMENT_SUPPLIER.Select(sup => sup.SUPPLIER.supplierID).FirstOrDefault(),
                                                  m_supplierName = s.SHIPMENT_DETAIL.SHIPMENT.SHIPMENT_SUPPLIER.Select(sup => sup.SUPPLIER.supplierName).FirstOrDefault(),
                                                  m_supplierCity = s.SHIPMENT_DETAIL.SHIPMENT.SHIPMENT_SUPPLIER.Select(sup => sup.SUPPLIER.supplierCity).FirstOrDefault(),
                                                  m_supplierCountry = s.SHIPMENT_DETAIL.SHIPMENT.SHIPMENT_SUPPLIER.Select(sup => sup.SUPPLIER.COUNTRY.countryName).FirstOrDefault(),
                                                  m_supplierEmail = s.SHIPMENT_DETAIL.SHIPMENT.SHIPMENT_SUPPLIER.Select(sup => sup.SUPPLIER.supplierEmail).FirstOrDefault(),
                                                  m_supplierFax = s.SHIPMENT_DETAIL.SHIPMENT.SHIPMENT_SUPPLIER.Select(sup => sup.SUPPLIER.supplierFax).FirstOrDefault(),
                                                  m_supplierPostalCode = s.SHIPMENT_DETAIL.SHIPMENT.SHIPMENT_SUPPLIER.Select(sup => sup.SUPPLIER.supplierPostalCode).FirstOrDefault(),
                                                  m_supplierProvince = s.SHIPMENT_DETAIL.SHIPMENT.SHIPMENT_SUPPLIER.Select(sup => sup.SUPPLIER.supplierProvince).FirstOrDefault(),
                                                  m_supplierStreet = s.SHIPMENT_DETAIL.SHIPMENT.SHIPMENT_SUPPLIER.Select(sup => sup.SUPPLIER.supplierStreet).FirstOrDefault(),
                                                  m_supplierTelephone = s.SHIPMENT_DETAIL.SHIPMENT.SHIPMENT_SUPPLIER.Select(sup => sup.SUPPLIER.supplierTelephone).FirstOrDefault(),
                                                  m_supplierUnitNumber = s.SHIPMENT_DETAIL.SHIPMENT.SHIPMENT_SUPPLIER.Select(sup => sup.SUPPLIER.supplierUnitNumber).FirstOrDefault(),
                                                  m_shipmentCountry = s.SHIPMENT_DETAIL.SHIPMENT.COUNTRY.countryName,
                                                  m_shipmentCity = s.SHIPMENT_DETAIL.SHIPMENT.shipmentCity,
                                                  m_shipmentProvince = s.SHIPMENT_DETAIL.SHIPMENT.shipmentProvince,
                                                  m_shipmentDate = s.SHIPMENT_DETAIL.shipmentDate,
                                                  m_shipmentEstimatedArrivalDate = s.SHIPMENT_DETAIL.shipmentEstimatedArrivalDate,
                                                  m_shipmentArrivalDate = s.SHIPMENT_DETAIL.shipmentArrivalDate,
                                                  m_shipmentMethod = s.SHIPMENT_DETAIL.shipmentMethod,
                                                  m_shipmentNumberOfProductsGeneral = s.SHIPMENT_DETAIL.numberOfProducts,
                                                  m_shipmentNumberOfBoxes = searchPhrase.PRODUCT_BOXING_DETAILS.Where(d => d.shipmentProductDetailsID.Equals(s.shipmentProductDetailsID)).Select(xo => xo.numberOfBoxes).FirstOrDefault(),
                                                  m_shipmentNumberofPacks = searchPhrase.PRODUCT_BOXING_DETAILS.Where(d => d.shipmentProductDetailsID.Equals(s.shipmentProductDetailsID)).Select(xo => xo.numberOfPacks).FirstOrDefault(),
                                                  m_shipmentNumberOfProductsDetail = searchPhrase.PRODUCT_BOXING_DETAILS.Where(d => d.shipmentProductDetailsID.Equals(s.shipmentProductDetailsID)).Select(xo => xo.numberOfProducts).FirstOrDefault(),
                                                  m_shipmentNumberOfProductsPerPack = searchPhrase.PRODUCT_BOXING_DETAILS.Where(d => d.shipmentProductDetailsID.Equals(s.shipmentProductDetailsID)).Select(xo => xo.numberOfProductsPerPack).FirstOrDefault(),
                                                  m_shipmentNumberOfPacksPerBox = searchPhrase.PRODUCT_BOXING_DETAILS.Where(d => d.shipmentProductDetailsID.Equals(s.shipmentProductDetailsID)).Select(xo => xo.numberOfPacksPerBox).FirstOrDefault()
                                              };
                var query = supplierForProductQuery.ToList();
                searchProduct.m_supplierId = Convert.ToInt32(query.Select(m => m.m_supplierId).FirstOrDefault());
                searchProduct.m_supplierName = query.Select(m => m.m_supplierName).FirstOrDefault();
                searchProduct.m_supplierCity = query.Select(m => m.m_supplierCity).FirstOrDefault();
                searchProduct.m_supplierCountry = query.Select(m => m.m_supplierCountry).FirstOrDefault();
                searchProduct.m_supplierEmail = query.Select(m => m.m_supplierEmail).FirstOrDefault();
                searchProduct.m_supplierFax = query.Select(m => m.m_supplierFax).FirstOrDefault();
                searchProduct.m_supplierPostalCode = query.Select(m => m.m_supplierPostalCode).FirstOrDefault();
                searchProduct.m_supplierProvince = query.Select(m => m.m_supplierProvince).FirstOrDefault();
                searchProduct.m_supplierStreet = query.Select(m => m.m_supplierStreet).FirstOrDefault();
                searchProduct.m_supplierTelephone = query.Select(m => m.m_supplierTelephone).FirstOrDefault();
                searchProduct.m_supplierUnitNumber = query.Select(m => m.m_supplierUnitNumber).FirstOrDefault();
                searchProduct.m_shipmentCountry = query.Select(m => m.m_shipmentCountry).FirstOrDefault();
                searchProduct.m_shipmentCity = query.Select(m => m.m_shipmentCity).FirstOrDefault();
                searchProduct.m_shipmentProvince = query.Select(m => m.m_shipmentProvince).FirstOrDefault();
                searchProduct.m_shipmentDate = query.Select(m => m.m_shipmentDate).FirstOrDefault();
                searchProduct.m_shipmentEstimatedArrivalDate = query.Select(m => m.m_shipmentEstimatedArrivalDate).FirstOrDefault();
                searchProduct.m_shipmentArrivalDate = query.Select(m => m.m_shipmentArrivalDate).FirstOrDefault();
                searchProduct.m_shipmentMethod = query.Select(m => m.m_shipmentMethod).FirstOrDefault();
                searchProduct.m_shipmentNumberOfProductsGeneral = query.Select(m => m.m_shipmentNumberOfProductsGeneral).FirstOrDefault();
                searchProduct.m_shipmentNumberOfBoxes = query.Select(m => m.m_shipmentNumberOfBoxes).FirstOrDefault();
                searchProduct.m_shipmentNumberofPacks = query.Select(m => m.m_shipmentNumberofPacks).FirstOrDefault();
                searchProduct.m_shipmentNumberOfProductsDetail = query.Select(m => m.m_shipmentNumberOfProductsDetail).FirstOrDefault();
                searchProduct.m_shipmentNumberOfProductsPerPack = query.Select(m => m.m_shipmentNumberOfProductsPerPack).FirstOrDefault();
                searchProduct.m_shipmentNumberOfPacksPerBox = query.Select(m => m.m_shipmentNumberOfPacksPerBox).FirstOrDefault();
                sp.Add(searchProduct);
            }
        }
        //only search shipments
        else if (byShipment == "on" && byProduct == "off")
        {
        }
        spml.listofsearchproducts = sp;
        return gView == "on" ? View("GridSearch", spml) : View("SearchProduct", spml);
    }
}

Also, as I mentioned before, we will add functionality to COPY a product. Copying a product is nothing but setting the prodctCopied flag in the PRODUCT_DESCRIPTION table.

We will add the code below to our HomeController:


public ActionResult CopyProduct(FormCollection s_formCollection)
{
    // Model kept in TempData by the search page, so we can return to the same results.
    var ModelToReturn = TempData["currModel"];
    // Id of the product to copy, posted from the form (a hidden field).
    var prodId = Convert.ToInt32(s_formCollection["hprodid"]);
    var UpdateContext = new EntityFramework.WH01Entities();
    var quwry = from pdet in UpdateContext.PRODUCT_DESCRIPTION
                where pdet.productID == prodId
                select pdet;
    // Flag the matching product descriptions as copied; SaveChanges commits the update.
    foreach (PRODUCT_DESCRIPTION pd in quwry)
    {
        pd.prodctCopied = true;
    }
    UpdateContext.SaveChanges();
    return View("SearchProduct", ModelToReturn);
}

The test data and database creation scripts are available inside the project folder once you download the solution, in the test-data.sql and wh01-tables.sql files respectively.

So once we do all this and fill in some test data, we will see the following pages:

DESKTOP CARDS DEFAULT RESULT (AFTER SEARCH CLICK)

doutlet_after_search_desktop

MOBILE CARDS RESULT (AFTER SEARCH CLICK)

doutlet_after_search_MOBILE

And the card list continues below if you scroll down.

And:

DESKTOP ZOOMED HTML 5 VIEW PORT:

doutlet_after_search_BROWSER_ZOOM_PORT

Also, if you scroll, the list goes on and on with the cards that our search returned.

The GRID VIEW page:

doutlet_after_search_GRIDVIEW

It contains a sub-report and EDIT functionality on the same page.

Edit option:

doutlet_after_search_GRIDVIEW_EDIT

Thank you,

🙂