Angular SPA, asp.net core and authentication – Part 1, The Idea

An increasingly common Single Page Application (SPA) client with server API backend solution is an Angular 2/4 client over an asp.net core API.  I especially like the approach by Michal Dymel, who creates two projects: an asp.net core server and an Angular client.  Each project keeps its own tool set: I use Visual Studio Community Edition for the asp.net core server, and the Angular command line interface (CLI) plus Visual Studio Code for the client.  The Angular CLI allows quick scaffolding of client projects, as well as components, directives, services, classes, routing…   When the client is built (ng build), the files are copied to a destination set in the CLI configuration – in this case the asp.net core wwwroot folder.   Works great.

However, authentication and authorization continue to be roadblocks for me.  I have a server, can write code and manage databases, and do not want to offload to a SaaS solution.  Maybe I’ll implement an Identity Server solution at some point, but for now I want quick, lightweight setups that do not get in the way of the core applications I want to write.

When I used asp.net MVC, the authentication solution was available out of the box – simple and easy.   Tokens (JWT) are now the rage for SPA client-server work, but with this approach I need to write my own authentication/login in every client app, obtain the JWT, and then attach the token and client claims to my API calls…

I have also looked at Microsoft JavaScriptServices and its Single Page Application (SPA) templates.  The team that creates these is led by Steve Sanderson of KnockoutJS fame.   They combine asp.net core with a SPA using your choice of frontend framework (Angular, React…).  I like the idea, but I like Dymel’s two-project solution better, since the Angular CLI tool chain is kept intact and you can use the most up-to-date version of the client frameworks.   With Dymel’s two-project solution, the client build output still ends up in the wwwroot folder.

One possible modification to the two-project solution would be running the client application inside an MVC view (this is how the MS JavaScriptServices template is organized).  This would allow me to use MVC for login/authentication, anti-forgery tokens, and a backend MVC admin portal, while the remainder of the application is the Angular client talking to the asp.net core API.  Those API calls could all rely on the MVC login, as could authorization, which could be role based instead of claims based (a rough sketch follows).  There may still be a CSRF attack risk, and I might still need to work tokens into the mix.
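To make the idea concrete, here is a minimal sketch of what I have in mind (the controller names and the "User" role are hypothetical, and this is not code I have built yet): the MVC side owns the cookie login, and the API controllers simply require that login plus a role.

using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

public class HomeController : Controller
{
    // the MVC cookie login protects the page that bootstraps the Angular client
    [Authorize]
    public IActionResult Index()
    {
        // the view holds the <app-root> tag and the ng build bundles copied to wwwroot
        return View();
    }
}

[Route("api/[controller]")]
[Authorize(Roles = "User")]   // role-based authorization riding on the same cookie
public class ItemsController : Controller
{
    [HttpGet]
    public IActionResult Get() => Ok(new[] { "item 1", "item 2" });
}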

We’ll see how this works.  I plan to blog my experience implementing this in my next post.


Alice in Wonder Server

Everything gets more complicated.   Every month I log the changes in medicine, and I revise some of the same topics over and over again.  Cardiac arrest management, for example, has been honed from a cocktail of medications down to high-quality chest compressions and early defibrillation.

On the programming side there are the pain points of asp.net core and Angular 2.  But nowhere am I more confused than in server management.   When divided into overall categories (basic configuration, users and group policies, IIS) and taught in tidy lectures on the web, it seems to make sense.  But then seemingly simple things break, and despite all those lectures, I cannot solve the most basic server issues.  I would be lost without Rackspace support.

So, I have tried to improve the situation and make server admin less of an Alice in Wonderland experience.  Pluralsight does have hands-on courses in server management, and I have started working through some of them (e.g. this and this).

These courses have their own frustrations, so I’m logging my fixes for problems I ran across while completing the Windows Server with PowerShell course.

  • VMWare Player is free for non-commercial use and works well for these courses
  • Microsoft windows server evaluation ISOs are available for 3-6 month trials
    • Download the ISO for Windows Server 2012 R2, and use Datacenter with GUI
    • Download the ISO for Windows 10 Enterprise
  • On installing VMWare Virtual Machines
    • Choose “I will install the operating system later” and attach the ISO afterwards (otherwise the installer requires an MS product key)
    • Leave the VM directories as default (too hard to change)
    • Once Server is installed, install VMWare Tools
      • Change VM Settings CD/DVD from ISO file to autodetect
      • When the small notification appears (“Select what to do…”), click it to run the installer
  • Connections between VM servers have several hiccups
    • Turn off Norton firewall when making the initial connection
    • In VM Settings, change Network Adapter from NAT to Bridged (Automatic)
      • Powershell update-help seems to need NAT to work
  • Specific to the Windows Server with Powershell Course
    • In Chapter 1, Demo: Configure DC1
      • Server Administrator login username is Administrator
      • When adding AD users, credentials = administrator@wiredbrain.priv
      • The same AD credential username (email) is used in Chapter 2 for:
        • Add-Computer -DomainName wiredbrain.priv
        • Enter-PSSession
    • In Chapter 2, Demo: Enabling Remote Management was painful
      • Get-DhcpServerv4Lease did not work from Client1 (but worked on DC1)
      • Took hours to get Enter-PSSession working:
        • Used the administrator@wiredbrain.priv for username
        • Disable firewall
        • DC1 and Client1 VM Network router set to Bridged
        • Set WinRM Service to Automatic and Start
  • Other Issues
    • VM domain seems to interfere with the house network including internet
      • Power off the VMs when not in use

 

 


Angular 2 and Bootstrap 3: The Ugly Parts

I like Angular 2, especially with the CLI.  In contrast to much of the online criticism, the Angular 2 structure feels right to me, especially for larger projects.

However, I have encountered friction, especially when combining Angular with other JavaScript libraries, most notably those powering the dynamic features in Bootstrap 3 (bootstrap.js).   Ideally, I would use Angular-UI (which I used with Angular 1), but it is still in alpha for Angular 2, and it is being designed to work with Bootstrap 4 (which is also in alpha).  Angular Material might someday be a good option, but it too is still in early development.  There are other options (ng-bootstrap), but I have not had time to try them.

Some simple Bootstrap dynamic behaviors, such as collapse, can be implemented easily in Angular itself (using ngIf and animations).  But most of the Bootstrap JavaScript I call from within Angular.   Here are a few of my notes on integrating the two, even if a bit messy.

 

Button-Group – RadioButton in place of a checkBox

<!-- reconstructed markup: the blog stripped the original HTML; a Bootstrap 3 radio
     button-group standing in for a checkbox. The isPrivate binding is illustrative. -->
<label>Private?</label>
<div class="btn-group" data-toggle="buttons">
  <label class="btn btn-default" [class.active]="!isPrivate" (click)="isPrivate = false">
    <input type="radio" name="privacy"> Public
  </label>
  <label class="btn btn-default" [class.active]="isPrivate" (click)="isPrivate = true">
    <input type="radio" name="privacy"> Private
  </label>
</div>

Popovers

// at the top of the component file: declare var $: any;  (jQuery is loaded globally with bootstrap.js)

startPopover(e: MouseEvent) {
  const e$ = $(e.target);
  e$.popover({ container: 'body' });
  e$.popover('show');
}

endPopover(e: MouseEvent) {
  $(e.target).popover('hide');
}

<a (mouseover)="startPopover($event)" (mouseleave)="endPopover($event)" role="button" class="popover-link" data-toggle="popover" title="" data-html="true"
   [attr.data-content]="'your content here'"
   [attr.data-original-title]="'your title here'"></a>

 

 

Typescript Enums as options in select box

export enum Category { Info =1, Warning =2, Error =3};

public categoryKeys = Object.keys(Category).filter(Number);
public categoryEnum :typeof Category = Category;

 

<!-- reconstructed template: the select markup was stripped by the blog;
     selectedCategory is an illustrative model property -->
<label>Category</label>
<select class="form-control" [(ngModel)]="selectedCategory">
  <option *ngFor="let key of categoryKeys" [value]="key">{{categoryEnum[key]}}</option>
</select>

Slideshow

 

@light-gray: rgba(0,0,0,0.4);

#slideshow {
  border: solid 1px gray;
  border-radius: 5px;
  position: relative;

  .carousel {
    width: 100%;
    margin: 0 auto; /* center carousel if other than 100% */
  }

  .slideOuter {
    height: 650px;
    margin-top: 20px;
  }

  img {
    height: 500px;
    margin: 0 auto;
  }

  .carousel-indicators {
    li {
      border-color: @light-gray;
    }

    .active {
      background-color: @light-gray;
    }
  }

  // .carousel-caption {
  //   color: #333;
  // }

  .carousel-indicators {
    bottom: -50px;
  }

  .carousel-indicators li {
    display: inline-block;
    width: 10px;
    height: 10px;
    margin: 1px;
    text-indent: -999px;
    cursor: pointer;
    background-color: #000 \9;
    background-color: rgba(0,0,0,0);
    border: 1px solid @light-gray;
    border-radius: 50%;
  }

  .carousel-indicators .active {
    width: 12px;
    height: 12px;
    margin: 0;
    background-color: @light-gray;
  }

  .carousel-caption {
    position: absolute;
    right: 0;
    bottom: 0;
    left: 0;
    z-index: 10;
    padding-top: 20px;
    padding-bottom: 20px;
    color: #fff;
    text-align: center;
    background: rgba(0,0,0,0.4);

    h4 { color: #fff; }
  }
}

.carousel .carousel-control, .carousel .carousel-caption { visibility: hidden; }
.carousel:hover .carousel-control, .carousel:hover .carousel-caption { visibility: visible; }

 

import { Component, OnInit, Input } from '@angular/core';
import { DataService } from '../../services/data.service';
import { ActivatedRoute, Params } from '@angular/router';
import * as _ from 'lodash';

declare var $: any; // jQuery, loaded globally with bootstrap.js

@Component({
  selector: 'app-slideshow',
  templateUrl: './slideshow.component.html',
  styleUrls: ['./slideshow.component.less']
})
export class SlideshowComponent implements OnInit {
  public imageN = 0;
  public imageLinks = [];
  public active = 1;
  public pagePath = '';

  constructor(private dataService: DataService,
    private route: ActivatedRoute) { }

  ngOnInit() {
    this.route.params.subscribe((params: Params) => {
      this.imageN = Number(params['imageId']) - 1; // the imageId route param is 1-based

      // this.navigation.currentImageId = this.imageN;   (navigation is a shared service, not shown here)

      // fetch the image links; the dataService method name is illustrative
      this.dataService.getImageLinks(this.pagePath).subscribe(result => {
        this.imageLinks = result.ImageLinks;

        _(this.imageLinks).forEach(function (value, index) {
          value.id = index;
        });

        // set the current imageLink to active
        if (_.isNumber(this.imageN)) {
          const len = this.imageLinks.length;
          if (this.imageN >= 0 && this.imageN < len) {
            this.imageLinks[this.imageN].active = true;
            this.active = this.imageN;
          }
        }
      });
    });

    // track the active slide whenever the bootstrap carousel finishes sliding
    $('#slideshow').on('slid.bs.carousel', (e) => {
      const currentIndex = $('.carousel-inner div.active').index() + 1;
    });
  }
}


SSL and asp.net core

As I inch toward a working deployed app (asp.net core Web API + IdentityServer + Angular 2), another hurdle: SSL.  I have a certificate on the server, but what about the development machine?  When using IIS Express in VS this seems to magically work, but how do I get it to work directly from Kestrel?

As has often been the case for me (mostly via Pluralsight), Shawn Wildermuth to the rescue.  He describes the setup, including self-signed certificates, here.  I saved the process out as a PowerShell script:

$exe = "C:\Program Files (x86)\Windows Kits\10\bin\x64\";
cd $exe;
$filename = "e:\myCertName";
$cmd1 = ".\makecert.exe -sv " + $filename + ".pvk -n ""CN=myOrganization"" " + $filename + ".cer -r";
iex $cmd1;
$password = "myPassword";
$cmd2 = ".\pvk2pfx.exe -pvk " + $filename + ".pvk -spc " + $filename + ".cer -pfx " + $filename + ".pfx -pi " + $password;
iex $cmd2;

 

But when I deploy, I do not want to use this certificate – I want the formal server certificate.  I therefore had to use a conditional DEBUG block in Program.cs.  There is likely a better way.

 

 

// requires: using System.Security.Cryptography.X509Certificates;
var host = new WebHostBuilder()
#if DEBUG
    .UseKestrel(cfg => cfg.UseHttps(
        new X509Certificate2("myCertName.pfx", "myPassword")))
#else
    .UseKestrel()
#endif
    .UseUrls("https://localhost:5000")
    .UseContentRoot(Directory.GetCurrentDirectory())
    .UseIISIntegration()
    .UseStartup<Startup>()
    .Build();

 

Then setting the application to accept SSL was relatively straightforward in IIS.  Under advanced settings, accepting either HTTP or HTTPS works.  The “SSL Settings” link in the main IIS panel does not need to be changed – there is no need to check “Require SSL” (checking it will break the re-routing below).

Then, in the URL rewrite tab, reroute from HTTP to HTTPS as described here.

The web.config will appear as follows.  If you use the IIS forms to set this up on the server, copy the resulting settings into the web.config in Visual Studio, or your settings will be overwritten on the next publish.

 

<rewrite>
      <rules>
        <rule name="Http to https" stopProcessing="true">
          <match url="(.*)" />
          <conditions>
            <add input="{HTTPS}" pattern="off" />
          </conditions>
          <action type="Redirect" url="https://mywebsite.com{REQUEST_URI}"
                 redirectType="Found" />
        </rule>
      </rules>
    </rewrite>

Testing 1,2,3 in asp.net core

Setting up testing in asp.net core 1.1 is not intuitive.  The tooling, including the CLI, has not yet caught up to the current version.  In addition, a Google search for mocking patterns (especially for Entity Framework) will lead you astray.

  1. Check global.json in the solution folder.  It likely has two folders defined – src and test.  At least for now; global.json is likely to disappear from asp.net core in the near future (as of late 2016).
  2. Make sure all projects in the solution directory are under one of the global.json defined directories.  Do not confuse the solution folders in Solution Explorer (organization only) with the directories on disk, which are what global.json refers to.  The main projects will be in the “src” folder; the test projects should be in the “test” folder.
  3. Open the “test” directory in a command or PowerShell window.
  4. We are going to use the dotnet CLI (command line interface) to create an xUnit test project.  Currently Visual Studio does not create an asp.net core test project.
  5. mkdir MyTestProject
  6. cd MyTestProject
  7. dotnet new -t xunittest
  8. dotnet restore
  9. “dotnet test” will run the test project from the command line and will run the default example test method.

This works fine if you want to “Assert.True(true);”, which is what the sample test project contains to start.  We probably want something more useful, such as testing one of the existing projects.  If that project has been updated to dotnet 1.1, however, adding a reference to it in the current xUnit setup will fail with “incompatible versions”, since the CLI creates an asp.net core 1.0 test project.  Update the project.json.  This is what I have currently:

 

{
  "version": "1.0.0-*",
  "buildOptions": {
    "debugType": "portable"
  },
  "dependencies": {
    "System.Runtime.Serialization.Primitives": "4.1.1",
    "xunit": "2.1.0",
    "dotnet-test-xunit": "2.2.0-preview2-build1029",
    "Microsoft.AspNetCore.TestHost": "1.1.0-*",
    "Microsoft.AspNetCore.Diagnostics": "1.1.0-*",
    "Microsoft.Extensions.Logging.Console": "1.1.0-*",
    "Microsoft.AspNetCore.Mvc": "1.1.0-*",
    "Microsoft.EntityFrameworkCore.InMemory": "1.1.0",
    "Moq": "4.6.38-alpha"
  },
  "testRunner": "xunit",
  "frameworks": {
    "netcoreapp1.1": {
      "dependencies": {
        "Microsoft.NETCore.App": {
          "type": "platform",
          "version": "1.1.0"
        }
      },
      "imports": [
        "dotnet5.4",
        "portable-net451+win8"
      ]
    }
  }
}

 

The next problem I ran into was how to test a repository that is instantiated with a reference to a DbContext.  I first landed here, and spent more than an hour trying to get it to work in asp.net core.  That turned out to be a wild goose chase.  I instead found a way to test with lightweight in-memory objects.  The example below uses a DbContext that defines a DbSet of Groups (these are defined in the main project).  It also uses a user repository based on ApplicationUser.  I have a custom method (FindDictionary) that returns a dictionary of the users; its return value is mocked in the example.

 

 

using Microsoft.EntityFrameworkCore;
using Moq;
using System.Collections.Generic;
using Xunit;

public class GroupRepositoryTests
{
    DbContextOptions<MyDbContext> _appOptions;
    Mock<IUserRepository> _userRepository;

    public GroupRepositoryTests()
    {
        _appOptions = new DbContextOptionsBuilder<MyDbContext>()
            .UseInMemoryDatabase(databaseName: "MyApi")
            .Options;

        _userRepository = new Mock<IUserRepository>();

        _userRepository.Setup(x => x.FindDictionary(It.IsAny<string[]>()))
            .Returns(new Dictionary<string, User> {
                { "joe@msn.com", new User {
                    UserName = "joe@msn.com",
                    FirstName = "joe",
                    LastName = "shmoe"} }
            });
    }

    [Fact]
    public async void ShouldCreate()
    {
        using (var appContext = new MyDbContext(_appOptions))
        {
            appContext.Add(new Group {
                Name = "Group1",
                Description = "An example group",
                OwnerId = "joe@msn.com" });
            appContext.SaveChanges();

            var g = new GroupRepository(appContext, _userRepository.Object);

            var a = await g.GetAsync();

            Assert.True(a.Success);
        }
    }
}

 

Seemingly simple implementations (e.g. setting up testing for a repository) can eat up a good part of a day – and that’s before any meaningful tests are written.


IIS and the Event Log

The event log has a series of errors that I’m trying to work out.  Here are some assorted notes:

Rackspace was very helpful and recommended I tackle the WAS errors, which are coming from the asp.net CLR.  But I can see no details.  I am offered the chance to open the JIT debugger in Visual Studio – but I do not have VS installed on the server (so I need to turn off server-side debugging in web.config).  They noted that SChannel errors can often be ignored unless tied to other specific events at the same time, and recommended looking at the Event Viewer Security log for repeated failed login attempts (hack attempts).

They recommended using the Event Viewer log to isolate error times, and then reviewing the IIS logs for errors at those times.   They also recommended failed request tracing.

They also recommended decreasing my app pool recycle time (Regular Time Interval, in minutes) to about 12 hours (720 min) from the default of 29 hours (1740 min), and rebooting the server monthly.

No recommendations for any good log analyzers, but I found Log Parser, which allows SQL-style queries against the log files via a console app or PowerShell.  There is also Log Parser Studio, which adds a GUI.  An example query: SELECT * FROM '[LOGFILEPATH]' WHERE time >= '11:58:00' AND time <= '12:00:00'.   The LOGFILEPATH is set beforehand as a single file or a collection of files.  Analysis can be run on any number of file types, from IISW3CLOG to CSV files.

There is also the Debug Diagnostic Tool, which apparently takes some setup; I have not tried it yet.  It apparently helps troubleshoot hangs, slow performance, memory leaks…  There is a summary on the basics of setting up debugging.


							

Identity and the Client

Login has always seemed like an afterthought.  As a solo developer, projects grow from shiny new, interesting features and technologies.  Identity, authentication (login) and authorization (roles) get tacked on later.  Identity is boring.

Yet identity seems to be the critical foundation on which the rest of the application relies, and it is really, really difficult to get right (as evidenced by nearly daily reports of website hacking).  I have never written mission-critical applications (e.g. medical devices) or those requiring tight security (e.g. patient privacy, banking).  But I still need a reasonably secure login.  In the 2000–2010 era, I used the built-in ASP.NET identity solutions and later asp.net MVC.  They have instant, built-in support for identity databases in the framework, as well as external authentication (e.g. Google, Facebook, Microsoft).  There are even anti-forgery tokens, sanitization of input data (to prevent SQL injection attacks) and other security measures.  Easy to use – click a checkbox or dropdown on the new MVC project, and it just works.  Magic.

But that, I am told, is not the 2016 way to do things, especially not for working with un-tethered JavaScript client frameworks (e.g. Angular).   These clients should call APIs on the server with token-based authentication and authorization.  Tokens are hard.  Maybe not so hard if you use Auth0, which looks like a very nice solution.  But that is not the direction (or maybe better described as layer of indirection) I want to travel.

Server and client development in 2016 seems cluttered by utilities and services that, for a price, will replace programming challenges with magic boxes: key, challenging functionality wrapped in shiny packages with catchy names.  I send data in and get data out.  What goes on inside is mysterious.   Libraries from NuGet or npm do this as well (e.g. Newtonsoft, lodash), but at least with libraries I understand the basics of what occurs inside.

In contrast, cloud computing (e.g. Azure, AWS), despite all its great conveniences, including spinning up 1 or 50 servers around the world, is in large part composed of magic boxes.   For my needs, “owning” the whole server still makes sense.  And the costs of adding magic boxes, cloud computing, OAuth, Firebase… could quickly add up.

So, in the last month I delved into IdentityServer4 (built on asp.net core).    In the recommended configuration, IdentityServer gets its very own asp.net core project.  It can use MVC controllers/views to log a user in and access their roles/claims.  IdentityServer defines clients with scopes (e.g. openid, profile, “api”) as well as users.

What is incredible is that the primary developers, Dominick Baier and Brock Allen, have set up an entire ecosystem for identity, along with great tutorials, documentation and many samples and starter projects.  I’ve worked through the tutorials and have used Quickstart starter project #6, which in turn uses the asp.net identity database, and it works well.  Using external providers (e.g. Google, Microsoft, Facebook) also works, but I am still trying to reconcile standard username/password logins with multiple external providers: in effect, how to keep a consistent set of claims for the same user who authenticates today with a username, tomorrow via Google, and the next day via Microsoft.  Kevin Dockx covers some approaches to this in his Pluralsight course.

You’ll want to work through the IdentityServer tutorials first – the setup is not intuitive.   One asp.net core project is the IdentityServer, a second is the API “client”, and the third is the JavaScript or MVC app.  When a user needs to log in from the JavaScript or MVC app, they are redirected to IdentityServer for login.  Once login occurs, a token is issued that allows the end-user JavaScript or MVC app to access the web API.

Testing in Postman is fairly straightforward using this workflow.   You POST to [identityserver_URL]/connect/token with body data (x-www-form-urlencoded) containing client_id, client_secret, grant_type, scope, username and password.  These are all defined in the IdentityServer config except for username and password.  The grant_type text is tricky, however.  With trial and error, I found “password” for the resource owner flow and “client_credentials” for the client credentials flow; then I found that the valid grant types are listed at //localhost:5000/.well-known/openid-configuration.  Once you post to the token endpoint, an access token is returned, and it can be copied into the API call’s Authorization header to allow authorization.  The access token’s parts can be visualized by pasting it into the jwt.io site.
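The same token request can also be scripted.  Here is a minimal C# sketch of the Postman workflow above; the client id, secret, scope and user values are placeholders rather than my actual configuration.

using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

public static class TokenClientSample
{
    public static async Task<string> RequestTokenAsync()
    {
        using (var http = new HttpClient())
        {
            // same fields as the Postman x-www-form-urlencoded body
            var form = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                ["client_id"] = "myClient",          // defined in the IdentityServer config
                ["client_secret"] = "mySecret",
                ["grant_type"] = "password",          // resource owner flow
                ["scope"] = "myscope",
                ["username"] = "joe@msn.com",
                ["password"] = "myPassword"
            });

            // [identityserver_URL]/connect/token
            var response = await http.PostAsync("http://localhost:5000/connect/token", form);
            var json = await response.Content.ReadAsStringAsync();

            // the JSON body contains access_token, expires_in and token_type;
            // copy access_token into the Authorization: Bearer header of API calls
            return json;
        }
    }
}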

Everything worked well until I attempted deployment.  Then I became stuck on an error, which started as a 500 internal server error.  Once I added logging to a file (NLog Extensions – look for the resulting log files at c:\temp\nlog…), I found IDX10803: Unable to obtain configuration – it could not open the /.well-known/openid-configuration document.  This worked on the dev machine with localhost:5000, but on the server I needed to use a full URL.  That was easy enough to configure using appsettings.production.json, but the configuration document was still not being found.  After hours, it turned out I was using //myUrl/identityServer, and that URL was not resolving.  Instead, I needed to include the scheme: https:// (or http://) in front of myUrl/IdentityServer.
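For illustration, this is roughly how the authority value can be pulled from environment-specific configuration.  The key name and file layout here are assumptions, not my exact setup.

// appsettings.production.json (assumed shape):
// { "IdentityServer": { "Authority": "https://myUrl/IdentityServer" } }

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;

public class Startup
{
    public IConfigurationRoot Configuration { get; }

    public Startup(IHostingEnvironment env)
    {
        Configuration = new ConfigurationBuilder()
            .SetBasePath(env.ContentRootPath)
            .AddJsonFile("appsettings.json", optional: false)
            .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
            .Build();
    }

    public void Configure(IApplicationBuilder app)
    {
        // must include the scheme (https://...), otherwise IDX10803 is thrown at startup
        var authority = Configuration["IdentityServer:Authority"];
        // authority is then handed to UseIdentityServerAuthentication (see the snippet below)
    }
}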

One additional issue also took me several hours to figure out.  The IdentityServer example projects use “services.AddMvcCore”, not “services.AddMvc”.  As I learned, AddMvcCore is a bare-bones subset of the framework.  This worked fine until I started to add additional functionality, such as Swashbuckle (Swagger), a great API helper utility, and while following its examples I could not get it to work.  Finally, once I changed the configuration in Startup to “services.AddMvc”, everything worked.
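A minimal sketch of that Startup change; the Swashbuckle registration itself is left out, since the point is only the AddMvcCore/AddMvc switch.

using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // what the IdentityServer samples use, a bare-bones MVC subset:
        // services.AddMvcCore().AddJsonFormatters().AddAuthorization();

        // switching to the full framework is what let the Swagger examples work here
        services.AddMvc();

        // ...Swashbuckle/Swagger and other registrations follow, per their own docs
    }
}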

There are multiple roadblocks to understanding IdentityServer, OAuth2 and OpenID Connect in asp.net.  As of November 2016, Pluralsight has two courses.  Both use asp.net (not Core) and IdentityServer3 (not 4), but the overall concepts – flows/grants, clients, scopes, users – are the same in the newer versions.  I started by watching “OpenId and OAuth2 Strategies for Angular and Asp.Net”, which is very thorough and in-depth but quite overwhelming; rather than an introduction, it is more of an “everything you ever wanted to know about IdentityServer”.    I then watched “Using OAuth to Secure Your Asp.Net Api”, which is geared at an introductory level and was easier for me to get my head around.   In retrospect, I would watch this one first.  He does, however, recommend using the resource owner password flow for Xamarin mobile apps authenticating with IdentityServer, and this may be insecure due to the transfer of the client username and password; I think a browser-based authentication flow may be better for that.  The resource owner flow appears to be OK for my API, since the API is on the same server as the IdentityServer.

It took me a while to understand that access tokens are just for access.  They do not include user claims (e.g. username, email…).  To get user claims you need an id token, or you can tack claims onto the access token, but this requires the resource owner flow.   In addition, understanding how roles fit into the new claims world was not intuitive, but it is explained here.
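Once those claims do reach the API, they surface through the standard asp.net core abstractions.  A small sketch (the controller and route are made-up examples):

using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

[Route("api/[controller]")]
public class NotesController : Controller
{
    [Authorize(Roles = "admin")]      // the role claim, mapped via RoleClaimType below
    [HttpGet("whoami")]
    public IActionResult WhoAmI()
    {
        // the name claim (username/email), mapped via NameClaimType below
        return Ok(new { user = User.Identity.Name });
    }
}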

It took me days to figure out how to use asp.net core Identity (with an EF/SQL database) behind IdentityServer.  Most of this is laid out well here.  What took me the longest was getting the username (user email) and role (e.g. admin) passed as claims to the client API.  I created an IdentityServer client with the “ResourceOwnerPasswordAndClientCredentials” allowed grant type to do this, and then created a custom scope that uses the following claims (as described here):

                 Claims = new List<ScopeClaim>
                        {
                            new ScopeClaim("name"),
                            new ScopeClaim("role")
                        }

As an alternative, you can set “IncludeAllClaimsForUser = true” on the custom scope, but that adds additional claims I do not need.
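For context, the full scope definition looks roughly like this in the pre-1.0 Scope model used above (the scope name is a placeholder):

var myScope = new Scope
{
    Name = "myscope",
    Claims = new List<ScopeClaim>
    {
        new ScopeClaim("name"),
        new ScopeClaim("role")
    }
    // or, instead of listing individual claims: IncludeAllClaimsForUser = true
};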

In the API, I added the following to Startup.cs Configure.  This stops the default Microsoft JWT claim-type mapping, which tends to mess a few things up:

JwtSecurityTokenHandler.DefaultInboundClaimTypeMap.Clear();  // System.IdentityModel.Tokens.Jwt

 

I also added the following lines to the IdentityServer setup section of the API’s Configure method:

app.UseIdentityServerAuthentication(new IdentityServerAuthenticationOptions
{
    Authority = authority,
    ScopeName = "myscope",

    NameClaimType = System.Security.Claims.ClaimTypes.Name,
    RoleClaimType = System.Security.Claims.ClaimTypes.Role
});

 

Now that the API can identify the username, I can use it in database fields to record the author of entries.  I created a separate UserRepository within the API that calls the asp.net identity database to obtain additional user info, which can then be joined to the other tables.  This is a bit clunky, however.  When the API held both a “notes” table and a “users” table, note rows had an author foreign key pointing to a user in the users table, so a query for notes could automatically include information about the user.   With the users table in another database, there now need to be two queries and a join.
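In the repository, that two-step lookup looks something like the sketch below.  The Note entity and DTO are illustrative; FindDictionary is the same custom method mocked in the testing post above.

using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public class Note
{
    public string Name { get; set; }
    public string OwnerId { get; set; }
}

public class NoteDto
{
    public string Name { get; set; }
    public string Author { get; set; }
}

public class NoteRepository
{
    private readonly MyDbContext _context;
    private readonly IUserRepository _userRepository;

    public NoteRepository(MyDbContext context, IUserRepository userRepository)
    {
        _context = context;
        _userRepository = userRepository;
    }

    public async Task<IEnumerable<NoteDto>> GetNotesWithAuthorsAsync()
    {
        // query 1: the notes from the API's own database (a Note entity is assumed to be registered on the context)
        var notes = await _context.Set<Note>().ToListAsync();

        // query 2: the matching users from the separate identity database
        var ownerIds = notes.Select(n => n.OwnerId).Distinct().ToArray();
        var users = _userRepository.FindDictionary(ownerIds);

        // join the two result sets in memory
        return notes.Select(n => new NoteDto
        {
            Name = n.Name,
            Author = users.ContainsKey(n.OwnerId)
                ? users[n.OwnerId].FirstName + " " + users[n.OwnerId].LastName
                : n.OwnerId
        });
    }
}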

As an alternative, I could create a SQL Server view inside the notes database over the table in the users database (using specific column names instead of the *):

CREATE VIEW [dbo].[vUserMirror]
	AS SELECT * FROM [users].dbo.[aspnetusers];

The users view could then be linked via foreign key to the notes, and a notes query could once again include user info.   The tough part is how to create the view automatically with migrations.   Another option is to create a data project that holds all the data code, including identity.  Client projects could then reference the data project, and each could extend ApplicationUserDbContext with its own database implementation; in this way different client projects could share the same central identity database (a rough sketch follows).  However, it seems fragile as more tables are added for different client projects – the database would grow, and migrations for one client project might break the data for another.   I still need to work through this.  I have searched online for examples of how to solve it and have not found a solution.
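Roughly, the structure being described would look like the sketch below.  The names are placeholders, and this is only the shape of the idea, not a solution to the migration fragility.

using Microsoft.AspNetCore.Identity.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore;

// in the shared data project
public class ApplicationUser : IdentityUser
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public class ApplicationUserDbContext : IdentityDbContext<ApplicationUser>
{
    public ApplicationUserDbContext(DbContextOptions options) : base(options) { }
}

// in one client project: extend the shared context with that project's own tables
// (Note is the same illustrative entity sketched above)
public class NotesDbContext : ApplicationUserDbContext
{
    public NotesDbContext(DbContextOptions options) : base(options) { }

    public DbSet<Note> Notes { get; set; }
}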

The second problem with this is that I now have two DbContexts inside the project, and migrations are no longer as straightforward.   Add-Migration inside the Package Manager Console gives a cryptic error: “Unrecognized option '--build-base-path'”.  However, if the full non-PowerShell command is issued (“dotnet ef migrations add mymigration”), the error becomes “More than one DbContext was found. Specify which one to use. Use the '-Context' parameter for PowerShell commands and the '--context' parameter for dotnet commands.”  So, with a project containing more than one DbContext, use:

 

 

  dotnet ef migrations add myMigration -c MyDbContext
  dotnet ef database update -c MyDbContext


I’ve been watching as IdentityServer4 rapidly nears its full 1.0 release (as of early November it is at RC3).  I am amazed at the productivity and efficiency of Dominick Baier and Brock Allen, and the others working with them.  It seems that every time I visit their GitHub repo, they made their last changes just hours before.  Their documentation pages are stellar, as are their samples.    I wish their London SDD Conference workshop allowed for online subscription viewing.

 

 

 
