Testing 1,2,3 in asp.net core

Setting up testing in asp.net core 1.1 is not intuitive.  The tooling, including the CLI, has not yet caught up to the current version.  In addition, a Google search for mocking patterns (especially Entity Framework) will lead you astray.

  1. Check global.json in the solution folder.  It likely has two folders defined – src and test.  At least for now; this global.json is likely to disappear from asp.net core in the near future (as of late 2016).
  2. Make sure all projects in the solution directory are under one of the global.json-defined directories.  Do not confuse the solution folders in Solution Explorer (organization only) with the directories on disk, which is what global.json refers to.  The main projects will be in the “src” folder.  The test projects should be in the “test” folder.
  3. Open the “test” directory in a command or PowerShell window.
  4. We are going to use the dotnet CLI (command line interface) to create an xUnit test project.  Currently Visual Studio does not create an asp.net core test project.
  5. mkdir MyTestProject
  6. cd MyTestProject
  7. dotnet new -t xunittest
  8. dotnet restore
  9. “dotnet test” will run the test project from the command line and will run the default example test method.
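
For reference, the global.json from step 1 typically looks something like this in the preview2-era tooling (the folder names match step 2; the sdk version here is the one from my machine and is an assumption for yours):

```json
{
  "projects": [ "src", "test" ],
  "sdk": {
    "version": "1.0.0-preview2-003121"
  }
}
```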

This works fine if you want to “Assert.True(true);”, which is what the sample test project contains to start.  We probably want something more useful, such as testing one of the existing projects.  If that project, however, has been updated to dotnet 1.1, adding a reference to it in the current xUnit setup will fail with “incompatible versions”, since the CLI creates an asp.net core 1.0 test project.  Update the project.json.  This is what I have currently:

{
  "version": "1.0.0-*",
  "buildOptions": {
    "debugType": "portable"
  },
  "dependencies": {
    "System.Runtime.Serialization.Primitives": "4.1.1",
    "xunit": "2.1.0",
    "dotnet-test-xunit": "2.2.0-preview2-build1029",
    "Microsoft.AspNetCore.TestHost": "1.1.0-*",
    "Microsoft.AspNetCore.Diagnostics": "1.1.0-*",
    "Microsoft.Extensions.Logging.Console": "1.1.0-*",
    "Microsoft.AspNetCore.Mvc": "1.1.0-*",
    "Microsoft.EntityFrameworkCore.InMemory": "1.1.0",
    "Moq": "4.6.38-alpha"
  },
  "testRunner": "xunit",
  "frameworks": {
    "netcoreapp1.1": {
      "dependencies": {
        "Microsoft.NETCore.App": {
          "type": "platform",
          "version": "1.1.0"
        }
      },
      "imports": [
        "dotnet5.4",
        "portable-net451+win8"
      ]
    }
  }
}

The next problem I ran into was how to test a repository that is instantiated with a reference to a DbContext.  I first landed here, and spent more than an hour trying to get this to work in asp.net core.  It turned out to be a wild goose chase.  I instead found a way to test with lightweight in-memory objects.  The example below uses a DbContext that defines a DbSet of Groups (these are defined in the main project).  It also uses a user repository based on ApplicationUser.  I have a custom method (FindDictionary) that returns a dictionary of the users; its return is mocked in the example.

using Microsoft.EntityFrameworkCore;
using Moq;
using System.Collections.Generic;
using System.Threading.Tasks;
using Xunit;

public class GroupRepositoryTests
{
    DbContextOptions<MyDbContext> _appOptions;
    Mock<IUserRepository> _userRepository;

    public GroupRepositoryTests()
    {
        // EF Core in-memory database stands in for SQL Server
        _appOptions = new DbContextOptionsBuilder<MyDbContext>()
            .UseInMemoryDatabase(databaseName: "MyApi")
            .Options;

        // Mock the user repository's custom FindDictionary method
        _userRepository = new Mock<IUserRepository>();
        _userRepository.Setup(x => x.FindDictionary(It.IsAny<string[]>()))
            .Returns(new Dictionary<string, User> {
                { "joe@msn.com", new User {
                    UserName = "joe@msn.com",
                    FirstName = "joe",
                    LastName = "shmoe" } }
            });
    }

    [Fact]
    public async Task ShouldCreate()
    {
        using (var appContext = new MyDbContext(_appOptions))
        {
            appContext.Add(new Group {
                Name = "Group1",
                Description = "An example group",
                OwnerId = "joe@msn.com" });
            appContext.SaveChanges();

            var g = new GroupRepository(appContext, _userRepository.Object);

            var a = await g.GetAsync();

            Assert.True(a.Success);
        }
    }
}

Seemingly simple implementations (e.g. setting up testing for a repository) can eat up a good part of a day; and that’s before any meaningful tests are written.

Posted in Uncategorized

IIS and the Event Log

The event log has a series of errors that I’m trying to work out.  Here are some assorted notes:

Rackspace was very helpful and recommended I tackle the WAS errors, which are coming from the asp.net CLR.  But I can see no details.  I am offered the chance to open the JIT compiler in Visual Studio – but I do not have VS installed on the server (need to turn off server-side debugging in web.config).  They noted that Schannel errors can often be ignored unless tied to other specific events at the same time.  They also recommended looking at Event Viewer Security for repeated failed login attempts (hack attempts).

They recommended using the event viewer log to isolate error times, and then reviewing the IIS logs for errors at those times.  They also recommended Failed Request Tracing.

They also recommended decreasing my AppPool recycle time (RegularTimeInterval, in minutes) to about 12 hours (720 min) from the default of 29 hours (1740 min), and rebooting the server monthly.

No recommendations for any good log analyzers, but I found Log Parser, which allows SQL-style queries on the log files via a console app or PowerShell.  There is also a Log Parser Studio, which adds a GUI.  An example query: SELECT * FROM '[LOGFILEPATH]' WHERE time >= '11:58:00' AND time <= '12:00:00'.  The LOGFILEPATH is set beforehand as a single file or collection of files.  Analysis can be run on any number of file types, from IISW3CLOG to CSV files.

There is also Debug Diagnostic Tool which apparently takes some set-up. Have not tried this yet.  It apparently helps troubleshoot hangs, slow performance, memory leaks…  There is a summary on the basics of setting up debugging.


							

Identity and the Client

Login has always seemed like an afterthought.  As a solo developer, projects grow from shiny new, interesting features and technologies.  Identity, authentication (login) and authorization (roles) are tacked on later.  Identity is boring.

Yet identity seems to be the critical foundation on which the rest of the application relies, and it is really, really difficult to get right (as evidenced by nearly daily reports of website hacking).  I have never written any mission-critical applications (e.g. medical devices) or those requiring tight security (e.g. patient privacy, banking).  But I still need a reasonably secure login.  In 2000-2010, I used ASP's built-in identity solutions and later asp.net MVC's.  They have instant, built-in support for identity databases in the framework, as well as external authentication (e.g. Google, Facebook, Microsoft).  There are even anti-forgery tokens, sanitization of input data (to prevent SQL injection attacks) and other security measures.  Easy to use – click a checkbox or dropdown on the new MVC project, and it just works.  Magic.

But that is, I am told, not the 2016 way to do things, especially not for working with untethered javascript client frameworks (e.g. angular).  These clients should call apis on the server with token-based authentication and authorization.  Tokens are hard.  Maybe not so hard if you use Auth0, which looks like a very nice solution.  But that is not the direction (or maybe better described as layer of indirection) I want to travel.

Server and client development in 2016 seems to be cluttered by utilities/services that, for a price, will replace programming challenges with magic boxes: key, challenging functionality wrapped in shiny packages with catchy names.  I send data in and get data out.  What goes on inside is mysterious.  Libraries from NuGet or npm do this as well (e.g. Newtonsoft, lodash), but at least with libraries I understand the basics of what occurs inside.

Likewise, cloud computing (e.g. Azure, AWS), despite all its great conveniences, including spinning up 1 or 50 servers throughout the world, is in large part composed of magic boxes.  For my needs, “owning” the whole server still makes sense.  And the costs of adding magic boxes, cloud computing, OAuth, Firebase… could quickly add up.

So, in the last month I delved into IdentityServer4 (built on asp.net core).  In the recommended configuration, IdentityServer is given its very own asp.net core project.  It can use MVC controllers/views to log in a user and access their roles/claims.  IdentityServer defines clients with scopes (e.g. openid, profile, “api”) as well as users.

What is incredible is that the primary developers, Dominick Baier and Brock Allen, have set up an entire ecosystem for identity, along with great tutorials, documentation and many samples and starter projects.  I’ve worked through the tutorials and have used Quickstart #6 – which in turn uses the asp.net identity database – and it works well.  Using external providers (e.g. Google, Microsoft, Facebook) also works, but I am still trying to reconcile standard username/password logins with multiple external providers.  In effect: how to keep a consistent set of claims for the same user who authenticates today with a username, tomorrow via Google, and the next day via Microsoft.  Kevin Dockx covers some approaches to this in his Pluralsight course.

You’ll want to work through the IdentityServer tutorials first – the setup is not intuitive.  One asp.net core project is the identity server, a second is the api “client”, and the third is the javascript or MVC app.  When a user needs to log in from the javascript or MVC app, they are transferred to the identity server for login.  Once login occurs, a token is created that allows the end-user javascript or MVC app access to the web api.

Testing in Postman is fairly straightforward using this workflow.  You POST to [identityserver_URL]/connect/token with body data (x-www-form-urlencoded) containing client_id, client_secret, grant_type, scope, username and password.  These are all defined in the IdentityServer config except for username and password.  However, the grant_type text is tricky.  With trial and error, I found “password” for the resource owner flow and “client_credentials” for the client credentials flow.  Then I found that valid grant types are listed at “//localhost:5000/.well-known/openid-configuration”.  Once you post to the token service, an access token is returned, and that can be copied into the api call header to allow authorization.  The access token's parts can be visualized by pasting it into the jwt.io site.
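
The same token request can be issued from C#; here is a minimal sketch, where the client_id, client_secret, scope and credentials are placeholders for whatever your IdentityServer config defines:

```csharp
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

public class TokenClientExample
{
    public static async Task<string> RequestTokenAsync()
    {
        using (var http = new HttpClient())
        {
            // Same fields as the Postman body (x-www-form-urlencoded)
            var form = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                ["client_id"] = "myClient",        // placeholder
                ["client_secret"] = "mySecret",    // placeholder
                ["grant_type"] = "password",       // resource owner flow
                ["scope"] = "myscope",
                ["username"] = "joe@msn.com",
                ["password"] = "P@ssw0rd"
            });

            var response = await http.PostAsync(
                "http://localhost:5000/connect/token", form);

            // The JSON body contains access_token, expires_in, token_type
            return await response.Content.ReadAsStringAsync();
        }
    }
}
```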

Everything worked well until I attempted deployment.  Then I became stuck on an error, which started as a 500 internal server error.  Once I added logging to file (NLog.Extensions – look for the resulting log files at c:\temp\nlog…), I found IDX10803: Unable to obtain configuration; it could not open the /.well-known/openid-configuration.  This worked on the dev machine with localhost:5000, but on the server, I needed to use a full URL.  That was easy enough to configure using appsettings.production.json, but the configuration was still not being found.  After spending hours, it turned out that I was trying to use //myUrl/identityServer, and the URL was not being found.  Instead, I needed to include the scheme: https://myUrl/identityServer (or http://).

One additional issue also took me several hours to figure out.  The IdentityServer example projects use “services.AddMvcCore”, not “services.AddMvc”.  As I learned, AddMvcCore is a barebones subset of the framework.  This worked fine until I started to add additional functionality, such as Swashbuckle (Swagger), a great api helper utility; while following examples, I could not get it to work.  Finally, once I changed the configuration in Startup to “services.AddMvc”, all worked.

There are multiple road blocks to understanding IdentityServer, OAuth2 and OpenID in asp.net.  As of November 2016, Pluralsight has 2 courses.  Both use asp.net (not core) and IdentityServer3 (not 4), but the overall concepts – flows/grants, clients, scopes, users – are the same in the newer versions.  I started by watching “OpenId and Oauth2 Strategies for Angular and Asp.Net”, which is very thorough and in-depth but quite overwhelming; rather than an introduction, it is more of an “everything you ever wanted to know about IdentityServer”.  I then watched “Using Oauth to Secure Your Asp.Net Api”, which was pitched at a more introductory level and was easier for me to get my head around.  In retrospect, I would watch this one first.  He does, however, recommend using the resource owner password flow for Xamarin mobile apps authenticating with the identity server, and this may be insecure due to the transfer of the client username and password; I think another authentication flow may be better for that case.  The resource owner flow appears to be ok for the api, since the api is on the same server as the identity server.

It took me a while to understand that access tokens are just for access.  They do not include user claims (e.g. username, email…).  To get user claims, you need an id token, or you can tack claims onto the access token, but this requires the resource owner flow.  In addition, understanding how roles fit into the new claims world was not intuitive, but it is explained here.
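
To make the roles-as-claims point concrete, here is a minimal sketch of reading the mapped claims inside an api controller (NotesController and the "admin" role name are hypothetical):

```csharp
using System.Security.Claims;
using Microsoft.AspNetCore.Mvc;

public class NotesController : Controller
{
    [HttpGet]
    public IActionResult Get()
    {
        // With NameClaimType/RoleClaimType mapped (see the options below),
        // the standard ClaimsPrincipal helpers resolve as expected.
        var userName = User.FindFirst(ClaimTypes.Name)?.Value;
        var isAdmin = User.IsInRole("admin");

        return Ok(new { userName, isAdmin });
    }
}
```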

It took me days to figure out how to use asp.net core identity (and an EF/SQL database) with IdentityServer.  Most of this is well laid out here.  But what took me the longest was getting the username (user email) and role (e.g. admin) passed as claims to the client api.  I created an IdentityServer client with AllowedGrantType = “ResourceOwnerPasswordAndClientCredentials” to do this.  Then create a custom scope that uses the following claims (as described here):

Claims = new List<ScopeClaim>
{
    new ScopeClaim("name"),
    new ScopeClaim("role")
}

As an alternative, you can add “IncludeAllClaimsForUser = true” to the custom scope, but it adds additional claims I do not need.

In the api, I added the following to Startup.cs Configure.  This stops the default Microsoft JWT claim-type mapping, which tends to mess a few things up:

JwtSecurityTokenHandler.DefaultInboundClaimTypeMap.Clear();

I also added the following lines to the api's IdentityServer setup section in Configure:

app.UseIdentityServerAuthentication(new IdentityServerAuthenticationOptions
{
    Authority = authority,
    ScopeName = "myscope",

    // map the standard .NET claim types so Name and Role resolve
    NameClaimType = System.Security.Claims.ClaimTypes.Name,
    RoleClaimType = System.Security.Claims.ClaimTypes.Role
});

Now that the api can identify the username, I can use it in database fields to record the author of entries.  I created a separate UserRepository within the api that calls the asp.net identity database to obtain additional user info, which can be joined to the other tables.  This is a bit clunky, however.  When the api held both a “notes” table and a “users” table, note rows had an author foreign key that pointed to a user in the users table, so a query for notes could automatically include information about the user.  With the users table in another database, there instead need to be two queries and a join.
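
Roughly, the clunky two-step looks like this; a sketch only, where _noteRepository, GetAllAsync, Title and AuthorId are hypothetical names, while FindDictionary is the custom method from the testing post above:

```csharp
// Query 1: notes from the api database.
var notes = await _noteRepository.GetAllAsync();

// Query 2: authors from the separate identity database,
// keyed by username (email).
var users = _userRepository.FindDictionary(
    notes.Select(n => n.AuthorId).Distinct().ToArray());

// In-memory join back onto the notes.
var result = notes.Select(n => new
{
    n.Title,
    Author = users.ContainsKey(n.AuthorId) ? users[n.AuthorId] : null
});
```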

As an alternative, I could create a SQL Server view inside the notes database over the table in the users database (using specific column names instead of the *):

CREATE VIEW [dbo].[vUserMirror]
AS SELECT * FROM [users].dbo.[aspnetusers];

The users view could then be linked via foreign key to the notes, and a notes query could once again include user info.  The tough part is how to create this view automatically with migrations.  Another option is to create a data project that holds all the data files, including identity.  Client projects could then use the data project, and each could extend ApplicationUserDbContext with its own database implementation.  In this way different client projects could share the same central identity database.  However, it seems fragile: as more tables are added for different client projects, the database would grow, and migrations for one client project might break the data for another.  Still need to work with this.  I have searched online for examples of how to solve this and have not found a solution.

The second problem is that now I have two DbContexts inside the project, and migrations are no longer as straightforward.  Add-Migration inside the Package Manager Console will give a cryptic error: “Unrecognized option '--build-base-path'”.  However, if the full non-PowerShell command is issued (“dotnet ef migrations add mymigration”), the error becomes “More than one DbContext was found. Specify which one to use. Use the '-Context' parameter for PowerShell commands and the '--context' parameter for dotnet commands.”  So, with a project containing more than one DbContext, use:

  dotnet ef migrations add myMigration -c myDbContext
  dotnet ef database update -c myDbContext


I’ve been watching as IdentityServer4 rapidly nears a full 1.0 release (as of early November it is in RC3).  I am amazed at the productivity and efficiency of Dominick Baier and Brock Allen, and the others working with them.  It seems every time I access their GitHub repo, they made their last changes just hours before.  Their documentation pages are stellar, as are their samples.  I wish their London SDD Conference workshop allowed for online subscription viewing.


C# Scripting

C# is my favorite coding language.  It seems crazy to master another language (e.g. Python, PowerShell) just to write single-file scripts for isolated tasks.  I can get by in these other languages (with the help of Stack Overflow and Google), but scripting in C# would be so much faster and easier.  And now C# scripting works very well within Visual Studio Code (with some set-up).

Step 1: Install ScriptCS

Step 2: Install SVM

Step 3: In Visual Studio Code, install the extensions: C# for Visual Studio Code (omnisharp) and ScriptCSRunner.  Enable the extensions.

Step 4: Bug workaround until asp.net core version of Omnisharp  C# supports scripting.  See here

Step 5: Reboot the machine.  Pick a file folder and start VS Code.  Create an empty project.json file containing only: {}.  Create a C# script file with a .csx extension.  Type C# code and run it with Ctrl-Shift-R.  Select a few lines and run just those with Ctrl-Shift-R.  Instructions are here.
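
As a quick sanity check, a trivial .csx file like this can be run with Ctrl-Shift-R (the file name is arbitrary):

```csharp
// hello.csx – plain C# statements, no class or Main required
using System;
using System.Linq;

var squares = Enumerable.Range(1, 5).Select(n => n * n);
Console.WriteLine(string.Join(", ", squares)); // 1, 4, 9, 16, 25
```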

 

So far, importing C# libraries into C# script files while working in VS Code is the only problem I’ve run across, and it is as yet unsolved.


The dark side of Asp.Net Core: Deployment – Part 2, Databases

In the last episode of asp.net core deployment, we saw the author struggle with, and nearly give up on, a basic asp.net core deployment to IIS; then he magically slays the evil giant, the app works and all is well again.  Now, days later, we return to find the author, with much less hair (it’s all been pulled out), struggling with a new foe: asp.net core with SQL Server database deployment.

In this episode, I will spare you the gory details of hair loss and just highlight the steps.  Before getting started, I will channel Andy Rooney to complain about the Start icon on Windows Server 2012.  The one in the bottom left corner that only appears on hover.  Why is that?  Why not show the icon without hovering (or at least give the option to show it)?  Alright, enough distraction; move along.

Deploying an asp.net core sql server database:

Step 1. Create a basic asp.net core mvc app with identity (individual user accounts).

Step 2. Create a database on the server in SQL Server.  FYI: SQL Server Express can be used on the server for small databases (<10 GB).  Set up permissions for the asp.net core “IIS AppPool\[poolName]” or Network Service or both.  The process is described well here.

Step 3: Formulate a sql connection string for the server instance (example here).

Step 4: In the top directory of the asp.net core app, duplicate the appsettings.json file and name the copy “appsettings.production.json”.  In the project.json file, add this filename to the “publishOptions.include” array.  In the “appsettings.production.json” file, change the default connection to the one created in Step 3.  The basics are described here.  As a side note, data in the appsettings file can be accessed either with objects or, given json {Identity:{Authority:'MyAuthority.com'}}, with:

Configuration.GetSection("Identity:Authority").Value
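
For illustration, a minimal appsettings.production.json might look like this; the connection string and authority values are placeholders to be replaced with your own:

```json
{
  "ConnectionStrings": {
    "DefaultConnection": "Server=myServer\\SQLEXPRESS;Database=myDb;Trusted_Connection=True;MultipleActiveResultSets=true"
  },
  "Identity": {
    "Authority": "https://MyAuthority.com"
  }
}
```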

 

Step 5: Change the Startup.cs file to get an injected instance of the DbContext, then use it to call the Migrate function on the database (described here).  If the database is missing or outdated, it will be updated; if it is already up to date, nothing will change.

public void Configure( ... ApplicationDbContext dbContext)
{
    ...

    // apply any pending migrations at startup
    dbContext.Database.Migrate();

    app.UseIdentity();
}

Step 6: Publish the app in Visual Studio.

Step 7: Navigate your browser to the URL of the deployed app.

It fails (at least it did for me) with a 500 internal server error.  It is not fixed by changing web.config to allow full error messages in the browser, nor by changing the Startup.cs Configure method to call app.UseDeveloperExceptionPage() and app.UseDatabaseErrorPage().  My error, at least, occurred too early (during the Startup.cs Configure method) and would therefore not show up in the browser with any amount of coercion.

Instead, on the server, open a command window (or powershell window) in the asp.net core application’s root directory and type “dotnet yourapp.dll”.  Now all the diagnostic info is there with a full call stack.  Fix the errors.

Done.  Deployed app with database works.  Another giant slain, another notch in the armor.  Glue your hair back in place, ready for the next frustration.

 


Rapid Real Data

On a recent night I saw a critically ill, hypotensive woman with prior ectopic and periumbilical abdominal pain.

I completed my exam including a FAST scan, had two saline lines running wide open with systolic pressures at 60-70, and made a call to OB as a heads-up.

The serum pregnancy test was pending.  I had just heard Michelle Lin, MD on EM:RAP talk about a rapid bedside whole-blood pregnancy test (using a urine pregnancy test kit), based on this article: “Substituting whole blood for urine in a bedside pregnancy test,” which showed as good or better efficacy than urine – in 10 minutes.

I inquired about getting this test at our facility, and the initial response was that it was a “VBI – very bad idea,” based on an opinion blog post without references, as well as an informal survey of the decision maker's peers.

I was floored that I had presented a compelling article for a cheap, rapid test with good efficacy in the face of critical illness, and it was refuted with opinion.  This is not isolated.  I am excited that our medical practice is more and more based on evidence, and frustrated that the key obstacle to implementation is based not on science, but on preconceived notions and opinion.

This is not a critique of the opinion-based blog.  It is a critique of not recognizing blog posts as opinion, and of how that differs from case-control studies.  I am myself statistically challenged, but this fundamental difference I understand.  And reader, you should recognize that what I write here is 100% my opinion, and you should not base any medical decision on it.  I also understand that research is messy and can easily be misinterpreted.  So focus on the quality of the article and the evidence, and contrast it with similar journal articles.

My wife, who is statistically adept, works in public health for the state and she frequently has conversations about data and evidence that baffle me. My preconceived notion of public health is that they are the purveyors of cohort health data – analysis, interpretation, forecasting, recognition of patterns. Yet she describes a wide variance in the understanding in public health of data acquisition and evaluation (my words not hers) – basic concepts are lacking, and decisions are based on ancient mantras.

Ten years ago when I looked to Cochrane for advice, most of what I found was “further study needed.” Now, that landscape is changing, and I find more and more guidance on what does and does not work when subjected to high quality study. I feel as if evidence based medicine is actually better directing me today.


The dark side of Asp.Net Core: Deployment

To host my angular 2 application (fpnAngled) from the last post, I used the standard asp.net framework (4.6).  However, as I was getting ready to implement IdentityServer, I realized that asp.net and angular 2 routing were not working together.  I could not get angular 2's html5 URL format to route properly through asp.net, despite changing web.config to include rewrite rules for filename and directory.  The angular app would route well until the browser window was refreshed – then it would not be able to find dependencies or reload.  So after hours of tinkering, I explored using asp.net core to host the angular app.  In retrospect (see below), I might have had success with the old asp.net framework by changing the index.html base href.

The new asp.net core framework works very well on development machines.  Great new features, simplified structure, nice integrated web server – all in a small package.  As usual, Visual Studio works well for core development.

So I was surprised when I attempted deployment.  This should be simple.  Well – simple by MS standards.  Creating and configuring IIS virtual directories with correct permissions always seems to take me an hour of tinkering.  Often the problem is in setting security for the virtual directory to allow “IIS AppPool\MyNewPool” in addition to Network Service, my admin account and now WDeployAdmin.  Why isn't some of this automatic when you make a web application?  Why is the AppPool not even listed in the potential users to add in the security tab (and why do I need to click through 5 windows to get the list of users to add to the folder security)?  Sure, I could use Azure and trade one complicated set of controls for another; at least with IIS, it is not all a magic black box.  I do sometimes wonder if Microsoft is trying to upsell Azure by keeping IIS unnecessarily complicated.  And sure, I could work on learning server admin and IIS, but I find it hard enough to keep up with application development.  I do typically get IIS sites working eventually, after a period of trial and error, and without opening too many security holes.

With asp.net core, I expected to build, publish, run a kestrel console app via IIS.  I am not even crossing platforms – this is Windows Server 2012; this should flow easily just as in the 5 minute demo videos (of which at this point there are very few on asp.net core deployment to IIS).

I followed the deployment instructions on the official asp.net core site (which is much improved over the typical less readable MS technical documentation).   Still, the documentation is not exactly sequential and key setup information is found on subsequent  pages.  Surprisingly, a large part  of the instruction page is devoted to setting-up IIS for first-time web hosting and the more difficult aspects are sparsely documented.

So, I initially missed the required installation of asp.net core on the server (it was right below how to make IIS serve websites).  I then tried VS Web Deploy but received a cryptic error, so I tried deploying a zipped file package and extracting it on the server.  But what should be at the app root: the Content folder?  The website folder?  Clearly the zipped file did not have the right deployment structure, so I deleted the files in the directory to start over.  But even deleting files/folders was a chore, since one of the folders needed admin permission to delete (the content folder), and Windows Explorer running under the admin account on the server does not have admin permission.  Even trying to change folder security to grant me permission to delete did not work.  Fortunately, PowerShell worked with Remove-Item.  This, by the way, reminds me of trying to delete multiple nested node_modules directories in Windows Explorer, which also does not work and requires work-arounds.  Why is Explorer so anemic sometimes?  Why, MS, why do you cause me this pain?

So I returned to the web deploy method and realized that the HttpPlatformHandler and Web Deploy v3.6 were first needed on the server.  I installed these, and then web deploy did not like either a self-signed server certificate or my SSL certificate.  I tried making the certificates trusted with mmc.exe.

I tried allowing untrusted certificates, but could not find the correct pubxml at first (it is, by the way, under project/Properties).  That still did not work (it fixed the certificate error, but produced a new cryptic one: disallowed user).  Ultimately, using Rick Strahl's recommendations for PublishMethod and AuthType, I found the following pubxml settings that did work (and did not require allowing untrusted certificates):

 <MSDeployPublishMethod>RemoteAgent</MSDeployPublishMethod>
 <AuthType>NTLM</AuthType>


Really, MS?  Rick Strahl published his fix more than 3-4 months ago and the pubxml default is still not correct.  On the bright side, once web deploy worked, everything fell into line.  The root publish directory now has the web.config, all the DLLs, and 2 folders: refs and wwwroot.  And now the deployed asp.net core site works: static files and the api.

At this point, I was still stuck with my original problem: not being able to navigate to a URL within the angular app.  Asp.net would intercept the URL first, or the angular app would start loading and would look for dependencies at the same level as the passed URL (which would be too deep).  Then I hard-coded the base href to the full start directory (//mywebsite.com/myVirtualDir/), and it all worked.  I will need to see if I can set the index.html base href dynamically (based on prod vs dev settings).  I also used Ben Cull's recommendation to use the file server.  Not sure if all of this is needed, but it works.
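
The hard-coded fix above amounts to a one-line change in the angular app's index.html (the site and virtual directory names here are the example values from the paragraph above):

```html
<!-- base href pointing at the full start directory of the deployed app -->
<base href="//mywebsite.com/myVirtualDir/">
```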

Hours wasted on the seemingly simple goal of passing routing through asp.net back to an angular application to manage.  Nearly a full day wasted, with the scattered litter of failed trial-and-error attempts.  Solved, but not without scrapes and bruises, and a frustrating, unnecessary waste of time.

Log files are also key to digging into “500: Internal Server” errors.  By adding NLog.Extensions to the asp.net core app, log files are saved to the directory of your choice (c:\temp\ by default).

I do not typically bash Microsoft's development tooling.  Sure – I have NOT spoken kindly of MS Outlook, MS browsers, Win 8, Pocket PCs…  BUT I am a true fan of Visual Studio, VS Code, and any language Anders H. wrote (Turbo Pascal, C#, TypeScript; I never used Delphi).  I even like PowerShell (although I use only the simple parts).

So why, Microsoft, why do you make deployment so hard (even to your own Windows servers)?

Addendum 12/6/2016

Ran into more trouble after upgrading to asp.net core 1.1.  The 1.1 version works very well on the development machine, but not so much on the server on deployment.

On the server, I could not get the command tools to work.  First, I could not migrate/update the database (“dotnet ef …”) from the command line.  After much fiddling, I had to manually import the database via SQL Server Management Studio.

Still on the server, I then tried “dotnet myapp.dll” from the command line.  This should run the app in Kestrel and give me lots of debug info for any problems that arise.  Unfortunately, I kept getting this critical error:

(screenshot: Kestrel critical error)

I made sure nothing atypical was being called in Startup.cs or Program.cs.  I updated the Core SDK and runtime on the server to 1.1 (and uninstalled 1.0).  I looked around for CLI updates, but per “dotnet --info” I am using the same CLI version (1.0.0-preview2-003121) on both my server and the developer machine (where the CLI run command works).  The Kestrel error could have been a bit more descriptive.  As a last Hail Mary, I tried calling the api via IIS using the URL, and it worked!

I have no idea why CLI Kestrel would not load (yet work via IIS) and why the CLI database tools (dotnet ef…) would not migrate/update the database.  But I’ll add it to my baffled list and move on.

I do not consider 1.1 releases to be extra-early adoption (these are not betas or alphas), but this has been a painful adoption of a fully released product.  Asp.net core is a great overall product, but watch out for its sharp edges.

One other issue that can be painful to overcome, especially in testing, is CORS.  Rick Strahl has a nice summary of working with this in asp.net core here.