.Net Core on Mac: Connecting to SQLServer

In the previous post I described how I set up a basic development environment using Visual Studio Code. You cannot do much application development without a database though, and while there are many options for database connectivity, since this exercise is about using a Microsoft development stack on a Mac, the database of choice is inevitably SQL Server.

When I started thinking about all this, the only option I had was to install SQL Server on my Mac in a virtual machine. And this is what I did: I installed VirtualBox with Windows 10 LTSB and, in it, installed SQL Server. I won’t go through this process as it is not Mac related. The point of interest is the network connectivity for the VirtualBox guest: in order to be able to talk to the SQL Server inside it, one needs to use bridged networking.


Also, since we are going to connect to SQL Server over the network, TCP connectivity must be enabled.


To test the connectivity you can use command line tools, a Mac client like Navicat Essentials for SQL Server, or connect directly through Visual Studio Code.

There is an extension for this: mssql for Visual Studio Code.

Like all extensions in VSC, it adds a bunch of commands.


The extension works from within the editor: you open a document and change the language mode to SQL.


Then you create a connection profile and connect to the VirtualBox SQL Server. Upon a successful connection, the footer of VSC changes to this:


And now the party begins.

In the opened document you type SQL commands and execute them with the Execute Query command. The results are fetched into another document and the screen splits in two: SQL on the left, data on the right.


From this point on, you have all the tools in place to dive into some real development.

Except that…

connecting to a VirtualBox-hosted SQL Server is not the least resource-hungry solution.

After I set up the above, Microsoft made a lot of good announcements at the Connect() event. Among them was the release of SQL Server for Mac through Docker, which promises a lighter solution. The Docker container runs Ubuntu Linux, so there is no real SQL Server for Mac, just a better workaround. But I will leave this for a future post.

.Net Core on a Mac: Setting the development environment

Let’s begin from the beginning: I installed the .NET Core SDK, Visual Studio Code (VSC) and the C# extension. The tricky part was the SDK, which uses OpenSSL; I had to install it beforehand with Homebrew.

At this point a basic development environment is in place. But since I didn’t want to develop a CLI application but an ASP.NET MVC one, and since VSC does not provide project scaffolding like its big brother, Visual Studio, I had to install Yeoman for this task (another CLI tool). That, in turn, requires Node.js, so you end up with npm and finally run

npm install -g yo generator-aspnet bower

(Yes, it has to have bower too).

And now everything is ready to start a project. I ran:

yo aspnet

and got

     _-----_     ╭──────────────────────────╮
    |       |    │      Welcome to the      │
    |--(o)--|    │  marvellous ASP.NET Core │
   `---------´   │        generator!        │
    ( _´U`_ )    ╰──────────────────────────╯
    /___A___\   /
     |  ~  |     
 ´   `  |° ´ Y ` 

? What type of application do you want to create? (Use arrow keys)
❯ Empty Web Application 
  Empty Web Application (F#) 
  Console Application 
  Console Application (F#) 
  Web Application 
  Web Application Basic [without Membership and Authorization] 
  Web Application Basic [without Membership and Authorization] (F#)

I chose

 Web Application Basic [without Membership and Authorization]

and was good to go.

Or, was I?

Client side development encompasses tasks like building CSS from Sass or Less, bundling and minifying. I had to accommodate these too. I decided on SCSS, so I had to install Sass:

gem install sass

(Has anyone been counting the package managers used so far? I will provide a count later).

And, per the .NET Core tutorials and documentation, I had to install gulp for Sass compilation (and bundling/minification). Thank God, npm was already in place.

npm install --save-dev gulp
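For reference, the Sass compilation task in a gulpfile can be as small as this; it is a sketch, the wwwroot paths are my assumption based on the default ASP.NET Core layout, and gulp-sass has to be npm-installed alongside gulp:

```javascript
// gulpfile.js — a minimal sketch; the wwwroot paths are assumptions,
// adjust them to your own project layout.
var gulp = require('gulp');
var sass = require('gulp-sass');

// Compile every .scss file under wwwroot/scss into wwwroot/css.
gulp.task('sass', function () {
  return gulp.src('wwwroot/scss/**/*.scss')
    .pipe(sass().on('error', sass.logError))
    .pipe(gulp.dest('wwwroot/css'));
});
```

Running `gulp sass` from the project root then produces the compiled CSS.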

At this point I could open my newly created project (ok, the screenshot was taken later).


Database connectivity would have to wait a bit, until we get the basics straight.

The last piece I had to install was the C# extension. Yes, C# is not supported by default! You need to add it as an extension from within VSC.

VSC is mainly addressed to javascript developers, it seems.

So, to get here, I have used the following package managers:

  • brew
  • npm
  • gem
  • bower
  • nuget (internally in VSC)

and two additional CLI tools:

  • yeoman
  • gulp

Unfortunately, after having done all the above, I found out that gulp will be dropped from the ASP.NET Core project templates in future releases (Bye, bye gulp).

And advancing a little bit with the configuration, I also found that the current project.json is going to be replaced by MSBuild and .csproj files (Bye, bye project.json).
Honestly, this gave me the creeps, not because I have any particular affection for either gulp or project.json, but because it shows a fickleness of ‘heart’ towards the adopted affiliations. If one wants to adopt something new, the last thing he needs is uncertainty.

Having said that, it doesn’t seem to compromise Microsoft’s newly developed commitment to openness, as, today, they announced joining the Linux Foundation and released Visual Studio for Mac (preview).


It’s been quite a few days now that I have been working with the current environment and, apart from some annoyances that I will list below, I am rather happy, mostly because VSC is not just an editor. It has a lot of IDE capabilities, something that I have missed in other, lighter editors, or found too cumbersome to work with.

And since IntelliSense is one of my main reasons for satisfaction, it is its shortfalls that frustrate me the most:

  • Version management in project.json is messy. IntelliSense suggestions are sometimes wrong (I got hints for versions 2.0.0 and 3.0.0 when the package is still at 1.x.x); other times they do not show up at all.
  • Enabling Visual Studio Code tag helpers did not help. Tag helper IntelliSense does not work. I posted a relevant question on Stack Overflow which, at the moment of writing, remains unanswered.
  • After correcting misprints or wrong references in the code, there are artifacts left behind (red squiggly lines underlining a problem that does not exist anymore). They go away with the first compilation though.

But with the current environment I have made a lot of progress in two areas: after creating the basic views and controllers, I spent a lot of time on route configuration and localization. The goal, I remind those that haven’t read my previous post, is to migrate the company website from WordPress to ASP.NET MVC.

.Net Core on a Mac

It’s been ages since I blogged anything, let alone anything technical. Since I am in the process of experimenting with ASP.NET Core on my Mac, I thought I’d take the opportunity and log this journey here.

So far I have done three things:

  • Set up the development environment

This isn’t as straightforward as just installing Visual Studio Code. To have scaffolding one needs to rely on CLI tools, and, to do some client side development, on the usual suspects: bower, jQuery, bootstrap etc. Which means you need to spend a lot of time with the Terminal.

  • Set up a development database

While one can experiment with SQLite or MySQL, I wanted the real Microsoft thing, SQL Server, and since it isn’t available for Mac I used VirtualBox with a Windows 10 LTSB guest, where I installed SQL Server Express.

To connect to the database from the host, the VirtualBox guest has to be on bridged networking and SQL Server should be accepting TCP connections.

  • Found a relatively simple project that entails the most common workflows.

Our company’s website is multilingual and WordPress based (no wonder). While the blog parts serve their purpose nicely, the pages are bloated (HTML-wise) and run a lot of JavaScript code (for a reason), so they could benefit from a slimming diet.

So, I thought, why not try to migrate the WordPress pages (not the posts) to an MVC site based on ASP.NET Core? To make things more interesting, I want to add some dynamic content too, pulled from our app’s database (why would I bother with SQL Server otherwise?).

And here I am. So far, I have made some progress which I will relate in subsequent posts. This post is only an introduction to the theme. If you have interest in such experiments, stay tuned.

Bookmeta: a needed update

Yesterday, while I was trying to show a friend the calibre plugin I created to extract and retrieve Greek book metadata, I saw, to my surprise, that it was not returning much. Obviously the biblionet page had changed and the script the plugin relies upon for metadata retrieval needed an update.

Fortunately, it was just a couple of hours of work; the new version is available and functioning. Enjoy!

Responsive Images: the solutions so far and a mixed new one

I read the other day this fine article by Mat Marquis about his experiences, searches and conclusions on the issue of responsive images. It sparked my interest to look at the subject a bit more thoroughly.

What are responsive images? Or, better, what should they be? For a more elaborate explanation read Mat’s article. For me, it suffices to say this: an image is considered responsive when it adapts both to the size of a viewport and to the bandwidth of a device. Usually, these two go hand in hand: the smaller the device, the higher the probability that it is connected to a slow network (like our mobile phones).

It’s this second requirement (bandwidth) that makes the issue of responsive images complex, because there is no universally accepted methodology or technology to measure the relative abundance or scarcity of this resource.

So, the rule here is simple to understand, yet difficult to implement: the less bandwidth we have at our disposal, the smaller the file size of an image should be.

This rule, though, is meaningless if taken in isolation. A desktop computer with a huge screen and a sluggish network connection does NOT need small file size images, as this can adversely impact the quality of a web page rendered to its full extent. We should be talking about smaller file sizes in conjunction with the requirement for smaller image dimensions.

Enough said about theory. What are the proposed solutions to the problem?

I have traced four kinds of solutions:

  • CSS based
  • Script based
  • Server hacks
  • A combination of two or more of the above.

Surprisingly, there is no pure HTML based solution. And this is what Mat pinpoints in the aforementioned article, as well as what he considers the road ahead.

Here is an example that highlights what the HTML should look like according to Mat:

   <picture>
     <source src="high-res.jpg" media="min-width: 800px" />
     <source src="mobile.jpg" />
     <!-- Fallback content: -->
     <img src="mobile.jpg" />
   </picture>

What do we have here? A proposal for HTML5 to treat the image tag much like the video or audio tags, along with subordinate source tags whose media attributes allow us to load different images utilizing media queries.
It’s a very elegant solution with two drawbacks:

  • If we ever come to the point where bandwidth is not an issue, the solution will become irrelevant: the current img tag with the resizing it allows, is less verbose.
  • It’s a solution currently out of our control.

Let’s now take a closer look at the existing solutions.

CSS Based Solutions

There is no way to set the source attribute of an image through CSS, so this approach relies on a trick: use a substitute for the img tag that can be set through CSS. The handy one is the background-image property on block elements. Media queries are used to determine which image to assign to this property. For example:

@media screen and (max-width: 480px) {
 div.someimage {
  background-image: url(small.jpg);
  background-size: auto;
 }
}
@media screen and (min-width: 481px) {
 div.someimage {
  background-image: url(big.jpg);
  background-size: auto;
 }
}
You would probably need some more rules here to make it work in a real case, but this is for demonstration purposes only. When the page loads, the media queries determine which part of the stylesheet is applicable, and this in turn, determines which image to fetch.
The problem with this solution is that it is not semantically correct and that it alters the behavior the user expects from images (i.e. you can’t right click and download them). Let alone the burden it poses on the web developer and on the users that will create future content.

Script Solutions
The information about the two (or more) image files needed could reside inside an image tag with the use of data attributes: <img src="small.jpg" data-bigimage="big.jpg" />
Once the DOM has finished loading, a small script can be put to work to a. determine if there is a need for a bigger image and b. if yes, substitute the value of the src attribute with the value of the data-bigimage attribute.
This solution is not optimal, since a desktop computer will have to load two images (small.jpg first and big.jpg later) while only one is needed.
To save bandwidth and speed things up, the image tag could come without an src value:
<img data-smallimage="small.jpg" data-bigimage="big.jpg" />
With browser detection or media queries we determine which one of the data attributes fits our purpose and substitute it into the image’s src attribute. Then only the desired image is loaded.
But this solution fails when the browser does not support scripting, or the user disables it, or when, for some reason, the script stops executing before reaching this point.

      if (window.screen.width > 480) {
          $('img[data-bigimage]').each(function () {
              this.src = $(this).attr('data-bigimage');
          });
      }
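The choice between the two data attributes can also be factored out into a small, testable helper; a sketch (the 480px breakpoint is illustrative, and the jQuery application below it is commented out since it needs a DOM):

```javascript
// Decide which data attribute to read for a given screen width.
// The 480px breakpoint mirrors the media queries used elsewhere (illustrative).
function chooseAttr(screenWidth) {
  return screenWidth > 480 ? 'bigimage' : 'smallimage';
}

// Applied with jQuery once the DOM is ready:
// $('img[data-smallimage]').each(function () {
//   this.src = $(this).attr('data-' + chooseAttr(window.screen.width));
// });
```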

Server based solutions
If the server has a means of knowing upfront what kind of device it will be servicing, it can also determine what kind of images to send. Device recognition is a shaky issue, mostly because it relies on information passed from the browser to the server, something that can be altered or forged. But, assuming we have it, the method would work as follows:
The image tag’s source is set to a high resolution image.
If the server detects a small device, a script kicks in to resize the image, serve it and cache it for future use.
The benefit of this approach is that it requires no changes to the HTML. If device recognition fails, a big image will be sent to the device, which might be slow to load, but the page won’t break. (For more info on this approach, look at Adaptive Images.)

Mixed Solutions

JavaScript is the extra ingredient most often needed in conjunction with another approach. So, for instance, in the server based solutions mentioned above, one could determine the device dimensions through a cookie set by JavaScript in the page’s HEAD:

document.cookie='resolution='+Math.max(screen.width,screen.height)+'; path=/';
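On the server, that cookie can then drive the choice of image. A sketch of the decision logic as a pure function (the 480 threshold and the file names are illustrative assumptions, not part of any particular framework):

```javascript
// Pick an image variant based on the 'resolution' cookie set above.
// The 480 threshold and the file names are illustrative assumptions.
function pickImage(cookieHeader) {
  var match = /(?:^|;\s*)resolution=(\d+)/.exec(cookieHeader || '');
  var resolution = match ? parseInt(match[1], 10) : 0;
  return resolution > 480 ? 'big.jpg' : 'small.jpg';
}
```

If the cookie is missing (first request, or cookies disabled), the function falls back to the small image, which keeps the page light by default.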

A new(?) approach

If I were to choose one of the above solutions, I would go for the CSS one. This is both a matter of personal preference and because media queries are a really handy and unobtrusive way to determine the device.

So, to mend the shortcomings of the solution presented above, I would augment it with the help of JavaScript. I would let the browser determine the device through media queries and load the images as background images of div elements, and then run a script to change these elements into proper images. To determine which containers’ background images should be ‘translated’ into proper images, I would use a distinct class (‘.responsive’ in the example below).

$('div.responsive').each(function () {
    var imgsrc = $(this).css('background-image');
    imgsrc = imgsrc.substr(4, imgsrc.length - 5); // strip the url( and ) wrapper
    $(this).replaceWith('<img src="' + imgsrc + '" alt="" />');
});

The above are not meant to serve as a tutorial of some sort, nor as a comprehensive survey of the solutions proposed. Writing a blog post has always been, for me, a way to put some order into my thoughts and to clarify obscure issues through the valuable feedback a post attracts. And this is precisely what this post serves.
The responsive images problem is an open problem. The solution to pick should be the one that best fits your type of application and diminishes the shortcomings in your particular case.

Opacity in Internet Explorer

This is an ancient problem: older versions of IE do not support the opacity CSS property. Instead, IE8 uses -ms-filter and IE prior to 8 uses filter. Also, in IE, one has to be cautious about the positioning of the element the opacity properties apply to: the element has to be positioned for the filter to work properly. Another trick is to use zoom.
Let’s wrap this up in the CSS snippet below:

#page {
  opacity: 0.5;
}
/* IE7 and less */
#page {
  filter: alpha(opacity=50);
  position: relative;
  zoom: 1;
}
/* IE8 specific */
#page {
  -ms-filter: progid:DXImageTransform.Microsoft.Alpha(opacity=50);
  position: relative;
  zoom: 1;
}

It is a pain but it works.

But what I found is that if you try to set these properties dynamically, through jQuery for instance, they are less obedient.

For filter and -ms-filter to be set through jQuery, the element has to be positioned through css and NOT by jQuery.

So one would need something like this:

/* IE less than 9 */
#page {
  position: relative;
  zoom: 1;
}

and then set the opacity from jQuery, branching on opacity support:

if ($.support.opacity) {
  $('#page').css('opacity', '.5');
} else {
  $('#page').css('filter', 'alpha(opacity=50)');
  $('#page').css('-ms-filter', 'progid:DXImageTransform.Microsoft.Alpha(opacity=50)');
}
This is empirical knowledge though; I don’t know why it is like this.

Web 2.0 without javascript?

A couple of days ago I came across this terrifying presentation from John Graham-Cumming.

Although the topics covered weren’t entirely new to me, put together in one presentation they had an impact. I came to wonder if and how the major web 2.0 sites would work if JavaScript were out of the picture.

I decided to run a little test to find out: I disabled JavaScript in my browser and started logging in to such sites to see how they would behave.

Here is the outcome for the three most important for me.

a. Twitter

Most of the functionality was in place: the timeline, friends and followers. Of the various buttons on the tweets and the timeline pages, the reply button did work, but not the fav button.

The direct message and delete buttons did not work either. Same with the drop down where you select a follower to DM and, finally, the followers and trending topics buttons.
But all these are rather trivial, because most of the tweet buttons replicate user behavior (putting the @ sign in front of another user name for a reply, or the d letter for a direct message).
Not being able to fav or, more importantly, to delete is a loss, but not a major one.

b. Facebook
Things are worse on Facebook: while Home, Profile, Friends and Settings are accessible, the inbox and chat are not.
Also, from the bottom bar, the applications menu is inaccessible. Most of the edit links and buttons don’t work either and, finally, status updates, link sharing, photos etc. cannot be submitted.

c. Youtube
Here things are disastrous: without JavaScript you cannot watch the videos! On top of that, you cannot access your account settings or your mailbox. There was no point looking for more.

A small gallery with pics of the failure areas of the above web applications follows

Readburner chicklets for WordPress.com blogs

This is not a how-to blog but, as it is still under construction, I will blog about all the little tricks I apply here that might be of some use to the rest of the wordpress.com folks.

Here is a nice little one.


Readburner is a service that aggregates all blogposts shared in NewsGator Online, Google Reader and Netvibes.

By counting the number of shares, it creates a popularity list. In effect, this is a truly democratic social bookmarking system, without the hiccups of Digg and its likes.

Readburner provides its users with some nice widgets in the form of little chicklets that display essential statistics.

The chicklets come in three flavors:

  • Items of a specific user (i.e. his share page) registered with Readburner.
  • Items authored by someone.
  • Items of a specific source (say, a blog with many authors).

Readburner provides some JavaScript code that allows anyone to generate his own chicklets.

Now, I wanted to put such a Readburner chicklet in my sidebar, but I stumbled on the usual wordpress.com problem: no JavaScript allowed.


Since JavaScript is not allowed, we have to find a way of displaying the chicklet through pure HTML.

Let’s see what a chicklet is composed of:

  • an image (the colored rectangle of the chicklet)
  • a number (the counted items)
  • a link (the link to the relevant page in readburner)

As a matter of fact, the number is part of the image, so we have to find just two things: the image URL and the link URL.

  • User.

(The number here is mine, from my Google Reader shared items URL: http://www.google.com/reader/shared/11232096483858520222. You have to figure out yours and replace it.)

The required urls are of the following type:

Image: http://readburner.com/fire/shares.gif?user=11232096483858520222

Link: http://readburner.com/u/11232096483858520222

and the actual html code should be:

<a href="http://readburner.com/u/11232096483858520222" target="_blank" title="">
<img src="http://readburner.com/fire/shares.gif?user=11232096483858520222" alt="" />
</a>


which produces:


  • Author:

Image: http://readburner.com/fire/shares.gif?author=Nikos%20Anagnostou

Link: http://readburner.com/u/Nikos%20Anagnostou

and the actual html code should be:

<a href="http://readburner.com/u/Nikos%20Anagnostou" target="_blank" title="">
<img src="http://readburner.com/fire/shares.gif?author=Nikos%20Anagnostou" alt="" />
</a>


which produces:


  • Source:

Image: http://readburner.com/fire/shares.gif?source=webtropic

Link: http://readburner.com/source/webtropic

and the actual html code should be:

<a href="http://readburner.com/source/webtropic" target="_blank" title="">
<img src="http://readburner.com/fire/shares.gif?source=webtropic" alt="" />
</a>


which produces:


To figure out the proper links for yourselves, first, of course, you have to add your shared items URL to Readburner. Then replace your name, blog name or user id in the above code and paste it into a text widget in WordPress.

As I said, I am using Google Reader. The other services might have slight variations in the URL schemes; I did not bother to check. Please do so for yourselves.

Good luck and… happy sharing!