Author Archive

Posted by OmegaMan at September 26, 2018

Category: C#

This was originally submitted to Amazon as a review, but it was rejected, so I will post the review here. I do not get any money from reviews; I simply felt that the existing Amazon reviews for the dock were all negative, and someone should review it along with the proper way to update the dock.

For HP ZBook 150W Thunderbolt 3 Docking Station 5 Stars.

I received this dock from my work along with an HP ZBook 15 G3. Out of the box I did not notice any issues and everything seemed to work fine, with no updates applied (I received it in September 2018).

But since I was using the laptop/dock with a 34-inch monitor, I wanted to make sure the video drivers were up to date. To update the individual components on the dock, I went to HP’s site and, from the dock’s info page, downloaded two update packages specific to this dock. (See the picture of the final stage of the update, where it checks individual components to verify whether they need updates.) Once I applied that update, the Ethernet port failed on the dock but worked on the machine. The reason is that the Ethernet driver lives on the PC, not on the dock, so the PC was not playing well with the dock; people have assumed it’s the other way around and given bad reviews.

What I discovered, researching and reading from others who reported the same issue on the HP forums, was that the Ethernet port on the dock is different from the one on the laptop and needs a specific driver. While looking for the exact driver, I found that HP, like Dell, has an updater program called HP Support Assistant. (These programs make life so much easier when updating machines these days.)

I let the site load software which recognized my PC and downloaded the HP Support Assistant. Once installed, it correctly determined the make/model and warranty status of the machine. (See pic 2.)

Most importantly, when one allows it to scan the PC, it finds all the drivers which need updating and updates them with minimal help from the end user.

Since it was a new laptop, it had 16 items that needed to be updated, the Ethernet port driver among them. After all those updates, everything worked: the machine, the dock, and its Ethernet port.

Long story short: other people had real problems but never updated the device and PC so they could work together properly. If you use the HP Support Assistant and keep current with the updates, then at least as of this writing, all is great with this dock.

The only con I have noticed is that when switching virtual windows on my 34-inch monitor, it is not as fast as when I switch on a different computer with a dedicated graphics card.

I do not work for HP/Amazon or any company entity associated with the dock or PC.


Posted by OmegaMan at March 24, 2017

Category: C#

Not finding a satisfactory method to detect invalid Azure container names, I wrote my own, easier (?) to read, regex pattern for .Net.

^                       # Anchor; beginning of line. 
(?!.+--)                # String ahead does not contain two dashes 
(?!^-)                  # String ahead does not start with a dash 
(\$root|[a-z\d-]{3,63}) # Either the literal $root or 3 to 63 lowercase letters, digits, or dashes.
$                       # Anchor; end of line

 

Using negative lookaheads (which are zero-width and consume nothing) to handle the double-dash and leading-dash situations makes the pattern easier to read.

The first two rules say that, from the beginning of the line, the string ahead must not contain two consecutive dashes and must not start with a dash. Once those pass, match either the literal text “$root” or 3 to 63 characters consisting only of lowercase letters, numbers, or dashes.

Remember to run the above pattern with RegexOptions.IgnorePatternWhitespace, which allows it to be commented. Or run it without any options by specifying it on one line:

^(?!.+--)(?!^-)(\$root|[a-z\d-]{3,63})$
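
For reference, here is a minimal C# sketch (my own example, not from the original post) showing how the commented pattern might be compiled with RegexOptions.IgnorePatternWhitespace and used to validate names; the ContainerNameValidator class name and the sample inputs are made up for illustration:

using System;
using System.Text.RegularExpressions;

static class ContainerNameValidator
{
    // The commented pattern from above; IgnorePatternWhitespace lets the
    // engine skip the whitespace and # comments inside the pattern.
    private static readonly Regex ContainerName = new Regex(
        @"^                       # Anchor; beginning of line.
          (?!.+--)                # String ahead does not contain two dashes.
          (?!^-)                  # String ahead does not start with a dash.
          (\$root|[a-z\d-]{3,63}) # Either the literal $root or 3-63 lowercase letters, digits, or dashes.
          $                       # Anchor; end of line.",
        RegexOptions.IgnorePatternWhitespace);

    public static bool IsValid(string name) => ContainerName.IsMatch(name);
}

// Example usage:
// ContainerNameValidator.IsValid("my-container")  -> true
// ContainerNameValidator.IsValid("my--container") -> false (double dash)
// ContainerNameValidator.IsValid("-container")    -> false (leading dash)
// ContainerNameValidator.IsValid("$root")         -> true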

Here are the rules for Container names:

Valid naming for a Container in Azure Blob Storage.

1.  3 to 63 Characters

2.  Starts With Letter or Number

3.  Letters, Numbers, and Dash (-)

4.  Every Dash (-) Must Be Immediately Preceded and Followed by a Letter or Number

5.  All letters in a container name must be lowercase.


Posted by OmegaMan at April 26, 2016

Category: Powershell


While working for a company that used a browser-based VPN, several pinned components had to stay in memory, which caused difficulties because their startup script mostly forced me to re-login. I created this PowerShell script to kill the specific processes, if running, and launch IE to my company’s web page.

# Kill the VPN-related processes if they are running.
Get-Process |
    Where-Object { $_.Name -like "Juniper*"       -or
                   $_.Name -like "dsHostChecker*" -or
                   $_.Name -like "iisexpress"     -or
                   $_.Name -like "Network*"       -or
                   $_.Name -like "dsNcService*" } |
    Stop-Process -Verbose -Force


# Launch IE and navigate to the company page.
$ie = New-Object -com internetexplorer.application;
$ie.visible = $true;
$ie.navigate("Https://secure.MyCompany.com");

If you have any failures, try running this script with admin privileges on the box.


Posted by OmegaMan at June 12, 2015

Category: Entity Framework, Tribal Knowledge, Visual Studio


This is a how-to on getting Entity Framework (EF) versions 5 and 6 to include stored procs and how to consume the resulting entities in code.

  1. In the EF designer choose Update Model From Database.
  2. When the Choose Your Database Objects and Settings page comes up, which allows one to add new tables/views/stored procs, select the stored proc of interest. Remember, the name of the resulting data mapping entity will be the stored proc name with the suffix _Result.
  3. Once the wizard is finished, EF will show the stored proc in the Model Browser. The Model Browser can be displayed by right clicking the EF design surface and selecting Model Browser.
  4. Here is an explanation of what has happened.
    (1) You have added the stored proc into the Stored Procedures / Functions as an item of interest.
    (2) EF has created a function import of the stored proc and placed it into Function Imports.
    (3) If EF was able to determine the *result set entity* it will most likely be in the Complex Types folder.
  5. If the mapping has gone right, you should be able to call the stored proc off of the EF context in code and it will return a list of the complex type xxx_Result (see the sketch just after this list). If it works you will know it, but there could be problems with the mapping.
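
For illustration, here is a minimal sketch of what consuming the function import might look like; the MyEntities context, the GetPowerGroups function import, and the GetPowerGroups_Result complex type are hypothetical names standing in for whatever the wizard generated for your stored proc:

using System;
using System.Linq;

class Program
{
    static void Main()
    {
        // MyEntities and GetPowerGroups are placeholder names; the wizard names
        // the complex type after the stored proc with the _Result suffix.
        using (var context = new MyEntities())
        {
            var rows = context.GetPowerGroups().ToList(); // List<GetPowerGroups_Result>

            foreach (var row in rows)
                Console.WriteLine("{0}: {1} - {2}", row.ResultId, row.GroupName, row.Description);
        }
    }
}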

Mapping Problems and How to Resolve

  • One can delete any object in folders 1, 2, or 3 shown above at any time, then regenerate or create a custom mapping. Don’t be afraid to delete.
  • Sometimes very complex stored procs will not divulge the right mapping of the entity in the result set, and the resulting complex type will cause failures. One way around that is to create a faux data return in the stored proc which leaves no ambiguity for EF.
          1. In the database, change the stored proc as follows. Comment out the meat of the result select and replace it with a one-to-one column stub faux select, such as: “SELECT 1 AS ResultId, ‘Power1’ AS GroupName, ‘Test’ AS Description”. To be clear, you will need to match every column and name.
          2. In EF’s Model Browser delete all things associated with the stored proc in folders 1, 2 and 3 above.
          3. Regenerate all by using Update Model From Database.
          4. Check the results.
  • If the above steps fail, one can always create a function mapping by hand. Be careful not to create duplicates; if you do, delete all and start over.
        • Open up and find the stored proc you inserted into folder #3 above. Right click and select Add Function Import…
        • One can get the column information, change items on the above screen; it is trial and error.
        • You will need to play around with this until you have the right columns for your function import. Be wary of numbered copies of the complex types which may be created from the mapping.

Remember to reset the stored proc back to its original state, removing the faux stub mentioned above.


Posted by OmegaMan at May 28, 2015

Category: .Net, Database, Entity Framework


To achieve cascading deletes, one must specify the cascading deletes on the FK relationships from the top level table in the database. The default is not to cascade.

Here is the visual process in SQL Server Management Studio.

  1. Select the top level table which will handle the delete and right click.
  2. Select design mode.
  3. Right click any row in the design mode.
  4. Select Relationships.
  5. Find all the FK relationships and set them to cascade.

Then, in Entity Framework, update the edmx file after these changes are made so Entity Framework knows about the cascading constraint. Once all this is done, a cascaded delete is possible using Entity Framework.
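
As a minimal sketch (assuming a DbContext-based model with hypothetical Customer/Order entities, where Order has a cascading FK to Customer), the delete itself is nothing special; the cascade happens when the parent delete is issued:

using System.Linq;

class Program
{
    static void Main()
    {
        // MyEntities, Customers, and the key value are placeholder names/values.
        using (var context = new MyEntities())
        {
            var customer = context.Customers.First(c => c.CustomerId == 42);

            // Because the FK was set to cascade (and the edmx was refreshed),
            // deleting the parent also deletes its dependent Order rows.
            context.Customers.Remove(customer);
            context.SaveChanges();
        }
    }
}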

 



Posted by OmegaMan at May 25, 2015

Category: Errors, Visual Studio


I began to receive the following error message box in Visual Studio after a restart that followed installing a Visual Studio plugin unrelated to testing (one I have used in the past):

The ‘TestWindowPackage’ package did not load correctly.  … examining the file … AppData\Roaming\Microsoft\VisualStudio\12.0\ActivityLog.xml

Looking at that log showed these errors:

TestWindowPackage.Initialize failed with exception … System.InvalidOperationException:

Loading MEF components failed with the following exception:
The composition produced a single composition error. The root cause is provided below. Review the CompositionException.Errors property for more detailed information.

No exports were found that match the constraint:
    ContractName    Microsoft.VisualStudio.TestWindow.VsHost.PackageContainer
    RequiredTypeIdentity    Microsoft.VisualStudio.TestWindow.VsHost.PackageContainer

 

To resolve the issue I only needed steps 1 and 2, but I am showing the remaining steps in case they help.

  1. Close all instances of Visual Studio.
  2. Delete all files in AppData\Local\Microsoft\VisualStudio\12.0\ComponentModelCache. Note that this directory is under Local, not under Roaming where the activity log file lives.
  3. Restart, if that doesn’t work then start over with step 4.
  4. From a Visual Studio Command Line do this operation: devenv /setup /ResetSkipPkgs See (/Setup (devenv.exe)) for more information.

Posted by OmegaMan at October 14, 2014

Category: .Net, Research Ideas, WPF, Xaml


I was recently asked to provide my humble opinion on the future of WPF, literally how it should be charted going forward. This is my utopian vision for WPF.

I am going to answer that question as if I had received the Wonka Golden Ticket and was able to go to Microsoft, ultimately becoming a sort of Scott Gu of Microsoft and directing the future of WPF; here is what I would order the Oompa Loompas to do. To be clear, I only use the term Oompa Loompa to mean a hard, diligent worker at Microsoft and not any short and blue person or a minion doing one’s evil bidding; heck, a combination of both but better paid.

Before one can come to grips with my utopian vision of what WPF could be, one has to look into the past to see what Xaml technologies have provided the Line of Business (LOB) developer. For that is what I am, a Line of Business developer who gets paid to create business applications for the highest bidder. My ego is inflated enough to think that my services actually go to a “highest bidder”, but let us not touch that fourth wall of my reality, ok?

Where Have You Been Xaml?

Currently a LOB developer is basically in charge of bringing data and all related business rules to the corporate environs. Historically the vehicle for that has been Xaml, used in WPF, Silverlight and, more recently, Windows 8 tablets. All of them use subtle flavors of Xaml to achieve that work. Xaml is great because, through the use of MVVM and a kick-ass way of leveraging a graphics-based solution for displaying that data, it has provided the developer with a rich toolset, bar none, in the industry. Let me repeat that: bar none, people.

What is my anecdotal evidence?

I was tasked with bringing such a graphical solution to a cable industry partner. For anyone not aware of the cable industry, it is now a few providers spread out over the country, if not the world, and they are frankly a Java shop. The Oracle flag flies over their realm, and very few if any .Net projects are done within the differing companies.

With that Java backdrop in mind, the small company I ended up working for was seeking to bring tools to the major cable vendors. One of the tools needed a graphical front end to allow a back-and-forth way of editing business-related data for their end clients. There was no Java technology, and still isn’t, which could fill that gap; only a Xaml solution, in the form of a Silverlight tool, provided the best working solution, and they took it.

Take the Silverlight out of that story and keep Xaml and WPF in mind. What they needed was a way to bring a rich client experience to the end user, and there was a viable Xaml-based solution available to them.

Xaml As a Means and Not an End

The LOB developer needs to be able to bring that graphical data experience to the table. “What about the Javascript solutions out there?”, one might ask. Javascript and HTML 5 solutions have come a long way, but frankly any developer who has spent any time dealing with that non-strongly-typed environment comes away with a bad taste in their mouth, due to the unwieldy nature of any app which grows past a certain size and can no longer be managed.

Javascript solutions are frankly unmanageable at a certain point and any developer knows and dreads that.

If only businesses could understand that WPF with a managed language is the best fit for large-scale applications and for providing a rich, yes rich, client experience to the end user, it would go a long way.

WPF’s Achilles’ Heel and How the Cloud Could Rescue It

Very few businesses want to install and update applications for the end user. Period, end of story.

That is why Silverlight was so appealing to both developers and businesses. Not that it would work cross browser or anything else, just that it provided a vehicle to supplant IT and avoid having to install applications on locked-up corporate PCs.

From the cloud came a solution to avoid the IT department and it was, and still is, the best way to bring data and more critically the viewing of said data to the corporate person.

WPF Future

That leads me here, to my goal for WPF’s future. If I could provide the corporate end user with a rich graphical experience without having to install it, that would be my goal.

How would I achieve it?

I would bake a WPF visual experience into IE, where IE, under certain approved circumstances, would provide a gateway to a rich client experience. As a developer I don’t care what IE has to do, just that it would allow me to interact with a client at the end of the tunnel from the server without having to get my feet wet in Javascript and HTML 5.

As a LOB experience, I don’t care that it would not be available outside IE, because my target audience is required to have IE for this business purpose. In as much as we provide a PC, not a Mac, to the corporate user, we provide a specific browser.

If I could provide the best of WPF in a better browser experience (not Silverlight), that is the task I would have the Oompa Loompas take on.

WPF Realities

The reality is that someone like a Scott Gu would have to champion such a project at Microsoft. This person would have to sell the idea of a managed GUI environment (similar to C# being managed and not C++)  to the browser based LOB customer as a WPF Future solution.

But I truly believe it would be a game changer in the business world…just that I don’t have the golden ticket and my voice is just one out here on the Western Front of the internet at this time.


Posted by OmegaMan at March 31, 2014

Category: .Net, C#, WCF, XML


When creating a C# WCF service (version .Net 3.0 and above) there may be value in identifying the clients (consumers) that the web service is providing operational support to. This article demonstrates, in C# and config Xml, how to have clients identify themselves and pass pertinent information within the soap message’s header. That information in turn will be processed by the web service accordingly.

Client Identifies Itself

The goal here is to have the client provide some sort of information which the server can use to determine who is sending the message. The following C# code will add a header named ClientId:

var cl = new ActiveDirectoryClient();

var eab = new EndpointAddressBuilder(cl.Endpoint.Address);

eab.Headers.Add( AddressHeader.CreateAddressHeader("ClientId",       // Header Name
                                                   string.Empty,     // Namespace
                                                    "OmegaClient")); // Header Value
cl.Endpoint.Address = eab.ToEndpointAddress();

// Now do an operation provided by the service.
cl.ProcessInfo("ABC");

What that code is doing is adding an endpoint header named ClientId with a value of OmegaClient to be inserted into the soap header without a namespace.

Custom Header in Client’s Config File

There is an alternate way of adding a custom header. It can be done in the client’s Xml config file, so that every message sent includes the custom header, by specifying it as part of the endpoint like so:

<configuration>
    <startup> 
        <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" />
    </startup>
    <system.serviceModel>
        <bindings>
            <basicHttpBinding>
                <binding name="BasicHttpBinding_IActiveDirectory" />
            </basicHttpBinding>
        </bindings>
        <client>
          <endpoint address="http://localhost:41863/ActiveDirectoryService.svc"
              binding="basicHttpBinding" bindingConfiguration="BasicHttpBinding_IActiveDirectory"
              contract="ADService.IActiveDirectory" name="BasicHttpBinding_IActiveDirectory">
            <headers>
              <ClientId>Console_Client</ClientId>
            </headers>
          </endpoint>
        </client>
    </system.serviceModel>
</configuration>

The above config file is from a .Net 4.5 client.

Server Identifies Client Request

Finally, the web service reads the custom header, distinguishes between WCF clients, and processes the request accordingly.

var opContext = OperationContext.Current; // If this is null, is this code in an async block? If so, extract it before the async call.

var rq = opContext.RequestContext; 

var headers = rq.RequestMessage.Headers;

int headerIndex = headers.FindHeader("ClientId", string.Empty);

var clientString = (headerIndex < 0) ? "UNKNOWN" : headers.GetHeader<string>(headerIndex);
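
To round this out, here is one hypothetical way the service might branch on that value; the case labels simply reuse the example client IDs from earlier (“OmegaClient” from the code-based client, “Console_Client” from the config-based one):

// Hypothetical dispatch on the client ID read above.
switch (clientString)
{
    case "OmegaClient":      // The client that added the header in code.
        // Apply processing specific to that client.
        break;
    case "Console_Client":   // The client that added the header via config.
        // Apply processing specific to that client.
        break;
    default:                 // "UNKNOWN" or an unexpected value.
        // Fall back to default processing, or log/reject the caller.
        break;
}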