Archive for the ‘Tribal Knowledge’ Category.

Entity Framework Stored Procedure Instructions

This is a how-to on getting Entity Framework (EF) versions 5 and 6 to include stored procs and how to consume the resulting entities in code.

  1. In the EF designer choose `Update Model From Database`.
  2. When the `Choose Your Database Objects and Settings` page comes up, which allows one to add new tables/views/stored procs, select the stored proc of interest. Remember, the resulting data mapping entity will be named after the proc with the extension `_Result`.
  3. Once the wizard is finished EF will contain the stored proc in the `Model Browser`. The Model Browser can be displayed by right-clicking the EF design surface and selecting `Model Browser`.
  4. Here is an explanation of what has happened.
    (1) You have added the stored proc into the `Stored Procedures / Functions` folder as an item of interest.
    (2) EF has created a function import of the stored proc and placed it into `Function Imports`.
    (3) If EF was able to determine the *result set entity*, it will most likely be in the `Complex Types` folder.
  5. If the mapping has gone right you should be able to call the stored proc off of the EF context in code, and it will return a list of the complex type `xxx_Result`, as sketched below. If it works you will know it, but there could be problems with the mapping.
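As a minimal sketch of consuming a successful import, assuming hypothetical names (`MyEntities` for the generated context, `GetOrders` for the imported proc, and a `GetOrders_Result` complex type with a `Description` column):

using System;
using System.Collections.Generic;
using System.Linq;

using ( var context = new MyEntities() )
{
    // The function import returns the generated complex type.
    List<GetOrders_Result> rows = context.GetOrders().ToList();

    foreach ( GetOrders_Result row in rows )
        Console.WriteLine( row.Description ); // properties mirror the result set columns
}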

Mapping Problems and How to Resolve

  • One can delete any object in folders 1, 2, or 3 shown above at any time and regenerate, or create a custom mapping. Don’t be afraid to delete.
  • Sometimes very complex stored procs will not divulge the right result-set mapping, and the resulting complex type will cause failures. One way around that is to create a faux data return in the stored proc which leaves no ambiguity for EF.
          1. In the database, change the stored proc as follows: comment out the meat of the result select and replace it with a one-to-one column stub faux select, such as this example: `SELECT 1 AS ResultId, 'Power1' AS GroupName, 'Test' AS Description`. To be clear, you will need to match the name and type of every column in the real result set.
          2. In EF’s Model Browser delete all things associated with the stored proc in folders 1, 2 and 3 above.
          3. Regenerate all by using `Update Model From Database`.
          4. Check the results.
  • If the above steps fail one can always create a function mapping by hand. Be careful not to create duplicates; if you do, delete all and start over.
        • Open up and find the stored proc you inserted into folder #1 above. Right click and select `Add Function Import…`
        • One can get the column information and change items on the above screen; it is trial and error.
        • You will need to play around with this until you have the right columns for your function import. Be wary of numbered copies of the complex types which may be created from the mapping.

Remember to reset the stored proc back to its original state, removing the faux stub mentioned above.


C# Tribal Knowledge: Use of the Conditional Attribute for Conditional Compilation for Object and Argument Validation during Development

(Update 10/9/2019: Changed Debug to DEBUG.)

This is another one of my topics which I deem to be Tribal Knowledge type information; such information seems to be known by only the privileged few. In this article I present, for your reading perusal, details on how to handle state errors found in classes and arguments during debug/development activities. The premise is that such checks for object or argument correctness are only needed for development and not for production-level code. By using the Conditional attribute in one’s C# code such checks can be utilized for object and state correctness without being a burden in a production program or web site.

Development State Errors Checking

For example, and yes it’s a basic example, say one is writing a database layer and the requirement is that the connection string be properly filled with a database name, a user name, and a password. You happen to know that this code will be reused by others on the development staff, and they will most likely fail to provide such values the first time they hook up the code. So that you don’t have to go to their desks to hand-hold and debug the error, wouldn’t it be nice if the object checked its own state of correctness?

Once everything is working the checks become superfluous, and they should be removed from release builds. This scenario also speaks to the fact that the user has two options, or two roads to failure, for loading the values: it can be done either during construction or afterwards via the exposed properties. Just ripe for someone to forget to do one or the other.

Code Speaks Conditionally

For the code, we will create a connection object which checks the validity of those values, to the best of its ability, before their use, and if a problem exists throws an application exception during development time only.

Here is our code; the lines to note are the ValidateState() call and the Conditional attribute, which do the state checking:

using System;
using System.Diagnostics;

public class ConnectionManager
{
    public string DatabaseName { get; set; }
    public string UserName     { get; set; }
    public string Password     { get; set; }

    public ConnectionManager() { }

    public ConnectionManager( string databaseName, string userName, string password )
    {
        DatabaseName = databaseName; UserName = userName; Password = password;
    }

    public string GenerateConnectionString()
    {
        // This is only called during debug builds and *not compiled*
        ValidateState(); // during Release builds.

        return string.Format( "Database={0};User={1};Password={2}", DatabaseName, UserName, Password );
    }

    [Conditional( "DEBUG" )] // Verify this is uppercase in the project settings.
    private void ValidateState()
    {
        if ( string.IsNullOrEmpty( DatabaseName ) )
            throw new ApplicationException( "Development Error: DatabaseName Empty" );

        if ( string.IsNullOrEmpty( UserName ) )
            throw new ApplicationException( "Development Error: UserName Empty" );

        if ( string.IsNullOrEmpty( Password ) )
            throw new ApplicationException( "Development Error: Password Empty" );
    }
}

So if this class is instantiated and the proper values are not set up, an application exception is thrown, during debug builds only, when a call to generate a connection string occurs. The magic is done by the Conditional attribute on ValidateState(), which tells the compiler to compile calls to that method only in debug builds.
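To see the effect, a minimal sketch of a caller:

var manager = new ConnectionManager(); // no values supplied

// DEBUG build: ValidateState() runs and throws
// ApplicationException( "Development Error: DatabaseName Empty" ).
// Release build: the compiler omits the ValidateState() call entirely,
// so no exception occurs and the string is built from the empty values.
string connectionString = manager.GenerateConnectionString();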

Summary

Now this example is a bit contrived, but the idea is that if one has unique business state requirements which may need to be met before an object’s operation can be used, this methodology can be used to catch violations during development without hampering runtime operations. Obviously it shouldn’t be used to catch specific runtime scenarios such as user validation; those should be handled directly, and possibly not by generating an exception.


Tribal Knowledge: C# XDocument Copy Xml But Remove the Pesky DocType

In another of my series of Tribal Knowledge articles, this one discusses the basics of loading an XDocument and creating a different document from that original.

There may be a need for one to remove the document type from the original XDocument in C#, or to do a basic copy, and both are presented here.

How-To

Here is the Xml in the classic before state:

<?xml version='1.0' encoding='utf-8'?>
<!-- Generator: AVG Magician 1.0.0, AVG Exports Plug-In . AVG Version: 2.00 Build 8675309)  -->
<!DOCTYPE svg PUBLIC '-//W3C//DTD AVG 1.1//EN' 'http://www.w3.org/Graphics/AVG/1.1/DTD/avg11.dtd'[]>
<avg version='1.1' id='Layer_1' x='0px' y='0px' xml:space='preserve'>
    <rect x='100.143' y='103.714' fill='none' width='87.857' height='12.143' />
</avg>

and this is what we want to achieve:

<?xml version='1.0' encoding='utf-8'?>
<avg version="1.1" id="Layer_1" x="0px" y="0px" xml:space="preserve">
    <rect x="100.143" y="103.714" fill="none" width="87.857" height="12.143" />
</avg>

Since we only care about the AVG node (it’s the main root), we will simply get that node and append it to our new clone. Here is the full code:

string xml = @"<?xml version='1.0' encoding='utf-8'?>
<!-- Generator: AVG Magician 1.0.0, AVG Export Plug-In . AVG Version: 2.00 Build 8675309)  -->
<!DOCTYPE svg PUBLIC '-//W3C//DTD AVG 1.1//EN' 'http://www.w3.org/Graphics/AVG/1.1/DTD/avg11.dtd'[]>
<avg version='1.1' id='Layer_1' x='0px' y='0px' xml:space='preserve'>
    <rect x='100.143' y='103.714' fill='none' width='87.857' height='12.143' />
</avg>";

XDocument loaded = XDocument.Parse( xml, LoadOptions.SetLineInfo );

XDocument clone = new XDocument( new XDeclaration( "1.0", "utf-8", "yes"),
    loaded.LastNode
    );

Console.WriteLine( clone );

The above achieves the after Xml which we seek: no DocType (we never added one) and no leading comment node, because we copied only the document’s last node, the root element. (`loaded.Root` would work equally well as `loaded.LastNode` here.) I hope this little example helps.
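If one would rather copy the whole document and then strip the unwanted nodes, here is a small alternative sketch (it assumes the DocType actually exists, so a null check would be prudent in general code; it also needs using System.Linq along with System.Xml.Linq):

XDocument clone2 = new XDocument( loaded );  // deep copy: DocType and comment come along

clone2.DocumentType.Remove();                // drop the DocType node
clone2.Nodes().OfType<XComment>().Remove();  // drop the comment node(s)

Console.WriteLine( clone2 );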


Tribal Knowledge: Linq Entities To Do Your Bidding via the Partial Keyword in C#

Having premade entities created during the Linq process in .Net really comes in handy. But did you know that one can extend those entities further, outside of the generated dbml file? It can be achieved using the partial class concept introduced in C# 2.0, whereby a class can have business logic methods and other properties which are not generated but are nonetheless relevant in an object-oriented fashion.

As mentioned, the entities are generated, and nothing is worse than having the generator throw away user code placed in the generated file after a regeneration. Allowing generated and user code to coexist hand in hand is accomplished by partial classes: the partial keyword is applied to a class, which allows the class to have a subset of its functionality in a different code file. The generated code resides in one file and the user code lives in another, allowing them to co-exist.
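To make the layout concrete, here is a simplified sketch (file names are hypothetical, and the generated half is heavily simplified; real generated code carries mapping attributes and change-tracking plumbing omitted here):

// Northwind.designer.cs -- generated by the designer; overwritten on regeneration
public partial class Request
{
    public int Val1 { get; set; }
    public int Val2 { get; set; }
    public int Val3 { get; set; }
}

// RequestExtensions.cs -- hand-written; survives regeneration
public partial class Request
{
    // Example business-logic method the generator knows nothing about.
    public bool HasValues()
    {
        return Val1 != 0 || Val2 != 0 || Val3 != 0;
    }
}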

I have used this process to put validation logic on an entity. That logic was used to verify that the differing properties, against which the user had entered values, matched business rules; rules which could not be entirely expressed in a database schema. It was very handy!

Example

A user posted a question on the MSDN Forums asking how one could achieve indexing into a Linq-generated class entity where each property represented a numerated indexed field. The below code shows the use of a partial class, in a separate file, which gives the user an index-able entity.

public partial class Request
{
    public int this[int index]
    {
        get
        {
            int retValue;

            switch ( index )
            {
                case 0: retValue = Val1; break;
                case 1: retValue = Val2; break;
                case 2: retValue = Val3; break;

                // Without a default case retValue would be unassigned
                // and the getter would not compile.
                default: throw new IndexOutOfRangeException( "Index must be 0 through 2" );
            }

            return retValue;
        }
        set
        {
            switch ( index )
            {
                case 0: Val1 = value; break;
                case 1: Val2 = value; break;
                case 2: Val3 = value; break;
                default: throw new IndexOutOfRangeException( "Index must be 0 through 2" );
            }
        }
    }
}
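Usage then looks like any other indexer; a quick sketch (Val1, Val2, and Val3 are the entity’s generated properties):

Request request = new Request();

request[0] = 10;  // sets Val1 via the indexer
request[2] = 42;  // sets Val3

Console.WriteLine( request[0] ); // 10
Console.WriteLine( request[2] ); // 42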

Tribal Knowledge: Asp.net ListView Template Which Contains a CheckBox. How to Get an ID from its OnChange Event in C#


This article discusses the use of the ListView’s DataKeyNames attribute, which can hold an individual ID for each row, and how, when using a CheckBox and processing an individual click, to get that ID during the OnChange event. That ID is not supplied to the OnChange event as readily as it is when working directly with the ListView’s own events. The ID, specific to the actual data, is needed so that a status update can be sent to the target database item which the row represents.

Setup

In setting up the ListView’s HTML, I provide a quick example from a previous article here (Quick Asp.Net ListView Template). The item to take note of is the property DataKeyNames. For this article I have a Nationalities (USA, France, Argentina…) table which has two boolean fields, Archived and Exempt, which are represented as checkboxes within each row of the ListView. The ListView will know that each row’s ID maps to the NatID column, which is the primary key id of the data item. Here is the html which specifies “NatID” as the id column in the data during the databind:

<asp:ListView ID="lvNationalities"
              DataKeyNames="NatID"
              runat="server"
              OnPagePropertiesChanged="UpdateNationalities"
              ItemPlaceholderID="PlaceHolder1">

In the code behind, my dynamic Linq entity looks like this; it contains a NatID, which is the item’s primary key id, and it has our properties for the checkbox states of archived and exempt.

var natgeos = from nat in _gtdc.Nationalities
              select new
              {
                  NatID    = nat.Nationality_ID,
                  Archived = nat.Archived,
                  Exempt   = nat.Exempt,
                  Name     = nat.Text
              };

lvNationalities.DataSource = natgeos;
lvNationalities.DataBind();

So NatID is pulled from the database item as shown above. Visually we will have two CheckBoxes, for Archived and Exempt as mentioned, for the user to change. Each change needs to be handled by the checkbox and the database immediately updated. Here is a screen shot:

[Screenshot: the Nationalities ListView with Archived and Exempt checkboxes on each row]

CheckBox

Below is the markup which wires up the CheckBox event. Looking at the archived CheckBox, we handle its OnCheckedChanged event:

<asp:CheckBox ID="cbArchived"
              runat="server"
              AutoPostBack="true"
              Checked='<%# Eval("Archived") %>'
              OnCheckedChanged="ArchiveChanged"
              Text="Archived"/>

Because we are not using the command methodology of the ListView, when we handle the event of the checkbox we don’t have direct access to which row’s data has been checked. To get it, we will dereference a few items which lead us back to the NatID of the row item which was clicked. Here is the code to do that:

protected void ArchiveChanged( object sender, EventArgs e )
{
    CheckBox target = sender as CheckBox;

    if ( target != null )
    {
        ListViewDataItem di = target.NamingContainer as ListViewDataItem;

        if ( di != null )
        {
            int id = (int)lvNationalities.DataKeys[di.DataItemIndex]["NatID"];

            _gtdc = new GTCDataContext( Utilities.GetConnectionString() );

            var item = _gtdc.Nationalities.Where( it => it.Nationality_ID == id ).FirstOrDefault();

            if ( item != null )
            {
                item.Archived = target.Checked;
                _gtdc.SubmitChanges();
            }
        }
    }
}

The first step is to convert the sender object to a CheckBox. Once that is successful we use the NamingContainer property off of the CheckBox. That property gives us access to the parent item, the ListViewDataItem, which is our link back to the ListView. With that we can use the ListViewDataItem’s index into the DataKeys cache to divine the ID for that row. Once we have it, we can update the database, as shown, by setting the Archived attribute of the Nationalities item the user selected.


Tribal Knowledge: Large Object Heap and Double Arrays in .Net

(Update 4/3/2011: .Net 4 Relevancy)

In the .Net world, when objects are 85 thousand bytes or larger they are generally placed in the Large Object Heap (LOH). But there is a caveat: certain objects are placed in the LOH well before 85K. The one situation I detail here in this blog is the double array, which goes in way sooner.

Don’t believe me? Try this code snippet:

byte[] LessThan85K = new byte[84987];
byte[] At85K = new byte[85000];

Console.WriteLine("LessThan85K:    " + GC.GetGeneration(LessThan85K)); // 0
Console.WriteLine("At85K:          " + GC.GetGeneration(At85K));       // 2

double[] array999Double = new double[999];

Console.WriteLine("array999Double: " + GC.GetGeneration(array999Double)); // Returns 0

double[] array1Kdouble = new double[1000];

Console.WriteLine("array1Kdouble:  " + GC.GetGeneration(array1Kdouble)); // Returns 2

By looking at the garbage collection generation we can identify whether objects reside in the LOH or not. If a freshly allocated object reports generation 2, then bingo, we are in the LOH!

What we see is that a double array resides in the LOH way before the 85-thousand-byte yardstick. I had reported this as a bug on Microsoft Connect (“Large Object Heap (LOH) does not behave as expected for Double array placement”) and was told: this is by design.

It appears that, for performance reasons, it is better to place double arrays in the LOH earlier because the LOH is aligned in 8-byte increments. This allows better access to those large arrays in normal code processing; the tradeoff point was set at 1 thousand doubles and implemented in .Net that way.

That is my piece of Tribal Knowledge that I pass on to you…use it wisely.

Update for .Net 4 and 64 Bit and Future Versions of .Net

Recently I got a chance to explore why this had not changed for .Net 4 and the realm of 64-bit computing when I was at the Microsoft Campus talking to some of the people on the CLR team. One of the Senior Project Managers who works with the CLR team at Microsoft, Andrew Pardoe, was kind enough to provide this clarification that things are still the same for .Net 4 and 64-bit applications for arrays of 1000 doubles, but that may change going forward:

The CLR’s execution engine aggressively attempts to place these arrays on the LOH because of the performance benefit of accessing aligned doubles. However, there’s no benefit to applying this heuristic on 64-bit architectures because doubles are already aligned on an 8-byte boundary.

Thank you for bringing this to my attention. We’ve made changes to this heuristic that should appear in a future release of the .NET Framework.


When not to use the Double Type in .Net: Financial Calculations!

Most developers, myself included, have fallen into the trap of using the double or float type when doing financial calculations and have discovered that values get out of sync with the balance sheet pretty quickly. The short answer is to always use the precise decimal type for such calculations and avoid double and float. For example, take this snippet of code:


double n1 = 1134.266;
double n2 = 1134.0;

double result1 = n1 - n2;

Console.WriteLine(result1.ToString()); // Result: 0.266000000000076

Where did the extra 76 come from? Using decimal will fix that issue. The failure of double comes into play when thinking of fractions such as 1/3 + 1/3 + 1/3, which we all learned in grammar school is 1. When that is represented as .3333 + .3333 + .3333… well, that equals .9999, which is not 1. Do you want to be on the losing end of that financial calculation every time you withdraw money from your bank?
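For comparison, here is a minimal sketch of the same subtraction done with decimal (the m suffix marks a decimal literal):

decimal d1 = 1134.266m;
decimal d2 = 1134.0m;

decimal result2 = d1 - d2;

Console.WriteLine(result2.ToString()); // Result: 0.266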

So going forward, when should one use each of the non-integer types?

  • Decimal for financial calculations when exactness is needed.
  • Float for scientific calculations when range is more important than precision, up to seven significant digits.
  • Double for when float’s seven significant digits are insufficient and up to sixteen significant digits are needed.