Archive for category Computer Code

TensorFlow Error: Cannot Find cudnn64_6.dll

I’m trying to dabble in Deep Learning – because that’s a simple topic you can just dip your toes into, right?  Ok, maybe I’m overly optimistic.  But I did run into an error this evening that threw me for a loop.  I’m sure I’ll see this error in a few months on my other machine and be baffled.  On that day I’ll google and hopefully find this post.

The Error:

Traceback (most recent call last):
 File "C:\Program Files\Python 3.5\lib\site-packages\tensorflow\python\platform\self_check.py", line 87, in preload_check
 ctypes.WinDLL(build_info.cudnn_dll_name)
 File "C:\Program Files\Python 3.5\lib\ctypes\__init__.py", line 347, in __init__
 self._handle = _dlopen(self._name, mode)
OSError: [WinError 126] The specified module could not be found

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
 File ".\helloworld\world.py", line 4, in <module>
 import tensorflow as tf
 File "C:\Program Files\Python 3.5\lib\site-packages\tensorflow\__init__.py", line 24, in <module>
 from tensorflow.python import *
 File "C:\Program Files\Python 3.5\lib\site-packages\tensorflow\python\__init__.py", line 49, in <module>
 from tensorflow.python import pywrap_tensorflow
 File "C:\Program Files\Python 3.5\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 30, in <module>
 self_check.preload_check()
 File "C:\Program Files\Python 3.5\lib\site-packages\tensorflow\python\platform\self_check.py", line 97, in preload_check
 % (build_info.cudnn_dll_name, build_info.cudnn_version_number))
ImportError: Could not find 'cudnn64_6.dll'. TensorFlow requires that this DLL be installed in a directory that is named in your %PATH% environment variable. Note that installing cuDNN is a separate step from installing CUDA, and this DLL is often found in a different directory from the CUDA DLLs. You may install the necessary DLL by downloading cuDNN 6 from this URL: https://developer.nvidia.com/cudnn

In my case, I had run a simple hello world program using TensorFlow but found it ran slow.  I believe this is because I’m using the GPU version and it’s a hello world – if it’s not doing any real work, then it’s likely faster to use the CPU.  However, I figured an update might speed it up.  So after the update I ran the program again and got this error…  Oh shoot…

Well, I should have read the message more carefully – this is clearly a #ScottFail.  I searched for cudnn64_6.dll but couldn’t find it.  I had cudnn64_5.dll but not the _6 version.  Googling was surprisingly unhelpful for this.

In my case, here’s what this is saying: you have cuDNN 5.1 but you need cuDNN 6.0 – go download that and try again.  After doing that I got this:

 python .\helloworld\world.py
b'Hello, Tensorflow'

It works!  So don’t be like me and be confused – or be like me and see this blog post.

Leave a comment

The specified cast from a materialized ‘System.Int64’ type to the ‘System.Int32’ type is not valid

I ran into a SQL-related .NET error today.  If you read the title of the post you’ve probably guessed what it is.  If not, here’s the error:

The specified cast from a materialized ‘System.Int64’ type to the ‘System.Int32’ type is not valid.

Google was marginally helpful, but if you’re like me, when you google an error you’re hoping for that post that says: if you see ABC, then you have done XYZ; do 123 to fix the error.  In this case the error is saying there’s a datatype problem – something about a Long and an Int.  My app only has ints; no longs in the schema at all.  This specific error came out of an Entity Framework method, so I couldn’t easily pinpoint it to a given column.  98% of my app is pure Entity Framework, mostly code-first (though I do write out transactional schema patches to update the database in a scripted manner).  There is one stored procedure in the app, and it was this stored proc that I had just changed to add some new features – specifically, paging.

In this case, the sproc looked like this:

Reports_MyReport SomeGuidID

It returned the report’s data: Field1, Field2, Field3, etc.

My change was to add paging directly to the sproc and reduce the amount of data leaving the box, as this report was going to get hit a lot.

Reports_MyReport SomeGuid, PageNumber, PageSize

It returns data like RowNumber, Field1, Field2, Field3, TotalRows.

I tested out the changes and they worked great – no nulls where they weren’t expected.

Upon running the new sproc through my app, I got the error listed above.  It turned out that my sproc, which had code like this:

select RowNumber, Field1, Field2, Field3, @totalRows as TotalRows  ….

was the culprit.  @totalRows was being interpreted as an int64, as it was coming from the @@ROWCOUNT function.  I know I’ll never have more than int32 rows in that table, so for me casting to Int32 solved the problem:

select RowNumber, Field1, Field2, Field3, cast(@totalRows as int) as TotalRows  ….

Problem solved, error gone!
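One caveat worth making explicit: casting @totalRows to int only works because the row count fits in 32 bits; if it ever didn’t, the cast would overflow at runtime instead.  The same guard, sketched in Python (a hypothetical helper for illustration, not code from the app):

```python
INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

def to_int32_checked(n):
    """Fail loudly rather than let a 64-bit count silently misbehave in 32 bits."""
    if not INT32_MIN <= n <= INT32_MAX:
        raise OverflowError(f"{n} does not fit in a signed 32-bit int")
    return n

print(to_int32_checked(8_388_608))  # a realistic row count: fine
```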

Hopefully by the time I’ve completely forgotten about this and make the exact same mistake again – in six months – this post will be living in the googles.  Hopefully it helps someone else as well.


Scott



1 Comment

Cleaning Up with MSpec

I use MSpec for testing my code.  I love the behavior-driven approach; for me it just makes sense.  I love how, if my boss asks where we are with component XYZ, I can just run all my tests and give him the output.  It shows what’s working and what’s not.  Furthermore, we can make a rule that the software doesn’t have a feature until there’s an MSpec test saying that it does.

I was recently working with MSpec doing integration tests – these I usually do to make sure my DAL and my database are structurally compatible – and I kept getting database constraint errors when I reran tests.  It didn’t make a lot of sense, as I had a cleanup section in my code and wasn’t seeing any errors.

It turns out that if an exception is thrown in the cleanup section, you’ll never hear about it.  At least for me, it doesn’t bubble up.  Once I put a breakpoint on the first line of the cleanup I figured it out.  Previously I was thinking it wasn’t even hitting my cleanup code.  It was hitting the cleanup section; there was just an error in that section.  Hopefully this gets into the googles and helps someone.

using Machine.Specifications;
using System;

namespace My.Project.Integration.Tests
{
    public class when_creating_a_mything_record
    {
        protected static IMyThingService MyThingService { get; set; }
        protected static MyThing MyThing { get; set; }
        protected static MyThing SavedMyThing { get; set; }
        protected static Exception Exception { get; set; }

        Establish context = () =>
        {
            MyThing = new MyThing()
            {
                Name = "thing",
                Description = "thing one"
            };
            MyThingService = ServiceLocator.Instance.Locate<IMyThingService>();
        };

        Because of = () => Exception = Catch.Exception(() =>
        {
            SavedMyThing = MyThingService.Insert(MyThing);
        });

        It should_not_have_thrown_an_exception = () => Exception.ShouldBeNull();
        It should_have_an_id_that_does_not_match_guid_empty = () => SavedMyThing.ID.ShouldNotEqual(Guid.Empty);

        Cleanup after = () =>
        {
            // If this does not appear to get called, put a breakpoint here.  You may have an exception.
            MyThingService.Delete(SavedMyThing);
        };
    }
}
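MSpec aside, the broader lesson applies to teardown hooks in any framework: if the runner swallows exceptions, catch and report them yourself.  A small sketch of the idea in Python, with hypothetical step names:

```python
import traceback

def run_cleanup(*steps):
    """Run every cleanup step, collecting and reporting failures
    instead of letting them vanish the way a swallowed teardown
    exception does.  Later steps still run if an earlier one fails."""
    failures = []
    for step in steps:
        try:
            step()
        except Exception:
            failures.append(traceback.format_exc())
    for f in failures:
        print("cleanup step failed:\n" + f)
    return failures
```

Even if the test runner never surfaces these exceptions, the printed tracebacks land in the test output where you can actually see them.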

Leave a comment

Git Helpful Hint – Just Trust the Remote Branch

At work I’m often working with others.  At times, though, we’ll each be working in our own projects (read: git repositories) and then working together in common shared projects.  The repository that I’m the main developer on and all the shared ones are always up to date, but the repositories that I don’t often commit to can find themselves lagging quite far behind.  Recently I saw a large number of merge failures when trying to get the latest version of one of these repositories.

Essentially what I wanted to do was to say, “Hey Git, I don’t work on this repository often, so trust everything that is coming from upstream and overwrite my stuff.”  There were many things that the googles suggested I could do, but quite a few wouldn’t work in the midst of a merge failure that had already occurred.

This Stack Overflow post did work, though.

Here’s how you do it:

git fetch --all

git reset --hard origin/master

Be warned: this essentially tosses out all your local changes, so make sure your situation is like mine (or similar) before doing it.  I figured if I blogged about this, it would help me remember the next time it comes up.

Leave a comment

Last Day at Telligent

Today was my last day at Telligent.

My first day was February 4th, 2005.  By my math – or by datediff() in SQL – that’s 2,646 days.  I started there as a contract employee wondering how long it would last.  I remember leaving a job that I had been in for about three years when I accepted the position at Telligent.  I remember how scared I was of telecommuting.  No office, no managers to watch over me.  What’s to stop total anarchy and no work?  Well, absolutely amazing people, incredible clients and the most challenging work of my career.  That’s what.  When friends asked how I could work from home for a company so far away and not just play games all day, they didn’t understand that the work was the fun – and it still is, up to this very day.  I worked on amazing projects right up until my final hours at Telligent.  I’ll always be grateful to Rob Howard and Jason Alexander for giving me the opportunity in those early days.  What started as a small group of .NET geeks who sometimes did meetings over Xbox Live & Madden football morphed into something amazing: an incredible company that I still absolutely love and know will continue to do amazing things.  I’m waiting for the day that I hear Telligent’s name in accolades on CNBC; it will come, and I will cheer on that day.  To everyone at Telligent: thank you so much for the amazing seven years.  I think I got to do almost everything a software developer could do at Telligent.  It wasn’t always easy, and was sometimes very hard, but I’d not trade any of it for the world.

I look back at the time with incredible fondness and take away many memories.  I look forward to the very near future when I start my new job at Orcsweb.  I’m leaving one amazing place to go to another amazing place.  I could not be happier and couldn’t think of a better place to move on to.  Expect to hear quite a bit more from me about some of the very cool things I’ll be doing at Orcsweb.  I know I’ve been somewhat silent in the past as I worked on other non-technical things, writing, etc.  But there definitely will be more posts about amazing technologies and what I’m discovering along the way.

Thank You Everyone.

Scott

5 Comments

Massive Data Access Library

Today I got introduced to Massive.  Massive is a data access library written by @robconery.  My co-workers @jaymed and @mgroves introduced me to it, and I’ll admit I was somewhat reluctant to give it a shot.  I’m normally of the opinion that the simplest DAL is just pure, simple SqlCommands, SqlConnections and a stored procedure call.  However, after using Massive for a few hours, I’m really excited about this new tool in my tool chest.  Of course, Massive has been out for a while – I’m quite slow to the party sometimes.

Massive is great, though.  I love how in a single line of code I can have data back.  Alternatively I can update the table, or run any other query that I want.  This library, if it did nothing else, makes integration testing a breeze.  Usually for my tests I prefer to stub out a DAL repository via the interface/repository pattern.  The tests get a FakeSomethingRepository to use that keeps a List<TheObject> as its in-memory store.  What I love about Massive is that my unit tests, which stay really simple because they test against DataRepository.SomeList, can quickly become integration tests with almost no additional complexity.  For me that’s huge.  I’m trying a new thing, which is not to check in any code that doesn’t have full unit test coverage, and Massive is making that happen in a Massive way.  (couldn’t resist)
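The interface/repository idea described above, sketched in Python with hypothetical names: the code under test only ever sees the repository surface, so a list-backed fake and a real DAL are interchangeable.

```python
class FakeThingRepository:
    """In-memory stand-in for a real data-access repository,
    backed by a plain list instead of a database."""
    def __init__(self):
        self._items = []

    def insert(self, item):
        self._items.append(item)
        return item

    def all(self):
        return list(self._items)


def count_things(repository):
    # Code under test depends only on the repository interface, so
    # swapping the fake for a real DAL turns this unit test into an
    # integration test with no extra complexity.
    return len(repository.all())


repo = FakeThingRepository()
repo.insert({"name": "thing one"})
print(count_things(repo))  # prints 1
```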

I’m still learning about these Dynamic things, but a bit of learning never hurt anyone and I know @mgroves will help me out when I get confused.

@robconery If we ever meet, I owe you a beer! Massive is awesome!

If you’re not familiar with Massive, I urge you to check it out here.

1 Comment

Exciting Technologies: CUDA

One of the most exciting new technologies that I’ve worked with in the past year is definitely CUDA.  CUDA is a technology that allows general-purpose code – typically in C or C++ – to be compiled for and run on GPU devices.  What that means is the graphics card that’s in your high-end laptop or desktop can now run general-purpose code as well as the graphics code that it’s already running.

The potential for CUDA is amazing: it allows for massively parallel processing on the potentially hundreds of cores that are available on modern GPUs.  The laptop I’m writing this post on, for instance, has an NVIDIA 360M card with 96 processing cores.  That’s compared to the 4 cores in the i7 chip that’s on the motherboard.  The cores are not truly general purpose – they can’t do everything that a modern CPU core can, and they prefer jobs that can be split across the cores.  Math and physics simulations work extremely well.  While traditional CUDA is based in C, there’s also a library called Thrust that allows C++ programmers to get in the mix too.  Thrust provides very easy ways of transferring data from main memory to device memory, as well as some awesome classes for things like map-reduce.

There are a couple of uses that I would personally like to explore with CUDA.  One is read-only database querying.  I would love, as a sample or research project, to create a dialect of SQL – or a subset – that allowed me to process simple traditional database queries on a CUDA-capable device.  While there are a number of companies doing this sort of work, and this is probably something someone could buy instead of build, I think it would be a great chance to learn by doing.  Imagine I had a table that was approximately 1GB in size with each row being about 128 bytes; that would be somewhere around 8M records in the table.  This works best if the individual records are numerically based – in other words, large volumes of text aren’t a perfect fit.  In this case, each of the 96 GPU cores would have to process only ~90K records, whereas the 4 cores of the CPU would have to process ~2M records each.  While the table can be indexed in a traditional database system if the query patterns are known in advance, it certainly is exciting thinking about how a large volume of work can be spread across a number of cores using CUDA.

Why use CUDA for something like this?  Why use it for filtering large sets of data?  Well, let’s say that it’s a lot more than 1GB, and let’s also say that the native format of the data is some form of binary structure.  Loading 8M rows of data into a database takes a non-trivial amount of time, and if the dataset is constantly being updated, that’s a tax you’ll have to pay for every update – whereas a program written to leverage CUDA could likely query it directly, without that tax.  Also, this machine is just a laptop; you could relatively inexpensively put together a machine with literally thousands of cores.  Imagine now that you had 2,000 cores with the same 8M rows – that’s only ~4K rows per core to filter.  Now that could be much faster.
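The back-of-the-envelope numbers above can be checked with a few lines of Python (pure arithmetic – the actual partitioning on a GPU would be handled by the CUDA runtime):

```python
def rows_per_worker(total_rows, workers):
    # Ceiling division: the most rows any single worker must filter.
    return -(-total_rows // workers)

total = (1 << 30) // 128              # 1 GB of 128-byte rows: 8,388,608 records
print(rows_per_worker(total, 96))     # 96 GPU cores:  ~87K rows each
print(rows_per_worker(total, 4))      # 4 CPU cores:   ~2.1M rows each
print(rows_per_worker(total, 2000))   # 2,000 cores:   ~4.2K rows each
```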

This is a technology that I’m interested in and learning more about.  I’m sure that I’ve covered no new ground with this post and experts will probably be bored.  But perhaps there’s someone out there that wasn’t aware of CUDA or is just getting into it?  How are you finding it?  What have you found works, what doesn’t?


Leave a comment

This is Awesome

Is it just me or does this really happen?

http://www.dilbert.com/strips/comic/2011-02-27/

Love Dilbert.

Leave a comment

Woot First Post!

This is the first post of my new blog! Woo Hoo! I definitely hope that there will be more blogging than I’ve done in the past.  I mean, what would be the point of getting a shiny new domain name if there would be no blogging?  Well, questions for philosophy majors aside, we are here now, and it’s time to blog.

Much of this will eventually find its way to the about me page.  Maybe.  Actually, no, it likely will not, so it’s probably best to pay attention here.  Where else can you get this particular brand of nonsense?

What’s up with this blog?  Well, it’s a place I’ll talk about code – whether that’s code via languages like C/C++, Python, Lua, Ruby, C#, JavaScript, or technologies such as DirectX, ASP.NET, Rails.  I love being a computer programmer; the creations that can come out of a simple text file, which then converts to a series of 1s and 0s, amaze me.

There’s More than Code however

What I love about the code extends beyond the code itself.  I love the creativity.  My creativity flows into other interests like Cooking, Music and Video Games.  Games are code but they’re art too.  Games are just as important to our society as a great painting or story is.

That brings me to the most recent of my passions: writing.  I love writing, and why I’ve not been a good blogger before now is perhaps something of a mystery – although this is quite a bit more public, open and without-a-net than most of my writing has been.  That’s probably a good thing, though.  At least that’s what I’m telling myself.  Writing for me is more of an activity than it is an art form.  I believe strongly in what Stephen King said in his book On Writing: that good fiction writing must be done from the seat of your pants.  Many people will disagree – I even know some who write more planned-out stories, and it’s awesome that that works for them – but this is what works for me.  That’s part of my secret, I think: what works for me.  I don’t write to sell the next best seller, or to amaze my friends and family.  I write to find out what will happen next.

Put a character in place, give them a challenge, set the atmosphere, and you’ve created a whole world.  How can you not see what happens by continuing to write?

MythicalCode – my new blog – will likely stay with me for a very long time.  The format or incarnation of this blog, however, will change dramatically.  I decided to get started with WordPress.com – many thanks for the very reasonable rates!  But the blog itself and its content are likely to move around a bit, and eventually to a server of my own, perhaps!

As if they’d let me have a server on the intertubes all to myself.

Leave a comment