Archive for category Things to Remember
The World’s Most Over Engineered BBQ Thermometer
Posted by sgtcodeboy in Computer Code, Things to Remember, Uncategorized on February 6, 2020
So let me tell you a story. Our story starts on Christmas morning, 2018. I was hosting; my family had spent the night at my house on Christmas Eve. Being foodies, we were beyond excited about the succulent 8 lb prime-grade prime rib we would be cooking that day, outside on the large Big Green Egg on my deck.
We had an amazing Christmas with presents galore. The grill was at the perfect temperature, the $160 cut of prime-grade beef was prepped, and it was time to cook. It all went without a hitch as I loaded up the grill and placed the dual temperature probes: one in the meat, one on the grill itself. We waited, and waited. After about five hours it seemed like it should be done, but the thermometer I was using at the time insisted that it was not. So we waited, and waited longer. Finally, curious why it was taking so long, I grabbed an instant-read thermometer and checked it myself. The plan was that at 120°F we would remove it from the grill and let it gradually come up to a perfect rare doneness.
When I plunged the thermometer into the roast I was horrified to see it was already at 142°F. It was overcooked by 22 degrees. My thermometer probe had failed and could not be trusted anymore. Something had to be done before next Christmas.
But what to do? As I stood in shock from the tragedy that had befallen us, a plan suddenly became clear.
There is only one thing to do. Only one possible course of action: build a Multi-Tenant, Kubernetes-Based BBQ Thermometer in Golang.
Why had I not seen the need for this before? By capturing more than one probe's worth of data for each source (two for the grill, two for the meat), a single probe could not fail without leaving detectable evidence of the failure. And by tracking historical data along with ambient weather conditions, even if both probes did fail you could compare the current cook against past cooks under similar weather and tell when a cook was running abnormally long.
That was just the start, though. Eventually I would be able to build in AI to detect an event hours before it happened by comparing the temperature curve with historical data. Gradient descent might be a simple way to do it.
It needed to run in Kubernetes because I would need to be able to scale out to five thousand nodes. We take our BBQ very seriously.
In the coming months I would build the thermometer. It's not done yet; the code is all here to be reviewed, but there is more work to be done. I have gained a lot of insights by looking at the temperature curve of a cook, specifically for the rather long cooks like Boston butt or brisket. By graphing the temperature curve I'm able to spot the stall, and to see that it sometimes happens sooner than the 160°F it's supposed to. Accurately detecting this can cut hours off of a cook.
I'm actively working on this all the time. The project is hosted on GitHub at https://github.com/ssargent/bbq and uses Kubernetes, Golang, ReactJS, Redis, Postgres, TypeScript and JavaScript. If you'd like to know more, come check out the page on GitHub.
The specified cast from a materialized ‘System.Int64’ type to the ‘System.Int32’ type is not valid
Posted by sgtcodeboy in ASP.NET CSharp, Computer Code, Things to Remember on October 30, 2013
I ran into a SQL-related .NET error today. If you read the title of the post, you've probably guessed what it is. If not, here's the error:
The specified cast from a materialized ‘System.Int64’ type to the ‘System.Int32’ type is not valid.
Google was marginally helpful, but if you're like me, when you google an error you're hoping for that post that says: if you see ABC, then you have done XYZ; do 123 to fix the error. In this case the error is saying there's a datatype problem, something about a Long and an Int. My app only has ints; there are no longs in the schema at all. This specific error came out of an Entity Framework method, so I couldn't easily pinpoint it to a given column. 98% of my app is pure Entity Framework, mostly code-first (though I do write out transactional schema patches to update the database in a scripted manner). There is one stored procedure in the app, and it was this stored proc that I had just changed to add some new features, specifically paging done in the sproc itself.
Originally, the sproc was called like this:
Reports_MyReport SomeGuidID
It returned the data for the report: Field1, Field2, Field3, etc.
My change was to add paging directly to the sproc to reduce the amount of data leaving the box, as this report was going to get hit a lot.
Reports_MyReport SomeGuid, PageNumber, PageSize
It now returns data like RowNumber, Field1, Field2, Field3, TotalRows.
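For context, the app calls this sproc through Entity Framework and materializes the rows onto a plain result class. The names and the exact call below are made up for illustration, but it was something along these lines:

using System;
using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;

// Illustrative result type for the report rows (real names changed).
public class MyReportRow
{
    public long RowNumber { get; set; }   // ROW_NUMBER() comes back from SQL Server as bigint
    public string Field1 { get; set; }
    public string Field2 { get; set; }
    public string Field3 { get; set; }
    public int TotalRows { get; set; }    // EF expects this column to materialize as an Int32
}

public class ReportQueries
{
    private readonly DbContext _context;

    public ReportQueries(DbContext context)
    {
        _context = context;
    }

    public List<MyReportRow> GetMyReport(Guid id, int pageNumber, int pageSize)
    {
        // EF matches each returned column to the property with the same name and type.
        return _context.Database.SqlQuery<MyReportRow>(
            "exec Reports_MyReport @p0, @p1, @p2", id, pageNumber, pageSize).ToList();
    }
}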
I tested out the changes and they worked great: no nulls where they weren't expected.
Upon running the new sproc through my app, I got the error listed above. It turned out that my sproc, which had code like this:
select RowNumber, Field1, Field2, Field3, @totalRows as TotalRows ….
was the culprit. @totalRows was being interpreted as an int64, as it was coming from an @@ROWCOUNT function. I know I'll never have more than Int32.MaxValue rows in that table, so for me, casting it to int solved the problem:
select RowNumber, Field1, Field2, Field3, cast(@totalRows as int) as TotalRows ….
Problem solved, error gone!
Hopefully by the time I have completely forgotten about this and make the exact same mistake again (in six months), this post will be living in the googles. Hopefully it helps someone else as well.
Scott
Cleaning Up with MSpec
Posted by sgtcodeboy in ASP.NET CSharp, Computer Code, Things to Remember on May 15, 2013
I use MSpec for testing my code. I love the behavior-driven approach; for me it just makes sense. I love how, if my boss asks where we are with component XYZ, I can just run all my tests and give him the output. It shows what's working and what's not. Furthermore, we can make it a rule that the software doesn't have a feature until there's an MSpec test saying that it does.
I was recently working with MSpec on some integration tests (I usually write these to make sure my DAL and my database are structurally compatible) and I kept getting database constraint errors when I reran the tests. It didn't make a lot of sense, as I had a cleanup section in my code and I wasn't seeing any errors.
It turns out that if an exception is thrown in the cleanup section, you'll never hear about it; at least for me, it doesn't bubble up. Once I put a breakpoint on the first line of the cleanup, I figured it out. I had assumed it wasn't even hitting my cleanup code, but it was hitting the cleanup section; there was simply an error in that section. Hopefully this gets into the googles and helps someone.
using Machine.Specifications;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace My.Project.Integration.Tests
{
    public class when_creating_a_mything_record
    {
        protected static IMyThingService MyThingService { get; set; }
        protected static MyThing MyThing { get; set; }
        protected static MyThing SavedMyThing { get; set; }
        protected static Exception Exception { get; set; }

        Establish context = () =>
        {
            MyThing = new MyThing() { Name = "thing", Description = "thing one" };
            MyThingService = ServiceLocator.Instance.Locate<IMyThingService>();
        };

        Because of = () => Exception = Catch.Exception(() =>
        {
            SavedMyThing = MyThingService.Insert(MyThing);
        });

        It should_not_have_thrown_an_exception = () => Exception.ShouldBeNull();

        It should_have_an_id_that_does_not_match_guid_empty = () => SavedMyThing.ID.ShouldNotEqual(Guid.Empty);

        Cleanup after = () =>
        {
            // If this does not appear to get called, put a breakpoint here. You may have an exception.
            MyThingService.Delete(SavedMyThing);
        };
    }
}
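Since the runner swallows whatever the cleanup throws (at least it did for me), one cheap way to make those failures visible is to wrap the cleanup body in a try/catch and write the exception to the console. Here's a rough sketch of that same Cleanup delegate, adjusted to do just that:

Cleanup after = () =>
{
    try
    {
        MyThingService.Delete(SavedMyThing);
    }
    catch (Exception ex)
    {
        // MSpec won't surface this on its own, so at least leave a trace in the test output.
        Console.WriteLine("Cleanup failed: " + ex);
    }
};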
Git Helpful Hint – Just Trust Remote branch
Posted by sgtcodeboy in Computer Code, Git Tips, Things to Remember on April 10, 2013
At work I'm often working with others. At times, though, we'll each be working in our own projects (read: git repositories) and then working together in common shared projects. The repository that I'm the main developer on and all the shared ones are always up to date, but the repositories I don't often commit to can end up lagging quite far behind. Recently I saw a large number of merge failures when trying to get the latest version of one of these repositories.
Essentially what I wanted to do was say: hey Git, I don't work on this repository often, so trust everything that is coming from upstream and overwrite my stuff. The googles suggested many things I could try, but quite a few wouldn't work in the midst of a merge failure that had already occurred.
This Stack Overflow post did work, though.
Here’s how you do it:
git fetch --all
git reset --hard origin/master
Be warned: this essentially tosses out all your local changes, so make sure your situation is like mine (or similar) before doing it. I figured if I blogged about this, it would help me remember the next time it came up.