It certainly makes sense ... but there is at least one notable and readily identifiable problem with your methodology.
You're implicitly assuming that the probability of a home team winning by 1 run, conditioned on it winning the game at all, is invariant with respect to its raw game win probability.
Generally speaking, this is untrue. In reality, the greater a team's raw game win probability, the lower its conditional 1-run win probability: stronger favorites win a larger share of their games by multiple runs.
This doesn't render your analysis useless, but I would be hesitant to place much stock in its conclusions for games with odds deviating substantially from average.
On another note, the fastest way to determine the edge on a bet is to multiply the decimal odds by the expected win probability and then subtract 1.
So for a 43.36% win probability at decimal odds of 2.22, the edge would be:

43.36% * 2.22 - 1 = -3.74%

And for a 64.95% win probability at decimal odds of 1.6061, the edge would be:

64.95% * 1.6061 - 1 = +4.32%
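If you're doing this for many games, the calculation is trivial to script. A minimal sketch (the function name `edge` is just my label, not anything standard):

```python
def edge(win_prob, decimal_odds):
    """Expected profit per unit staked: p * odds - 1."""
    return win_prob * decimal_odds - 1

# The two examples above:
print(f"{edge(0.4336, 2.22):+.2%}")    # -3.74%
print(f"{edge(0.6495, 1.6061):+.2%}")  # +4.32%
```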
Now because only one of the two edges is positive, it's obviously easy to determine which would be the superior bet. If, however, both edges were positive with the longer odds bet at higher edge, you'd probably want to evaluate the two bets in terms of utility gained rather than simply in terms of EV.
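One common way to make that utility comparison concrete is the Kelly criterion, which maximizes expected log bankroll growth; this is my suggestion, not something implied by the numbers above, and the two positive-edge bets below are purely hypothetical:

```python
import math

def kelly_fraction(p, decimal_odds):
    """Optimal bankroll fraction under log utility: f* = (p*b - q) / b,
    with net odds b = decimal_odds - 1 and q = 1 - p."""
    b = decimal_odds - 1
    return max(0.0, (p * b - (1 - p)) / b)

def expected_log_growth(p, decimal_odds, f):
    """Expected log growth per bet when staking fraction f of bankroll."""
    b = decimal_odds - 1
    if f <= 0:
        return 0.0
    return p * math.log(1 + f * b) + (1 - p) * math.log(1 - f)

# Hypothetical pair: the longshot carries the higher edge,
# but that alone doesn't settle which bet to prefer.
for p, odds in [(0.40, 2.70), (0.65, 1.62)]:
    f = kelly_fraction(p, odds)
    g = expected_log_growth(p, odds, f)
    print(f"p={p:.2f} odds={odds:.2f} edge={p*odds-1:+.2%} "
          f"kelly={f:.3f} log-growth={g:.5f}")
```

With these made-up numbers the longshot has an 8.0% edge versus the favorite's 5.3%, yet the favorite produces the higher expected log growth at its Kelly stake, which is exactly the kind of reversal that makes the utility view worth the extra step.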