Re: OT: git or hg? Why?

Git all the way. You can use GitHub, GitLab, or host your own GitLab.

On Sat, Mar 18, 2017 at 3:16 PM wrote:

> Well, that is actually a ridiculous question. Mercurial is a dead end; it
> has been superseded by git. hg would be an imposed legacy code-base
> requirement, like the awful p4, not a choice.
>
> In other words: use the git, luke.
>
> Oh, and git is pretty fabulous, although it does take a while to git the
> hang of it. Plus there is a wealth of resources out there for hosting your
> repos in the cloud, for hosting them internally, and for integrating your
> SCCS with code review, bug tracking, agile development, automated building
> and testing, and automated deployment if the spirit moves you.
>

I’ve used P4, and I must say, I liked it very much. But distributed is the
new way. One of my clients uses SVN, and I despise using it. Even with the
shell add-ons, it still sucks.

Git has a learning curve, no doubt. It requires you to think about things a
little differently. But it is the global standard now, and knowing and using
it is probably a good idea.

– Jamey

On Sun, Mar 19, 2017 at 1:52 PM wrote:

> >I just read a quote that called hg “the Betamax of DVCSs” which I liked…
> >
> >P
>
> From what I remember… Mercurial was started as a backup plan in case
> things went awry with git. People were once scared of the GPL. Managers
> believed that the GPL could jump out of git, infect all their dear
> proprietary software, and crazy things would break out.
>
> The world has changed a lot since then; crazy things indeed broke out -
> but the GPL was not one of them, or was the least of them. Many years
> later, we see Microsoft feasting on GitHub, git built into VS, and bash
> baked into Windows 10 /* how did they close the deal with the exclusive
> owner of both the Linux and git names? A mystery! */
>
> So what happened to hg… Most of its goodies went into git, or were
> transformed into user-friendly procedures (such as "Git Flow"). Some
> features were lost (such as sequential version numbers), but that is
> compensated for by the UI. Git itself has evolved. So there's nothing to
> regret - besides a few nights spent installing and learning Mercurial))
>
> There is still something interesting and IMHO worth seeing: Fossil. It is
> a very small and cute SCM, packed into just one exe file, and the whole
> repository is packed into a single database file. You can carry everything
> on a flash drive. Otherwise, it looks quite similar to git - or Mercurial.
> It can even be its own web server, as Mercurial can. I use it for small
> personal projects and for quick sprints at work.
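>
> For illustration, a minimal sketch of that single-file model, assuming the
> fossil executable is on your PATH (names are made up):
>
>     # create a repository - the whole thing lives in this one file
>     fossil init myproject.fossil
>     # open a working checkout of it in an empty directory
>     mkdir work && cd work && fossil open ../myproject.fossil
>     # serve the built-in web UI for the repository
>     fossil ui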
>
> Regards,
> – pa
>
>

I've used quite a few, like many others. Perforce at the command line is not intuitive, and neither is ClearCase. Git, too, can be difficult to use from the command line. Yes, it is distributed; the background is that open-source developers are themselves distributed, and keeping track of changes was difficult with a single central server, where even a change to a comment could create a transaction needing the server's attention. Also, some shops customized their process to require review before code is checked in (a process with definite merits), but that workflow can get in your way without warning. That is what leads to command-line intervention to coerce Git, Perforce, and the rest.

In the past, when I switched from CVS to SVN, I had the same feeling. I guess developers don't want to be bothered too much with the innards of these systems; that, at least, is my mindset.

BitKeeper was proprietary and headed toward full commercialization; that is when the Git effort started. It is not a brand-new idea, though: someone in NZ had chalked out a plan and a prototype before Git was born.

If a source control system has hundreds of commands, it is not for me; intuitive GUI-based clients are all I depend on. Git has a few GUI flavors, as P4 does. And while distributed management helps the repository, it makes merging quite a bit more difficult if someone sits on her local cloned repository for too long.

-pro


Excellent writeup Jan. I’ll buy one. Can I get two?

On Sun, Mar 19, 2017, 6:54 PM Jan Bottorff wrote:

> I use a git client, SourceTree. It's free, it's awesome, and if you need
> to integrate with a real project management system you can purchase Jira.
>
> A vastly better architectural feature of git (and friends) is that file
> identity is tracked based on content (a hash), not a filename. This means
> you can take a tree of files, move them around, rename them, do a commit,
> and git perfectly keeps the history of each file. Lots of version control
> systems handle file renames and directory-structure changes poorly. In
> git, the directory structure is more an attribute of a commit. This also
> means you get almost perfect deduplication of identical files.
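>
> A quick way to see the content-addressed model in action (file and path
> names here are just hypothetical):
>
>     # two files with identical content produce the identical blob hash,
>     # so the content is stored only once in the object database
>     cp driver.c backup_copy.c
>     git hash-object driver.c backup_copy.c
>     # move a file and commit; git later reports the rename from the
>     # content match and can follow the file's history across it
>     git mv driver.c src/driver.c
>     git commit -m "move driver.c into src/"
>     git log --follow --oneline -- src/driver.c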
>
> Another feature of git is that you will often clone the whole repository
> when making a local copy. This means you have the whole history on your
> local machine, so you are not stuck twiddling your thumbs if you need to
> look at history and the central repository server is inaccessible. You can
> also do commits, or make branches, or whatever, without a connection to a
> central server, and later push the commits when the server is accessible
> again. This means you can make a branch, change code, and make nice
> bite-size commits to that code, all while flying over the ocean in a plane
> or vacationing far away from the world (or not).
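>
> For example, all of this works with no network at all once you have a
> clone (the repository URL and branch name are only examples):
>
>     git clone https://example.com/team/driver.git   # one time; full history
>     cd driver
>     git log --oneline --all                  # browse the entire history locally
>     git checkout -b wip/airplane-hacking     # branch while offline
>     git commit -am "bite-size change #1"     # commit while offline
>     git commit -am "bite-size change #2"
>     git push -u origin wip/airplane-hacking  # later, when the server is reachable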
>
> Other pluses of git and friends: Visual Studio has built-in support with a
> GUI interface. The GUI interface is not as good as SourceTree, except in
> one case: for the deltas of a Unicode file, VS transparently displays the
> text, while SourceTree treats the file as binary. Git and friends also
> don't need a remote repository server; you can have local repositories.
> Actually, git always uses a local repository, and you may or may not sync
> to a remote one. For almost every code project I make, even "experiments",
> I immediately create a local git repository to track changes, which takes
> 30 seconds or less of work. This lets you snap the current state, or make
> a branch, try some experiment, and then discard the changes if they were a
> bad idea.
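>
> The 30-second version of that, as a rough sketch:
>
>     cd my-experiment
>     git init
>     git add .
>     git commit -m "snapshot before experimenting"
>     git checkout -b try-crazy-idea     # hack away on a throwaway branch
>     # if the idea was bad, drop the branch and go back to the snapshot
>     git checkout master
>     git branch -D try-crazy-idea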
>
> Even better, SourceTree (and perhaps git) has a facility it calls cherry
> picking, which means you can instantly look through the changes to your
> code and revert or commit individual change blocks, not just the whole
> file. You can have your debug prints in your code, do a cherry-pick
> commit, and not commit your debugging code, even though it's still in your
> working copy. Cherry picking is incredibly useful when you are going along
> focused on change A and, for whatever reason, make a change that really
> should be in change B. When you go to commit, you can easily split things
> into multiple commits, so you can keep your commits small and focused on
> one logical change. This is a VAST improvement over systems where you can
> only easily commit a whole file.
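>
> In SourceTree this is point-and-click; in plain git the closest equivalent
> I know of is interactive staging, roughly like this:
>
>     # stage only the hunks that belong to change A; answer y/n per hunk
>     git add -p
>     git commit -m "change A only"
>     # the debug prints are still in the working copy, just not committed;
>     # repeat for change B, or leave the leftovers unstaged
>     git add -p
>     git commit -m "change B"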
>
> Git and SourceTree are also FAST. Like a couple years ago I checked out
> the Linux kernel source tree (I believe without history) in 10 minutes,
> over the Internet. SourceTree pretty much shows changes instantly.
>
> And still another great architectural feature of git and friends is that
> each commit's hash covers the previous commit hashes, which makes a chain
> that can't be modified. You can't go back in the history, make a change,
> and have the following commit hashes still match. This means git is
> intrinsically protected against tampering with the history, and can also
> easily check the integrity of the history. I'm not sure this level of
> integrity verification would stand up in court, as I suppose you could
> fake timestamps and rebuild the whole history (with different hashes), but
> if you have a commit hash value, that hash can, practically speaking, only
> be produced by one specific set of commits.
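>
> You can see the chain for yourself: each commit object records its
> parent's hash, and the whole object store can be re-verified (a quick
> illustration, nothing project-specific):
>
>     # print the raw commit object: tree hash, parent hash(es), author, message
>     git cat-file -p HEAD
>     # walk the chain: every entry's hash covers its parent's hash
>     git log --pretty=format:"%h parent:%p %s" -5
>     # recompute and check the hashes of everything in the object database
>     git fsck --full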
>
> Git and friends do have a few things I'm not a fan of. High on the list:
> there is no incrementing version or commit number. A commit identifier is
> a hash, and has no relationship to the previous commit identifier. This
> means you can't really use source-code commit identifiers as a
> customer-facing version identifier. You can put a commit identifier in
> your binary, and it will refer to the commit that matches the source code
> the binary was built with, and you can make your own branches and tags for
> customer-facing version numbers. It's sometimes a little annoying that I
> can't fix a typo in a previous commit comment, but that's just part of the
> architecture that assures repository integrity, so I have accepted it.
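>
> One common workaround (the tag name here is made up) is to hang
> human-readable tags on commits and let git derive a build string from them:
>
>     git tag -a v2.1.0 -m "customer release 2.1.0"
>     # prints something like v2.1.0-14-g3adf90c: tag, commits since tag, short hash
>     git describe --tags --long
>     # or just stamp the exact commit id into the binary at build time
>     git rev-parse --short HEAD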
>
> What's odd is that I was not such a fan of git when I first started to use
> it a tiny bit, and I'm not sure I would be such a fan if I didn't have the
> SourceTree interface available. Now that I better understand some of the
> vastly better architectural features of git, I am a huge fan. Did I also
> mention that the git core is open source, and there are multiple clients
> available from multiple vendors using the same repository format? So I
> think the chances are very good that a repository will not become
> inaccessible because some vendor stopped supporting a product. I would be
> sad if SourceTree went away, though.
>
> Jan
>
> On 3/18/17, 11:03 AM, xxxxx@osr.com wrote:
>
> So, colleagues… which DVCS wins for you? Why? Why would choosing
> the other one be a major mistake?
>
> Tell us your tales… No votes for P4 or CVS allowed. Just git and hg,
> please.
>
> Peter
> OSR
> @OSRDrivers
>

That is because you are confusing commits to your local repo with commits
to origin. If you use one of the well-thought-out git workflows, all of
your development happens in feature branches on local developer repos, the
commit history of those feature branches is generally of interest only to
the developer, and the developer should be free to amend or squash those
commits as appropriate. What is of interest to everyone else besides the
developer is the merge request for that feature branch into one of the
mainline branches on origin (typically master or develop, depending on
which workflow you are using). That merge request should be as clean as
possible. Nobody else really cares about the detours and missteps in the
commit history of your local repo's feature branch that got you to the
point where you are ready to push your changes; they care about the diffs
between your feature branch and its target branch.
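
To make that concrete, here is a rough sketch of tidying a local feature
branch before pushing it for a merge request (branch names are only
examples, not prescriptions):

    git checkout feature/fix-dpc-timeout
    # collapse the local detours and missteps into a few coherent commits
    git rebase -i origin/develop
    # publish the cleaned-up branch and open the merge request against develop
    git push -u origin feature/fix-dpc-timeout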

If you are mucking with the commit history of origin branches you are doing
something dreadfully wrong.

Mark Roddy

On Sun, Mar 19, 2017 at 11:49 PM, wrote:

> Love your write-up, Jan. Thanks.
>
> About this, though:
>
>

[quote]

> It's sometimes a little annoying that I can't fix a typo in a previous
> commit comment, but that's just part of the architecture that assures
> repository integrity, so I have accepted it
>
>
> One of the things I hate about git is that the architecture does nothing
> to ensure repo integrity. Wanna re-write history? Change the commits that
> were actually made to make the process more aesthetically pleasing? No
> prob… it's git.
>
> You can re-write any commit message you want:
>
> You can use “commit --amend” for the last commit.
>
> For older commits, you can take out the big “rebase -i” hammer
>
> (I would be happier if git HAD no "rebase" command… but that's life)
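>
> For the record, the two hammers mentioned above look roughly like this
> (purely illustrative):
>
>     # reword (or otherwise amend) the most recent commit
>     git commit --amend
>     # rewrite the last three commits: reword, reorder, squash, drop...
>     git rebase -i HEAD~3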
>
> Peter
> OSR
> @OSRDrivers
>
>
> —
> NTDEV is sponsored by OSR
>
> Visit the list online at: http:> showlists.cfm?list=ntdev>
>
> MONTHLY seminars on crash dump analysis, WDF, Windows internals and
> software drivers!
> Details at http:
>
> To unsubscribe, visit the List Server section of OSR Online at <
> http://www.osronline.com/page.cfm?name=ListServer&gt;
></http:></http:>

Mark Roddy wrote:

That is because you are confusing commits to your local repo with
commits to origin. If you use one of the well-thought-out git
workflows, all of your development happens in feature branches on local
developer repos, the commit history of those feature branches is
generally of interest only to the developer, and the developer should
be free to amend or squash those commits as appropriate. What is of
interest to everyone else besides the developer is the merge request
for that feature branch into one of the mainline branches on origin
(typically master or develop, depending on which workflow you are
using). That merge request should be as clean as possible. Nobody else
really cares about the detours and missteps in the commit history of
your local repo's feature branch that got you to the point where you
are ready to push your changes; they care about the diffs between your
feature branch and its target branch.

That, sir, is a matter of opinion on which developers most certainly do
not agree. There is incredibly valuable historical information
contained in the “detours and missteps in the commit history”, and the
philosophy that those detours do not belong in the global history is, in
my opinion, misguided. A year from now, when an unrelated developer
asks “why did you do that?” and the original developer is gone, there
may be no way to answer that question.


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.

“There is incredibly valuable historical information
contained in the “detours and missteps in the commit history”, and the
philosophy that those detours do not belong in the global history is, in
my opinion, misguided. A year from now, when an unrelated developer
asks “why did you do that?” and the original developer is gone, there
may be no way to answer that question.”

This statement will only be supported by those programmers who supplement version control with #ifdefs. Why delete dead code when we can #ifdef it out? Or keep two versions of a function, "before" and "after", and to hell with diff.

Well, of course you could insist that no developer ever amend or squash any
commits on their feature branches. You could also insist that they commit
every change they make to their feature branch, no matter how trivial,
stupid, misguided, or irrelevant, on the off chance that searching the
entrails of their development efforts might contain this alleged buried
treasure.

Or you could consider feature branch merge requests as what is of interest.
Those merge requests should of course be code reviewed, and if it is a
complete f'ing mystery how the developer got to the particular solution
they came up with, an explanation should be recorded in the review. Since
you are using git with all of its quite nice server-side enhancements from
GitLab or Bitbucket or other vendors for automated and integrated code
review and issue tracking, all of that information will be recorded with
the merge request.

I think there is a lot of historical baggage in the words "commit" and
"branch" and "merge" that confuses people when they first start using git.
It certainly confused me for a while. In the central-repo model, all
commits were sacred, branches were rare, and merges were nightmares. In
git, branching and merging are commonplace. Commits are always local, not
global, until they are pushed upstream. If you are pushing commits directly
to your main branches, you are doing it wrong; you should be using feature
branches. Feature branches are short-lived - days, not weeks or months -
and they encapsulate a single bug fix, or the work for one story in an
iteration. The interesting history is in the merge of that feature branch
back into the development main branch, not in the edits the developer made
along the way.
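
As a sketch of that lifecycle under one such workflow (branch names are
assumed, not prescribed; a hosted merge request would normally perform the
final merge for you):

    git checkout -b feature/story-123 develop   # short-lived branch off develop
    # ...a few days of commits...
    git push -u origin feature/story-123        # open a merge request from it
    # the interesting, permanent history is the merge commit on develop:
    git checkout develop
    git merge --no-ff feature/story-123
    git branch -d feature/story-123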

Mark Roddy


You’ve made a very well-reasoned case. This case even makes “rebase” sound like something that’s not inherently evil.

I appreciate the time you put into those two replies.

I’m not saying I’m convinced. I’m just saying I find your argument strangely enlightening.

Peter
OSR
@OSRDrivers

Thinking in the paradigm is key to successful use of any of these tools.

I'm not sure that I like the git paradigm, but it seems inevitable that it will be a fact of life, in the same way that G3 OO languages are a fact of life regardless of what one thinks about their design principles.

Agreement as to which method is best will be as elusive as in the C vs. C++ debates, I think.


Rebase is what you make of it: git's greatest strength or its greatest
weakness, depending on what you do with it.

I agree with Mark about the distributed model making some traditional
practices around branches etc. unimportant at best and problematic at
worst. At least I think that's what he's saying. Nobody except the dev who
wrote the commits in a feature branch will ever care about or likely even
understand them once they're pushed - squash them before you push.

If you use git like a centralized source control system, the flexibility it
provides will seem pointless and overly complicated, IMO.

The two things I would suggest anyone considering git do before they run
with it (aside from considering integration and plug-ins, which will always
favor git these days) are: revert a complicated merge after you push it,
then make some more changes and remerge; and understand the problems with
rebasing work others have consumed.

Those are the two git issues that most people encounter, IMO. Neither is
complicated, just different.
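
A minimal sketch of the first exercise, with made-up names, including the
classic gotcha:

    # undo a bad merge that has already been pushed, keeping history intact
    git revert -m 1 <merge-commit-sha>
    git push
    # ...fix things up on the feature branch...
    # gotcha: before remerging, first revert the revert, otherwise the
    # previously merged changes stay suppressed by the earlier revert
    git revert <sha-of-the-revert-commit>
    git merge feature/big-refactor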

mm


xxxxx@broadcom.com wrote:

Why delete dead code when we can #ifdef it out?

Sometimes there is no option but to delete it. Code evaluators hate dead code, and regard “#if 0” and “#ifdef NEVER_DEFINED” as terminal code damage.

Seems that Mr. Boyce missed the implied sarcasm tag around Mr. Grig's description of some programmers' approach.



Peter
OSR
@OSRDrivers

Rebase is the greatest thing since sliced bread. When I do a feature or a fix, it may take a long time to finish, and during that time the codebase may advance a lot. A fix is usually a single commit anyway; a feature could include multiple commits. Before I push the result, I rebase it onto the current top. Any merge conflicts are fixed for each commit in the chain, and each commit compiles.

Also, if I encounter any drive-by fixes and/or improvements, each goes into a separate commit.

The "git add -p" command is a great tool for separating the wheat from the chaff.

Did you know you can selectively apply a diff from another version of a file to the working copy with the "checkout -p" command?
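
A rough sketch of that routine (remote, branch, and file names are only
examples):

    git fetch origin
    git rebase origin/master          # replay my commits on the current top,
                                      # fixing conflicts per commit in the chain
    git add -p                        # stage the wheat, leave the chaff
    git commit -m "drive-by fix: spelling in README"
    # pull selected hunks from another version of a file into the working copy
    git checkout -p origin/master -- src/driver.c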

Indeed - you own your local commits. They are for your use. You own your
local branches too; they have no existence on the server until you push a
branch up to the server, and a branch becomes of interest to others only
when you choose to make it available there.

I could go on. What makes all of this git shit very interesting is what has
happened in terms of CI tools on the server side. By using feature branch
workflows (and there are of course many) you can plug your development
processes directly into the CI pipeline tools now available from both
Bitbucket and GitLab/GitHub. (Bitbucket seems to be the me-too here: GitLab
innovates, Bitbucket adopts.)

For example, what we did with our projects on our GitLab server is plug
feature branches into a pipeline that performs automated validation testing
on every push, and that requires those tests to pass and code review to be
complete before the branch can be merged into one of the main branches. The
main branches themselves of course run automated and comprehensive build
testing on every new commit.

Mark Roddy


Ugh. This could easily and quickly turn into a debate about the cost/benefit of particular hosting services. Which would, by necessity, in turn become a debate about bug tracking software. Which would rapidly devolve into a debate about agile methodology, kanban boards, HipChat, and other forms of foolishness (he said, choosing the nicest possible word he could think of). Which would very quickly result in me filling my posts with curse words and locking the thread.

So, I’m not going there.

OK… On a more positive note: Let’s hear it for Jenkins!

The actual logistics depend on your workflow. Right now, we're looking at pushing to a build server that'll trigger the build, CA, and tests… including pushing tests off onto target hardware platforms. Then, assuming the tests all pass, have the build server push the results to our origin.

We are, ahem, “quite far” from making that happen. As in “we don’t have the build server provisioned yet.” But, let’s ignore the details.

What would be helpful to the community as a whole would be for somebody to provision a Windows VM with Jenkins and the EWDK, plus some tools to allow the basic driver tests to run (like, for example, devfund), and make this work available to the community. Too many people in this space work in their own little worlds and never share. It's always been that way in the world of drivers; I've never understood it. Perhaps the community has the same percentage of "people who share" as other communities, but suffers because it's orders of magnitude smaller.

Peter
OSR
@OSRDrivers

You are worried about people who never share in the driver community? Welcome to the financial industry, where each firm generally believes that it has specific and compelling unique knowledge. This belief is not generally based on any fact, and certainly not on any education, but usually on the need to eat by scraping pennies off of someone else's boots.

Re your specific problem: if you contact me offline, I can probably arrange something that might work for all of us.

I must put it on record that I abhor the kanban board and all the foolishness that goes with it.


GitLab Community Edition is free and full-featured. You can stand up a
server in house in a few hours, or you can spin up a Docker container in
more like one hour.
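
For example, something along these lines gets the Docker route going (the
image is the official gitlab-ce image; ports and host paths are up to you):

    docker run -d --name gitlab \
      -p 80:80 -p 443:443 -p 2222:22 \
      -v /srv/gitlab/config:/etc/gitlab \
      -v /srv/gitlab/data:/var/opt/gitlab \
      gitlab/gitlab-ce:latest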

You really should do yourself a favor and check it out.

Mark Roddy


To add to what Mark said: You can stand up a GitLab server at DigitalOcean
in less than five minutes.

https://www.digitalocean.com/community/tutorials/how-to-use-the-gitlab-one-click-install-image-to-manage-git-repositories


Marion Bond wrote:

I must put it on record that I abhor the kanban board and all the
foolishness that goes with it.

Although it’s possible to go overboard, I have found Kanban boards to be
a great way of tracking progress. I have never been known for my great
memory (I forget why), so I need some kind of a “to do” list to keep me
on track. As long as I take a bit of time to chop a large project into
bite-sized tasks, updating a Kanban board is both useful and satisfying,
because it gives you concrete feedback on what’s been done.

Your mileage may vary, of course, but for me it’s a great tool.


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.