Apparently, we were using the package containing the CRT _source code_
to determine the version of the tools.
Also apparently, VS does not guarantee that that package has the same
version as the tools package.
We should use the version of the tools package instead.
As before, a minor refactor:
* I started off by removing the Monarch/Peasant architecture, with the
goal of moving its functionality into `WindowEmperor` and deduplicating
it there.
* Since I needed a replacement for the Monarch (which ensures that
there's a single instance), I wrote single-instance code with an NT
mutex, yeeting data across processes with `WM_COPYDATA` (see the sketch
after this list).
* This resulted in severe threading issues, because it now started up
way faster. The more I tried to solve them, the deeper I had to dig,
because you can't just put a mutex around `CascadiaSettings`.
I then tried to see if WinUI can run multiple windows on a single
thread and, as it turns out, it can.
So, I removed the multi- from the window threading.
* At this point I had dug about a mile deep and brought no ladder.
So, to finish it up, I had to clean up the entire eventing system
around `WindowEmperor`, all the coroutines, and all the callbacks.
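The single-instance handoff is conceptually simple. Below is a minimal
sketch of the approach, not the actual `WindowEmperor` code: a named NT
mutex marks the first instance, and later instances forward their command
line to it via `WM_COPYDATA`. The mutex name, window class, and dispatch
hook are all illustrative.

```cpp
// Minimal sketch of single-instance handoff via a named mutex + WM_COPYDATA.
// The mutex name, window class, and dispatch hook below are illustrative,
// not the identifiers WindowEmperor actually uses.
#include <Windows.h>
#include <string>
#include <string_view>

static constexpr wchar_t kMutexName[] = L"Local\\MyTerminalSingleton";
static constexpr wchar_t kWindowClass[] = L"MyTerminalMessageWindow";

// The first instance registers kWindowClass as a message-only window
// (HWND_MESSAGE parent) and receives forwarded command lines here.
LRESULT CALLBACK MessageWndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    if (msg == WM_COPYDATA)
    {
        const auto* cds = reinterpret_cast<const COPYDATASTRUCT*>(lParam);
        const std::wstring_view commandline{ static_cast<const wchar_t*>(cds->lpData),
                                             cds->cbData / sizeof(wchar_t) };
        // ...hand the command line to the window management code...
        return TRUE;
    }
    return DefWindowProcW(hwnd, msg, wParam, lParam);
}

// Returns true if we are the first instance and should keep running.
bool EnsureSingleInstance(const std::wstring& commandline)
{
    // Creating a mutex that already exists succeeds, but GetLastError()
    // reports ERROR_ALREADY_EXISTS.
    const HANDLE mutex = CreateMutexW(nullptr, FALSE, kMutexName);
    if (GetLastError() != ERROR_ALREADY_EXISTS)
    {
        // First instance: register kWindowClass/MessageWndProc, create the
        // message-only window, and run the message loop. Keep the mutex
        // alive for the lifetime of the process.
        return true;
    }

    // Another instance already owns the mutex: find its message window and
    // yeet our command line over to it.
    if (const HWND target = FindWindowW(kWindowClass, nullptr))
    {
        COPYDATASTRUCT cds{};
        cds.cbData = static_cast<DWORD>(commandline.size() * sizeof(wchar_t));
        cds.lpData = const_cast<wchar_t*>(commandline.data());
        SendMessageW(target, WM_COPYDATA, 0, reinterpret_cast<LPARAM>(&cds));
    }

    CloseHandle(mutex);
    return false; // exit; the existing instance handles the request
}
```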
Closes #16183
Closes #16221
Closes #16487
Closes #16532
Closes #16733
Closes #16755
Closes #17015
Closes #17360
Closes #17420
Closes #17457
Closes #17799
Closes #17976
Closes #18057
Closes #18084
Closes #18169
Closes #18176
Closes #18191
## Validation Steps Performed
* It does not crash ✅
* New/close tab ✅
* New/close window ✅
* Move tabs between windows ✅
* Split tab into new window ✅
* Persist windows on exit / restore startup ✅
PackageES is deprecated by known scourge-on-earth OneBranch, and is now
the cause of some non-compliance.
I got permission from them to open-source it, so that's coming next.
For now, we can just depend on a package based on our code, which is in
turn based on theirs.
Tested and working for C++ (DLL, EXE), C#, NuGet and MSIX.
## Summary of the Pull Request
Improves Quick Fix's suggestions to use the WinGet API and actually query
winget for packages based on the missing command.
To interact with the WinGet API, we need the
`Microsoft.WindowsPackageManager.ComInterop` NuGet package.
`Microsoft.WindowsPackageManager.ComInterop.Additional.targets` is used
to copy over the winmd into CascadiaPackage. The build variable
`TerminalWinGetInterop` is used to import the package properly.
`WindowsPackageManagerFactory` is used as a centralized way to generate
the winget objects. Long-term, we may need to do manual activation for
elevated sessions, which this class can easily be extended to support.
In the meantime, we'll just use the normal `winrt::create_instance` on
all sessions.
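As a rough sketch (not the actual implementation), the factory can boil
down to a single `winrt::create_instance` call per winget class. The
CLSID below is a zeroed placeholder standing in for the values published
in winget-cli's `PackageManager.idl`, and the namespace alias is just for
brevity.

```cpp
// Sketch only: funnel winget object creation through one place so that
// elevated sessions can later switch to manual activation.
// CLSID_PackageManager is a zeroed placeholder; the real CLSIDs live in
// winget-cli's PackageManager.idl.
#include <Windows.h>
#include <winrt/Microsoft.Management.Deployment.h>

namespace WPM = winrt::Microsoft::Management::Deployment;

static const winrt::guid CLSID_PackageManager{
    0x00000000, 0x0000, 0x0000, { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }
};

struct WindowsPackageManagerFactory
{
    // Unelevated sessions can use plain COM activation.
    static WPM::PackageManager CreatePackageManager()
    {
        return winrt::create_instance<WPM::PackageManager>(CLSID_PackageManager, CLSCTX_ALL);
    }
};
```

The point of centralizing creation is that swapping `create_instance` for
manual activation in elevated sessions later only has to touch this one
spot.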
In `TerminalPage`, we conduct the search asynchronously when a missing
command is found. Search results are limited to 20 packages. We try to
retrieve packages with the following filters set, falling back to the
next step if nothing is found:
1. `PackageMatchField::Command`,
`PackageFieldMatchOption::StartsWithCaseInsensitive`
2. `PackageMatchField::Name`,
`PackageFieldMatchOption::ContainsCaseInsensitive`
3. `PackageMatchField::Moniker`,
`PackageFieldMatchOption::ContainsCaseInsensitive`
This aligns with the Microsoft.WinGet.CommandNotFound PowerShell module
([link to relevant
code](9bc83617b9/src/WinGetCommandNotFoundFeedbackPredictor.cs (L165-L202))).
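Roughly, and ignoring error handling and the async plumbing in
`TerminalPage`, the fallback sequence above can look like the following.
`CreateFindPackagesOptions` and `CreatePackageMatchFilter` are
hypothetical helpers on the factory described earlier, and the
synchronous `FindPackages` stands in for the async call used in practice.

```cpp
// Sketch of the Command -> Name -> Moniker fallback search (illustrative only).
#include <utility>
#include <vector>
#include <winrt/Windows.Foundation.Collections.h>
#include <winrt/Microsoft.Management.Deployment.h>

using namespace winrt::Microsoft::Management::Deployment;

std::vector<winrt::hstring> FindSuggestedPackages(const PackageManager& packageManager,
                                                  const winrt::hstring& missingCommand)
{
    // Connect to the public winget catalog.
    const auto catalogRef = packageManager.GetPredefinedPackageCatalog(PredefinedPackageCatalog::OpenWindowsCatalog);
    const auto catalog = catalogRef.Connect().PackageCatalog();

    // Field/option pairs tried in order until one of them yields results.
    static constexpr std::pair<PackageMatchField, PackageFieldMatchOption> fallbacks[] = {
        { PackageMatchField::Command, PackageFieldMatchOption::StartsWithCaseInsensitive },
        { PackageMatchField::Name, PackageFieldMatchOption::ContainsCaseInsensitive },
        { PackageMatchField::Moniker, PackageFieldMatchOption::ContainsCaseInsensitive },
    };

    std::vector<winrt::hstring> suggestions;
    for (const auto& [field, option] : fallbacks)
    {
        // Hypothetical factory helpers; the winget options/filter objects are
        // COM-activated, so they also go through the centralized factory.
        auto options = WindowsPackageManagerFactory::CreateFindPackagesOptions();
        options.ResultLimit(20);

        auto filter = WindowsPackageManagerFactory::CreatePackageMatchFilter();
        filter.Field(field);
        filter.Option(option);
        filter.Value(missingCommand);
        options.Filters().Append(filter);

        const auto matches = catalog.FindPackages(options).Matches();
        if (matches.Size() > 0)
        {
            for (const auto& match : matches)
            {
                suggestions.emplace_back(match.CatalogPackage().Id());
            }
            break; // stop at the first filter that produced results
        }
    }
    return suggestions;
}
```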
Closes #17378
Closes #17631
Support for elevated sessions tracked in #17677
## References
- https://github.com/microsoft/winget-cli/blob/master/src/Microsoft.Management.Deployment/PackageManager.idl:
winget object documentation
## Validation Steps Performed
- [X] unelevated sessions --> winget query performed and presented
- [X] elevated sessions --> nothing happens (got rid of `winget install
{}` suggestion)
Same justification as #17749.
We will revert this when either OneBranch Custom Pools become
fit-for-purpose or they upgrade to VS 17.11. Or the heat death of the
universe.
Now that the store displays changelogs, it seems unfair for us to not
put something in here.
These are intended to give a rough idea, not to be perfect, as they are
not the product of my hours of changelog writing (since I am lazy and
put that off until the day of release 🫣)
Thanks to a string of compiler bugs, we had to use an older container
image that shipped with VS 17.9.
Unfortunately, that container image is falling further and further out
of date. The build agents don't cache it any longer, so they spend 30-45
minutes of every build pulling it from the registry.
With the changes to ConPTY in #17510 removing the need for til::bitmap,
we no longer need to work around the compiler bugs it exposed.
Furthermore, 17.10.6+ has a much more robust and presumably "working"
compiler.
`nuget restore` actually runs through MSBuild! However, #15855 added a
dependency from our project on a system-installed _or locally detected_
`vcpkg.targets` (or `.props`).
Our build runs `nuget restore` before finding or installing vcpkg, so
the rules in our project file would try to import vcpkg before it had
been found (or installed).
On build agents with vcpkg installed via the VS workload, this was fine:
we would import the one that came with VS and go on our merry way. On
build agents where it needs to be installed locally, it could not be
imported.
The fix in this PR is to install/bootstrap vcpkg before running nuget.
I tried to isolate the vcpkg rules to only run _in the absence of
nuget_, but that didn't work.
This pull request removes the following vendored open source, in favor
of getting it from vcpkg:
- CLI11 2.4
- jsoncpp 1.9
- fmt 7.1.3
- gsl 3.1 (not vendored, but submoduled--arguably worse!)
Now that Visual Studio 2022 includes a built-in workload for vcpkg, the
onboarding process is much smoother. Terminal should only require the
vcpkg workload.
I've added some build rules that detect vcpkg via VS and via the user's
environment before falling back to a location in the source tree. The CI
pipeline will fall back to installing and bootstrapping vcpkg in
dep/vcpkg if necessary.
Some OSS has not been (and will not be) migrated:
- wyhash: ours is included directly in til/hash
- pcg_random: we have a stripped down copy compared to vcpkg
- stb_rect: vcpkg only ships *all of STB*; ours is a stripped down copy
- chromium numerics: vcpkg does not ship Chromium, especially not this
tiny fraction of Chromium
- dynamic_bitset and libpopcnt: removing in #17510
- interval_tree: no vcpkg equivalent
To support the needs of the inbox Windows build, I've split up our vcpkg
manifest into dependencies for all projects and dependencies just for
Terminal. To support this, we now offer a `terminal` feature. The vcpkg
rules in `common.build.pre.props` are set up to turn it on, whereas the
build rules we eventually write for the OS will not be.
Most of the work is concentrated in `common.build.pre.props`.
This fixes some more issues not properly covered by #17526:
* Fixed `_locComment_text` comments being effectively ignored.
* Fixed line splitting of comments (CRLF vs LF).
* Fixed BOM suppression.
* Fixed support for having multiple `{Locked=...}` comments.
* Modified `Generate-PseudoLocalizations.ps1` to find the .xml files
(as opposed to the .resw files used for the other translations).
* Added support for the new format by adding new XPath expressions,
and stripping comments/attributes as needed.
* Fixed `PreserveWhitespace` during XML loading.
* Fixed compliance with PowerShell's strict mode.
## Validation Steps Performed
Ran it locally and compared the results. ✅
We have to run in an older OneBranch Windows container image due to
compiler bugs.
This change prevents us from having to wait for the container image to
download for build legs that _aren't_ using the compiler.
This allows us to remove the dependency on the `Terminal.Internal`
repository.
I have also added some parameters to the build pipeline to ease testing.
This also updates the localization pipeline to check in translations for
the PDPs.
Right now, the primary source for PDPs is the Terminal.Internal
repository. They are submitted from there, and pulled back in as though
they were destined for the internal repo. We rename them on disk prior
to loc check-in to pretend they live in this repo.
Once I submit a change request to the Touchdown team to update the paths
in their backend, I will follow up with another pull request that
updates the remaining build steps to account for that.
This centralized all our ESRP calls in one file, which will make it
easier in the future when we are invariably required to change how we
call it again.
This required me to push a bunch more parameters through the build
pipeline, but it gave me the opportunity to define them as variables
that can be set at queue time.
This is required for us to move off Entra ID Application identity.
(cherry picked from commit 2e7c3fa3132fa3bff23255480c593cbd3432eee5)
This was approved in #16957, so I will merge with one signoff.
Work is ongoing to remove individually-authenticated service accounts
from some pipelines. This moves us closer to that goal.
Tested in Nightly 2403.28002.
Right now, the localization submission pipeline runs every night and
sends our localizable resources over to Touchdown. Later, release builds
pick up the localizations directly from Touchdown, move them into place,
and consume them.
This allowed us to avoid having localized content in the repository, but
it came with too many downsides:
- Users could not contribute additional localizations very easily
- We use the same release pipeline and Touchdown configuration for every
branch, so strings needed to either roughly match or _entirely match_
across an entire set of active release branches
- Building from day to day can pull in different strings, making the
product not reproducible
- Calling TDBuild during release builds requires network access from the
build machine (so does restoring NuGet packages, but that's neither
here nor there)
- Local developers and users could not test out other languages
This pull request moves all localization processing into the nightly
build phase and introduces support for checking loc in and submitting a
pull request. A new pull request will not be created if an unmerged one
already exists.
Anything we needed to do on release is now done overnight. This includes
moving loc files into the right places and merging the Cascadia
resources with the Context Menu resources (so that we can work around
fewer translations being chosen for the app than for the context menu;
see #12491 for more info).
There are some smaller downsides to this approach and its
implementation:
- The first commit is going to be huge
- Right now, it only manages a single branch and uses a force push; if a
PR is not reviewed promptly, it will be force-pushed and you cannot see
the day-to-day changes in the strings. Hopefully there won't be any.
I've taken great care to ensure that repeated runs of this new pipeline
will not result in unnecessary whitespace changes. This required
changing how we merge ContextMenu.resw into CascadiaPackage to always
use the .NET XmlWriter with specific flags.
NOTE that this does not allow users to _contribute_ translation fixes
for the 10 languages which we are importing. We will still need to pull
changes out of those files and submit them as bugs to the localization
team separately, and hope they come back around in another nightly
build. However, there is no reason users cannot contribute
_non-Touchdown_ languages.
Welp, would you look at that? We never actually supported "canary"
feature settings. Canary's been defaulting to the "Dev" config since
inception.
There are a couple of things that are supposed to be on only in Dev and
not in Canary. They clearly haven't mattered, but better safe than sorry!
This pull request removes the need for the SettingsModel tests to run in
a UAP harness and puts them into the standard CI rotation.
This required some changes to `Run-Tests.ps1` to ensure that the right
`te.exe` is selected for each test harness. It's a bit annoying, but for
things that depend on a `resources.pri`, that file must be in the same
directory as the EXE that is hosting the test. Not the DLL, mind you,
the EXE. In our case, that's `TE.ProcessHost.exe`.
The bulk of the change is honestly namespace tidying.
Co-authored-by: Mike Griese <migrie@microsoft.com>
Co-authored-by: Leonard Hecker <lhecker@microsoft.com>
Due to things outside our control, sometimes the Package phase fails
when VPack publication is enabled. Because of this, symbols won't be
published. We still want these builds to be considered "golden" and we
are still shipping them, so we *must* publish symbols.