On 2021-02-25 22:35 Stephen John Smoogen wrote:
> Mainly because customers don't want to pay for that work, which is
> considerable. If Red Hat builds it, it is expected to have all kinds of
> 'promises' equivalent to its other products and that is expensive in
> terms of QA, engineering, documentation, various certifications, etc.
> Package growth goes up quickly so if people are complaining about the
> cost of a RHEL license for 4000 src rpms, then what would it be at
> 20,000 to 30,000.
> It is easier to allow the community to choose to do the work it wants
> and then 'consumers' of said repository get what they can.
[Including Valeri] I doubt it. Price is mainly defined by supply and
demand (which is, in turn, driven by how much value the customer puts
behind the product). While production/support costs can put a lower
bound on it, I don't think this is the case for Red Hat.
Anyway, Red Hat would not have to support EPEL packages the same as
core ones - even best-effort support would be enough in many cases. The
point is that EPEL does not only contain nice-to-have packages, but
sometimes provides really important ones.
As an (outdated) example, a decade ago Cyrus with the MySQL backend was
only provided by EPEL. For a more up-to-date example, consider how
difficult it is to run a "plain" CentOS 7 system - it lacks monit,
mbuffer, smem, x2go, various Perl modules, etc.
You EPEL volunteers do outstanding work - I would really like to thank
you for all you did (free of charge) for the CentOS community. Red Hat
should recognize and support your work.
> I think the industry is entering another crux point where 'classical'
> system administration will be in the same class as mainframe/miniframe
> system administration were in the late 1980's and early 1990's with
> Unix systems and then Linux. Our work will remain incredibly important
> to various industries but it will increasingly be a smaller amount of
> 'total deployments'. Which is why so many of our conversations echo so
> much of the USEnet in the early-1990s, where mainframes/miniframes
> admins wondered why companies were not focusing on their industries
> anymore.
Well, in a sense, the new cloud frenzy is something similar to a "remote
mainframe" used with a new type of thin client (the browser). Yeah, I
know this is a very stretched analogy...
I should say that I saw so many services deployed "to the cloud" that
are plain broken/misbehaving that it sometimes worries me. My (naive?)
impression is that we are switching from "few specific services which
work correctly unless something bad happens" to a
"mess-which-more-or-less-happens-to-work, but nobody knows what to do
if it does not" model.
I recently debugged an IPSec tunnel between an on-premises appliance
and an Azure VPN service. The on-premises appliance has extensive
logging and inspection tools, while on the Azure side we had literally
*nothing*. An Azure consultant was brought on board to help with
specific PowerShell snippets, to no avail. After 7+ days, a 3rd level
Microsoft support engineer changed a *private* setting on that VPN
gateway service and the tunnel started working correctly.
In another case, a Win2008R2 machine stopped working on an AWS
instance. No console, no logs. After 2+ weeks of paid "gold/premium"
support from an Amazon engineer, my customer simply decided to detach
the virtual disk, attach it to another machine, and reinstall the
server.
Are we sure this is the way to go?
Don't get me wrong - "the cloud" is the natural place for things such
as web and mail servers. A virtualized domain controller? Mmm... not so
much.
But hey - I understand this is not going to change. The very same
CentOS switch was done to please the cloud vendors, which will have a
more "dynamic" base to rebuild on. But I don't like how Red Hat, rather
than simply producing a different product or profile for cloud needs,
is actively adding complexity at every layer.
Regards.
--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti at assyoma.it - info at assyoma.it
GPG public key ID: FF5F32A8