The internet as platform?
Everything is Crazy has published an article asserting
that ever-increasing bandwidth will eventually overcome Microsoft's Operating
System monopoly. In other words, the application platform moves from the
Operating System to the Internet itself.
There is some evidence to support the notion that Operating Systems will matter
less and less. Google's Gmail is a tantalizing, if relatively simple,
glimpse. Mozilla and Firefox have often been presented as application platforms
in their own right. Certainly the browser is one of the most utilized
components for any computer user. And while the old "the network is the
computer" campaigns ultimately fizzled, as Everything is Crazy's author notes in a
followup, that was because the bandwidth simply wasn't there.
Here's where the argument falls apart a bit for me:
Most users have no desire to be the system administrators of their machines,
and would gladly turn that task over to someone else for a nominal fee. As
bandwidth increases, telcos, cable companies, and others will be in the
perfect position to become application service providers for the average home
user, and said average home user will gladly accept this, as long as the price
isn't too high. I see this as almost inevitable.
It's true, average Joe users are struggling with security pains and becoming
less than happy system administrators. But I just don't see cable companies
and telcos stepping up to that plate. The bottom line, as always, is the
bottom line: the investment to become an application provider would be
substantial, particularly when you factor in the support
costs. Telcos and cable companies have not been particularly good
at consumer tech support and satisfaction so far.
And I don't see a viable return on investment any time soon.
Providers are still looking to maximize their initial investments building and
launching broadband. They are spending most of their time and dollars getting
'triple-play' going to compete with one another while fending off interlopers
such as Vonage and AT&T for voice. The only provider that might
have some ability to test these waters as a variant of the Application Service
Provider is Time-Warner with its AOL division.
Otherwise, third parties probably have the best possibility of getting into
this sort of game. Will we one day do all of our word processing and
spreadsheet work in a browser rather than a traditional desktop app? Maybe.
Or maybe in two or three years things will look far different than we imagined,
presenting other possibilities for people to get (over)excited about.
[/Computers/Internet/#internet_as_app_platform.html]
Comments (0)
Asking the wrong questions
"Asking the wrong questions is the leading cause of wrong answers." That's what my .sig says.
Here's a well-written page on researching and asking technical questions.
The insights esr provides can be applied in other ways as well.
Click here for more.
[/Computers/Internet/#questions.html]
Comments (0)
TCP/IP class
Way back in 2001, Barry McCormick and I wrote up this document
and taught a two session class for NOLUG on
the basics of TCP/IP. Looking at my web stats lately and after doing some
googling about, I've found it's quite a popular download. Since my site has
been rearranged often, I'm just posting this so it can be found again easily.
While some of the material is slightly outdated, it's still a solid
introduction and Barry and I are pretty proud of the work we put into it.
[/Computers/Internet/#tcp_ip_class.html]
Comments (0)
Using OpenBSD CARP and pfsync for inexpensive firewall/router redundancy
Enterprise network admins are probably familiar with Cisco's proprietary HSRP
and the standards-based VRRP, which provide router and firewall
redundancy. This article
describes a way to achieve the same thing using features in the
upcoming OpenBSD 3.5 release. Other commercial firewalls certainly
have similar capability. However, OpenBSD's feature set is becoming
rather compelling.
Smaller businesses can certainly find value in such an approach,
keeping their network available and secure at a fraction of the cost.
Even paying an outside consultant for installation and ongoing
support would be cost effective. Deploy something like this and things remain
comfortable for your Cisco-trained network admins.
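For the curious, the configuration is pleasantly small. Here's a hedged sketch using current OpenBSD syntax (option names have shifted a bit between releases); the addresses, vhid, password, and interface names are made up for illustration:

```shell
# shared virtual IP that fails over between the two firewalls
ifconfig carp0 create
ifconfig carp0 vhid 1 pass sekrit 192.168.1.254 netmask 255.255.255.0

# pfsync keeps the firewall state tables in sync over a dedicated interface
ifconfig pfsync0 syncdev fxp1
ifconfig pfsync0 up
```

The backup box gets the same carp0 configuration with a higher advskew, so it only takes over the virtual IP when the master disappears.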
All of this of course reminds me that I really need to schedule some
time to upgrade my own OpenBSD
firewall.
[/Computers/Internet/Security/#carp_and_pfsync.html]
Comments (1)
Patent Nuttiness
This is a truly
ridiculous patent. Apparently a company called ideaflood.com has managed to patent
subdomains.
*boggle*
So if I decide to set up, say, Jennifer.scottharney.com, I'm supposed
to pay a licensing fee to this company. How did they get this patent
in the first place?
Christopher Falkowski, a legal specialist in these topic areas for Bloomfield
Hills, Mich.-based Rader, Fishman and Grauer (raderfishman.com) says a number
of key requirements must be met to obtain a patent, whether that patent is in
the area of Web hosting operations or any other technical field: The invention
must be new or novel. It must be non-obvious. The persons claiming the patent
must be the inventors. And the patent application must be filed within one
year of a public disclosure or sale.
The patent was apparently issued in 1999. One of the first relevant
RFCs I could find is RFC
805, dated 8 February 1982. Here's the introductory text:
Introduction
A meeting was held on the 11th of January 1982 at USC Information
Sciences Institute to discuss addressing issues in computer mail.
The attendees are listed at the end of this memo. The major
conclusion reached at the meeting is to extend the
"username@hostname" mailbox format to "username@host.domain",
where the domain itself can be further structured.
Hmmm. Besides being an obvious idea, there's clearly prior art.
That's just one RFC out of many and I'm certain there are hundreds of
examples of this use of subdomain naming. Perhaps a search of
The internet archive will provide
some examples.
[/Computers/Internet/#patent_nuttiness.html]
Comments (0)
making picture albums.
I like to use album to make my picture albums.
The command to generate the static picture pages is:
~scotth/album/album.pl -medium 50% -medium_type -known_images -theme ../album/Themes/Blue
once I'm in a directory full of pictures. Makes it simple, really.
[/Computers/Internet/Site_Info/#making_albums.html]
Comments (0)
GPG key
I have decided to start using my GPG key again to sign emails and
such. It's a good thing to use encryption and digital signature technology.
Consequently, I've gone ahead and posted my key
here.
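Day-to-day use amounts to a couple of commands. A minimal sketch; the filenames and the email address identifying the key are hypothetical:

```shell
# clearsign an outgoing message with my key
gpg --clearsign message.txt

# export the public key in ASCII armor for posting on the web
gpg --armor --export scotth@example.com > scotth.asc
```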
[/Computers/Internet/Security/#gpg_key_info.html]
Comments (0)
How I post to the blog
I use Blosxom to run this here
blog. It's simple to set up and does everything I need. And it also means
I can write entries in a text editor "like God intended". I looked at some of
the various blog services but those didn't interest me. I have an apache server and
a domain, after all. And of course I preferred something Open Source. Blosxom
fits the bill.
One way to post is of course using good ole vim/vi inside an ssh session.
But instead I work on my laptop remotely where I have a mirror of my blog
directory structure. I use rsync to sync
up the whole shebang. It looks like this from inside my local mirror of the blog:
rsync --rsh="ssh" -avz --progress . www.scottharney.com:/path/to/blog/scotth
Very quick. Syncing back the other way is just as simple: swap the
"." and the "www.scottharney.com..." parts. I add "--delete" when I need to
prune files in one direction or the other.
[/Computers/Internet/Weblogs/#rsync.html]
Comments (0)
Barkus pics are up
Pics from this year's Barkus parade are up. Barkus
is a parade just for dogs benefitting the Louisiana SPCA. There were 1500
dogs in the parade. It is enormous and great fun. We had beautiful weather and
enjoyed dressing up as Elvis and his Memphis Mafia to complement the parade's
"TailHouse Rock" theme.
[/Computers/Internet/Site_Info/#barks_pics.html]
Comments (0)
Why I'm doing this and a note on self-censorship
Well, if for no other reason, everyone else is
blogging (I really hate that term, actually). More to the point, I needed a home
for some documentation notes for myself. I've always kept little README files
around noting changes I've made and things I discovered, but it's really better
to have them in one place like this.
Of course it's also for some amusement. And I have this journalism degree that I
never actually use. Perhaps this will be useful to others as well. I also
will likely cross-post some bits on nolug.org
which is a site I manage as well.
I was also making some notes ranting about issues at work. On reflection,
I decided to remove those for several reasons. Even though I elided all
identifying details, I do work at a government facility and I'm under NDA.
Better safe than sorry. And I also felt maybe I was giving a false view of
my work environment. It's not all 'Dilbert-esque' but that's likely all
that would be posted here.
[/Computers/Internet/Weblogs/#meta1.html]
Comments (0)
Using multiple physical machines behind a single NAT IP address
So you have just one IP address and a bunch of machines behind NAT.
You've got port redirection working so your internal webserver behind
the firewall is serving pages. But now you've got a second box that
you need to host content on. Perhaps you need a separate
webserver running mod_perl and one running php. Or perhaps you've got
(God forbid) an IIS box. And you don't want to redirect alternate
ports. Here's a way to have multiple webservers behind a single
external IP address, all running on port 80.
What you need is a reverse inbound proxy established on your firewall.
Apache with mod_proxy built and
enabled does the trick.
First and foremost you need to have your DNS sorted out. I have both
external and internal DNS servers. Bind 9 can do
this with "views" though I personally have a preference for setting up
djbdns. If you do not have
an internal DNS for your domains then you'll need to reference your
internal boxes by IP address in your apache configuration (see below).
The next thing to do is fix your firewall. You need to install apache;
mod_proxy should come with it. You need to stop redirecting port 80
inbound in your NAT (aka IP masq) configuration, since the firewall
will now answer on port 80. Since I have internal DNS servers, I
also made sure my firewall's /etc/resolv.conf pointed to the
internal DNS server.
Now set up Apache on your firewall. Just do a basic configuration.
Here are the magic lines, snipped from httpd.conf:
LoadModule proxy_module libexec/apache/libproxy.so
AddModule mod_proxy.c
NameVirtualHost your.external.ip.address
<VirtualHost your.external.ip.address>
ServerAdmin webmaster@yourwebsite.net
ServerName www.yourwebsite.net
ProxyPass / http://www.yourwebsite.net/
ProxyPassReverse / http://www.yourwebsite.net/
ErrorLog /var/log/apache/yourwebsite.net/error_log
TransferLog /var/log/apache/yourwebsite.net/access_log
</VirtualHost>
Since the internal DNS server has a local (ie 192.168.x.x) address
for "www.yourwebsite.net", requests to that NameVirtualHost go
to the appropriate internal box. And it need not be running
apache. Anything that speaks http will be transparently proxied.
If you don't do internal DNS you'd replace
"http://www.yourwebsite.net" with something like "http://192.168.5.80"
where that is the IP of the internal server that you want to answer
for www.yourwebsite.net.
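Once the proxy is up, you can check that each name reaches the right internal box without waiting on DNS. A quick sketch; the hostname and address are the placeholders from the example config above:

```shell
# ask the firewall's apache for a specific virtual host by name;
# the Host: header determines which internal server the request is proxied to
curl -I -H "Host: www.yourwebsite.net" http://your.external.ip.address/
```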
Note that you can also proxy SSL https connections this way. The key is
that you need to have your SSL certs and keyfiles on the firewall. The
firewall then speaks standard http on port 80 to the internal box.
The config looks like this:
<VirtualHost your.external.ip.address:443>
ServerAdmin webmaster@yourwebsite.net
ServerName secure.yourwebsite.net
ProxyPass / http://secure.yourwebsite.net/
ProxyPassReverse / http://secure.yourwebsite.net/
SSLEngine on
SSLCertificateFile /path/to/certfile
SSLCertificateKeyFile /ditto/for/keyfile.key
ErrorLog /var/log/apache/secure.yourwebsite.net/error_log
TransferLog /var/log/apache/secure.yourwebsite.net/access_log
</VirtualHost>
As you can see, it really helps to have internal DNS set up.
That makes things easier and allows you to have NameVirtualHosts
on your internal boxes. You could instead do IP-based VirtualHosts
internally by configuring multiple 192.168.x.x IPs on your internal
servers.
I'm sure you can imagine some very useful ways of using this.
It makes a test and development environment easy. You can stand up
a replacement website without going through the hassle of waiting for
public DNS to "catch up".
Obviously there are security considerations. I won't go into a major
discussion about that here except to say that you need to think about
it. For my needs, this increased my security posture because I could
move Win2000 machines with many potential vulnerabilities behind the
firewall and reduce exposure to just IIS and cross-site scripting
issues. That's still plenty to worry about, but better than having,
say, MSSQL outside your firewall.
Another implication of this is that your logging of website
connections changes. All that your internal boxes will ever log now
are connections from the firewall. So those logs are useless for
tracking site traffic, etc. But all your hits are logged --
separately, the way I configured it -- on the firewall itself. Just make
sure you create those log subdirectories manually before restarting
apache, because apache won't create them. The master apache error log
will report this, of course.
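Creating the per-site log directories looks like this; the paths match the example config above, so adjust to taste:

```shell
# apache won't create these itself; make them before (re)starting
mkdir -p /var/log/apache/yourwebsite.net
mkdir -p /var/log/apache/secure.yourwebsite.net

# sanity-check the config, then restart
apachectl configtest && apachectl restart
```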
[/Computers/Internet/Security/#apache_proxy.html]
Comments (0)