
Dell iDRAC & vFlash

2020-08-15 by: David White
From: David White 
------------------------------------------------------
So I just bought a Dell R630 off of eBay and it arrived today.

What are all the features that come with the iDRAC Enterprise, and with
this vFlash card?
I knew that the server came with iDRAC Enterprise, but I've never really
used that before.
I definitely wasn't expecting this 16GB vFlash card...

If I'm reading the following URL correctly, it sounds like I can put 1 (or
more) ISOs directly onto this vFlash card, and that I can use iDRAC to
monitor all of the components of the server -- meaning I don't have to use
the running OS to do the monitoring?

https://www.dell.com/support/manuals/us/en/04/idrac8-with-lc-v2.05.05.05/idrac8_2.05.05.05_ug-v1/key-features?guid=guid-5ab574db-efdf-41ec-b6b3-0f5e705a0e51&lang=en-us

-- 
David White

=============================================================== From: Stephen Kraus ------------------------------------------------------ Basically: yes. You can use the 16GB card as remote storage accessible via iDRAC. Some of the newer Dells have dual SD card slots, and you can mirror them together and run ESXi or XenServer directly off the SD cards rather than the disk array. iDRAC is awesome; it lets you do all the power management and, depending on the iDRAC and RAID controller, even disk management and remote console. Once you partition the SD card, you can upload ISOs and boot from them remotely.
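The vFlash workflow Stephen describes is driven through RACADM. A rough sketch from the iDRAC8 era follows; the partition sizes, labels, and the CIFS share path and credentials are illustrative placeholders, so check the RACADM reference for your firmware before running any of this:

```shell
# Initialize (wipe) the vFlash SD card
racadm vflashsd initialize

# Create an empty 2GB FAT16 partition at index 1, emulated as a hard disk
racadm vflashpartition create -i 1 -o MyDrive -e HDD -t empty -f fat16 -s 2048

# Create a partition from an ISO on a network share (placeholder path/creds),
# emulated as a CD-ROM so the server can boot from it
racadm vflashpartition create -i 2 -o MyISO -e CDROM -t image \
    -l //192.168.1.10/share/install.iso -u shareuser -p sharepass

# List all partitions and their status
racadm vflashpartition status -a
```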

=============================================================== From: Zachary Garvelink ------------------------------------------------------ Basically, the DRAC is a server inside of a server. As long as it's powered on, it will still work. I have had a server CPU go bad and was able to get the DRAC to report the problem back. I don't use the vFlash, so I can't really help there, but I love the DRACs. Remote console has gotten me out of jams a lot. I even set up servers in mainland China with its help. You can remotely mount ISOs and floppies and interact with the system without an OS. I do think vFlash lets you mount ISOs as a CD-ROM from local memory vs. over the network. Not sure if that helps, but thought I would answer back... A joke we like to share at work: don't forget to read the dracEula. Thanks, Zac

=============================================================== From: David White ------------------------------------------------------ That's sweet. Thanks to you and Stephen for your responses. -- David White

=============================================================== From: David White ------------------------------------------------------ Uploading a 1.6GB ISO to the vFlash card is taking quite a long time. Not sure why it's taking so long, but it's working. What I find incredible is that I turned the server completely off, yet the iDRAC is still responsive, and the upload of the ISO is still working. That's amazing!

I've seen that iDRAC Express vs. iDRAC Enterprise has quite a bit of a price difference when shopping around for this same model server online. I still don't know (I haven't researched) the differences between the two. Does anyone know if the vFlash card can be used with iDRAC Express, and/or if these vFlash cards are interchangeable? i.e. can I move them around to different Dell servers that have vFlash capability?

Ubuntu installed beautifully on this a few minutes ago. My plan is to try a dual boot with it -- Ubuntu on one partition, and oVirt (upstream project for Red Hat Virtualization) on the other.

-- 
David White

=============================================================== From: Stephen Kraus ------------------------------------------------------ Yeah, iDRAC runs as long as there's power; you can power the server on or off with it. The vFlash card is just a standard SD card, so as long as it's another Dell with an SD slot, it'll work fine.
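The remote power control Stephen mentions can also be scripted with RACADM over the network; a quick sketch, where the iDRAC address and credentials are placeholders (`root`/`calvin` is only the factory default):

```shell
# Query the current chassis power state via remote RACADM (placeholder host/creds)
racadm -r 192.168.1.120 -u root -p calvin serveraction powerstatus

# Graceful OS shutdown, hard power off, power on, or power cycle
racadm -r 192.168.1.120 -u root -p calvin serveraction graceshutdown
racadm -r 192.168.1.120 -u root -p calvin serveraction powerdown
racadm -r 192.168.1.120 -u root -p calvin serveraction powerup
racadm -r 192.168.1.120 -u root -p calvin serveraction powercycle
```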

=============================================================== From: Eric Wolf ------------------------------------------------------ Calvin is Hobbes' best friend

=============================================================== From: Zachary Garvelink ------------------------------------------------------ Express does not provide remote console or remote ISO mounting. It can monitor and power on/off. I think we pay about a $300 premium when we order the servers new. Thanks, Zac

=============================================================== From: Eric Wolf ------------------------------------------------------ Does SOL work on Express? "ipmitool sol activate"?

=============================================================== From: Stephen Kraus ------------------------------------------------------ He said he has iDRAC Enterprise, not Express.

=============================================================== From: Lynn Dixon ------------------------------------------------------ As a side note, you can buy an Enterprise iDRAC license on eBay for cheap. I've done that a few times with great luck, most recently on a 730xd I picked up a few months back.

=============================================================== From: David White ------------------------------------------------------ I did, but I'm also particularly interested in knowing about the differences. Eventually (hopefully sometime soon), I want to buy another 3-4 of these servers and set up an oVirt hyperconverged cluster with Gluster for HA hosting. This first machine I bought off eBay was just for testing, learning, and demo purposes.

Other than what I see here, I have no idea what SOL is or why I'd use it.

https://www.dell.com/support/manuals/tc/en/tcdhs1/idrac8-with-lc-v2.05.05.05/idrac8_2.05.05.05_ug-v1/configuring-idrac-to-use-sol?guid=guid-5e1b84ac-219e-4d3a-a558-0ca11368ced0&lang=en-us

Lynn: That's interesting. I thought I read somewhere that the iDRAC licenses were bound to the service tag. I guess not. I'll keep that in mind if/when I'm ready to purchase additional servers.

-- 
David White

=============================================================== From: Eric Wolf ------------------------------------------------------ I was just wondering if anyone knew if SOL worked on Express. I've only ever used Enterprise.

The old SolidFire storage product was based on the Dell R620/630 platform. We used ipmitool as part of our test automation.

-Eric

Eric B. Wolf
720-334-7734

=============================================================== From: Lynn Dixon ------------------------------------------------------ I am a huge oVirt user. Well, RHV now, since I am a Red Hatter; oVirt is the upstream for RHV. But be warned: oVirt hyperconverged is shaky at best. You'll want at least three nodes to run a semi-stable Gluster cluster, and I would not trust it with any important workloads.

But RHV backed by conventional storage (iSCSI, NFS, direct-attach LUNs, etc.) is rock solid and well worthy of just about any workload. I'd just most definitely shy away from hyperconverged with Gluster, lest you like fixing things all the time.
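For reference, the three-node replicated setup Lynn is talking about looks roughly like this in plain Gluster terms. This is a sketch assuming three hypothetical hosts (node1-3) that already have a brick filesystem formatted and mounted at the same path on each:

```shell
# From node1: form the trusted storage pool
gluster peer probe node2
gluster peer probe node3

# Create a 3-way replicated volume; every file is stored on all three bricks,
# so the cluster tolerates one node being down
gluster volume create gv0 replica 3 \
    node1:/bricks/brick1/gv0 \
    node2:/bricks/brick1/gv0 \
    node3:/bricks/brick1/gv0

gluster volume start gv0
gluster volume info gv0
```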

=============================================================== From: David White ------------------------------------------------------ Thanks for the heads up, Lynn. IIRC, no one on the oVirt mailing list gave me that forewarning.

https://www.mail-archive.com/users@ovirt.org/msg61594.html

I was indeed planning on building a 3-node replicated cluster, but if it's still as problematic as you make it sound, perhaps I should shy away from Gluster, or hyperconverged, completely. Would it make any difference in terms of stability if I built and maintained the Gluster config separately from oVirt, yet still ran it on the same hardware? I guess if I did that, I would do a full install of CentOS 8 onto each node instead of the minimal oVirt Node install. What would you recommend if I had 3 servers with storage on all 3 servers and wanted HA?

BTW, I'm working towards my RHCSA right now. Now that the exams are available online, I'm hoping to take it in the next couple of months. I just passed the PE124 about a month or two ago.

-- 
David White

=============================================================== From: Lynn Dixon ------------------------------------------------------ I delivered RHHI to customers when I was a consultant with Red Hat. RHHI is the productized version of oVirt hyperconverged. It broke constantly, to the point that most customers converted to standard oVirt and backed it with iSCSI or NFS.

Gluster is sort of dying; Ceph is getting all the attention from the Red Hat engineers, who make up the huge majority of engineering hours on both RHV and Gluster.

=============================================================== From: David White ------------------------------------------------------ "Gluster is sort of dying, ceph is getting all the attention from Red Hat engineers"

LOL. Sounds like Puppet and Ansible!

Honestly, I was a little bit surprised to find out Gluster was now a RH product and was still in use. I remember supporting Gluster clusters 10+ years ago (but don't ask me anything about how to do that stuff; I only did it for 6 months as an intern and forgot it all).

In your experience, would it be possible to set up Ceph on the 3 nodes and run oVirt beside it on the same hardware? I'm just looking for a way to run a stable, HA cluster on 3 or 4 servers total.

=============================================================== From: Dave Brockman ------------------------------------------------------ The non-RH answer would be vSphere + vSAN. It does work, and my VMware environments have been 99% install-and-forget as far as the hypervisor/storage/network stack sides of things are concerned. Cheers, -Dave

=============================================================== From: Lynn Dixon ------------------------------------------------------ If you're dead set on Gluster, I'd do 4 nodes; that way you have reliability if you ever need to take a node down for maintenance. 3 nodes in a Gluster cluster is really skirting on thin ice.

However, oVirt by itself is incredibly rock solid. It's just that the "hyperconverged" part is really just oVirt running in a hosted-engine config on Gluster as its storage on the same hosts, with a few Cockpit plugins to make managing Gluster easier. You could do a clustered NFS on those hosts, or use some other form of storage that can be presented as iSCSI or NFS, or direct-attached LUNs. I'm 100% confident that running oVirt backed by conventional non-Gluster storage is going to be rock steady and will make a fine environment for any production. I am a big fan of oVirt/RHV hosted engine to also make the web UI HA.

I've helped many large (Fortune 5 size) companies run some very large and specialized GPU workloads on RHV, even migrating their workloads off of vCenter onto RHV. They're still very happy with it today.

Now, there is a lot of good traction around using Ceph/Cinder as distributed storage for RHV, and I'd love to see oVirt go that direction to replace Gluster. Honestly, I love the concept of Gluster; I just wish it wasn't so dang.....sensitive. It sort of got forgotten about when all the new hotness around containers and Ceph got all the attention.

=============================================================== From: Stephen Kraus ------------------------------------------------------ XCP-ng is my go-to; it's the open source fork of XenServer, and it gives you all the Enterprise features without a license.

=============================================================== From: David White ------------------------------------------------------ Thank you. I'm probably going to go with iSCSI.

Lynn, I asked my Red Hat TAM about the difference between Gluster & Ceph in the context of OpenShift, and they basically echoed what you said: that Gluster is being deprecated, and all of the engineering focus is on Ceph these days.

-- 
David White
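For the iSCSI route, attaching a LUN to a host uses the standard open-iscsi tooling, whether or not oVirt manages the storage domain afterwards. A sketch, where the portal address and target IQN are placeholders:

```shell
# Discover the targets exported by the storage portal (placeholder address)
iscsiadm -m discovery -t sendtargets -p 192.168.1.50:3260

# Log in to a discovered target; the IQN comes from the discovery output
iscsiadm -m node -T iqn.2020-08.example:storage.lun0 -p 192.168.1.50:3260 --login

# Verify the session; the LUN shows up as a new block device
iscsiadm -m session
lsblk
```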

=============================================================== From: Lynn Dixon ------------------------------------------------------ Well, I am a Red Hat Solutions Architect now, and that is exactly the same thing I tell my customers. :-)

=============================================================== From: "Alex Smith (K4RNT)" ------------------------------------------------------ SOL, or Serial Over LAN, allows you to operate a server headless and have the console redirect to the IPMI system, giving you a console. If you've ever operated a UNIX system headless over a serial console, you know what this is.

" 'With the first link, the chain is forged. The first speech censured, the first thought forbidden, the first freedom denied, chains us all irrevocably.' Those words were uttered by Judge Aaron Satie as wisdom and warning... The first time any man's freedom is trodden on, we're all damaged." - Jean-Luc Picard, quoting Judge Aaron Satie, Star Trek: TNG episode "The Drumhead"

- Alex Smith
- Kent, Washington (metropolitan Seattle area)
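To tie this back to Eric's question, a typical SOL session from a Linux box looks like this with ipmitool. The host and credentials are placeholders, and SOL has to be enabled on the BMC first:

```shell
# Attach to the server's serial console over the LAN (lanplus is required for SOL)
ipmitool -I lanplus -H 192.168.1.120 -U root -P calvin sol activate

# If a session wedges, detach it cleanly from another shell
ipmitool -I lanplus -H 192.168.1.120 -U root -P calvin sol deactivate

# SOL only shows what the BIOS/OS sends to the serial port, so point the
# kernel at it, e.g. console=ttyS0,115200 on the kernel command line
```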

=============================================================== From: Dave Brockman ------------------------------------------------------ I have worked on a ton of headless Unix boxes via serial console, but never with SOL involved. We used these mysterious black boxes (well, mine was black; I liked the Optimas) called modems. :) Cheers, -Dave

=============================================================== From: Ed King ------------------------------------------------------ I've been SOL for years, but currently my luck is on the up 'n up!

> SOL, or Serial Over LAN, allows you to operate a server headless, and have
> the console redirect to the IPMI system, and give you a console. If you've
> ever operated a UNIX system headless over a serial console, you know what
> this is.

=============================================================== From: "Alex Smith (K4RNT)" ------------------------------------------------------ Likewise. I've dealt with serial console a lot, over Telnet, serial, and IPMI. I prefer the Rackable Systems Phantom.

- Alex Smith
- Kent, Washington (metropolitan Seattle area)