I have an EqualLogic PS4100 and it shows the error message "Cache to module power failure". What should I do? This SAN box is in production and runs 24/7.
We are still on v7 FW. The arrays only have 18 more months until EOL, and I see no reason to update to v8 or v9, as the new features add no value. It's been a while since there were any v7 updates, so I'm wondering whether that's because v7 is really stable now with no critical updates, or whether I have to go to v9 for the latest fixes.
Do I have to read the v8 and v9 change logs, or can I just track v7?
I'm new to this and I have a PS6000. I don't know how support works, but I don't think I have it. I can't interact with the PS6000 storage over the serial port using PuTTY. I tried connecting to the active control module, but even after waiting 30 minutes nothing appears, and pressing Enter doesn't help.
More info:
I used a standard null-modem cable, with PuTTY set to COM1 at 9600 baud. The EqualLogic has two Type 7 control modules, two PSUs, and 16 x 600 GB SAS disks.
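In case it helps to double-check the settings: the EqualLogic serial console expects 9600 baud, 8 data bits, no parity, 1 stop bit, and no flow control (9600-8-N-1) - in PuTTY that's under Connection > Serial. The same connection from a Linux box can be sketched like this, assuming the null-modem cable is on /dev/ttyS0 (adjust the device name for your host):

```shell
# EqualLogic serial console settings: 9600-8-N-1, no flow control.
# Connect with GNU screen (stty-style flags after the baud rate):
screen /dev/ttyS0 9600,cs8,-parenb,-cstopb,-ixon,-ixoff

# ...or with minicom:
minicom -D /dev/ttyS0 -b 9600
```

Press Enter a couple of times after connecting; if nothing appears, try the serial port on the other control module, since the standby module may not respond on its console.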
Thank you so much to anyone who can help.
Hello,
We have a PS6610E and 5 HP DL380 servers running VMware 6. I've been tasked with connecting the storage to the hosts, and found that we do not have any dedicated switches to use for a SAN, so we'll need to use the existing switch (Cisco Nexus 2232). I'm trying to wrap my head around how to accomplish this. I know that as a "best practice" it is better to have dedicated switches, but having limited experience with this setup I'm having trouble convincing those with the $$.
From what I understand, in order to use the same network as our LAN we would need to use:
Data Center Bridging (which requires converged network adapters)?
Enable network I/O control?
I have found some information on setting this up, but it's scattered. Is there documentation somewhere for setting up iSCSI on a shared network? Or is this a really bad idea?
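For what it's worth, shared-switch setups usually still isolate iSCSI in its own VLAN with dedicated vmkernel ports, even without DCB. A rough sketch of the ESXi side, assuming the software iSCSI adapter is vmhba33, the iSCSI vmkernel ports are vmk1/vmk2, and 192.168.10.10 is the group IP (all placeholders for your environment):

```shell
# Bind the iSCSI vmkernel ports to the software iSCSI adapter
# (check your adapter name first with: esxcli iscsi adapter list)
esxcli iscsi networkportal add -A vmhba33 -n vmk1
esxcli iscsi networkportal add -A vmhba33 -n vmk2

# Point dynamic discovery at the EqualLogic group IP (placeholder address)
esxcli iscsi adapter discovery sendtarget add -A vmhba33 -a 192.168.10.10:3260
```

This only covers the port-binding side; whether you also need DCB or Network I/O Control depends on how congested the shared links actually are.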
Thanks for any input/advice.
We're currently running two groups of EqualLogics (let's call the groups A and B for simplicity), with a PS4100 and four PS6100s in each. When we started, it made sense to have half our live volumes on one group and half on the other, with replication occurring between the groups. Now that we've increased our group size and number of volumes, we're looking to move all of the live volumes onto one group and have the other group hold the replicas, making the failover process a lot simpler.
I'm trying to move all the volumes from group A onto group B, and was hoping to make use of the replicas. All volumes are configured with a replication schedule from Group A to Group B, keeping a failback snapshot. The process I've tried so far is:
1) Take a volume offline from the Windows server it's connected to
2) Create a final manual replica from group A to Group B so I have the latest data
3) Promote the replica set on Group B to a volume
4) Demote the volume on Group A to a replica set
5) Make the promotion on Group B permanent - during this process I'm keeping the volume name the same as the original, e.g. Live volume is called Test, replica set is called Test.1, then when I make the promotion permanent I call the volume Test
6) Demote the original volume on Group A to a replica set
That's all worked well so far, but I've hit a minor snag. I was hoping that I could use the demoted volume on Group A as a starting point for the new replication from Group B to Group A, but when I configure replication it creates a new replica set "Test.1", and the original volume (now a replica set) is still called "Test" and appears to be orphaned.
Is it possible to link the newly promoted volume back to the original volume on Group A, or am I going to have to just set up a new replication schedule and shift terabytes of data on the first pass for each volume? Or is there a better, more streamlined way of doing this?
Thanks,
Michael
Is it possible...? I haven't upgraded the FW yet to see if the option opens up, but when I try to remove the member from the GUI it says I need to call support, and running member delete in the CLI gives me this:
ST-EQ> member delete st-eqex
Deleting an offline member may cause irreversible loss of data. Before the
operation can proceed, contact Dell Customer Support to ensure that this is
an appropriate action. If Dell Customer Support believes that this is the
correct path, they will provide a response to match the following
firmware version and challenge number.
To cancel this operation, press <Enter> without entering a response.
Firmware version: V6.0.2
Challenge: 2367
Response:
This is old equipment being repurposed; it does not have a current service contract.
I added it to my group incorrectly: I hastily reset it to defaults rather than removing it via the GUI or otherwise correcting my slip-up. It literally impacts nothing, it's just an eyesore in Group Manager. I re-added it with a different name and everything is up and running smoothly.
What happens if an update fails while running Dell Storage Update Mgr to update a series? Does it roll back to the previous firmware on the failed controller and then stop?
I'm running FW 8.1.4 on three PS6210-series arrays and I'm ready to upgrade to 8.1.8, since it's been out for a while. I'll be trying out DSUM for the first time, and I have two groups: one with a 6210XS, and another with two 6210X members. In the past I've used Grp Mgr and done one at a time, so a failure would only affect one member.
I've only had one failure in the past 10 years, and it was easily resolved, but I'd still like to know as much as I can going in. Thanks.
Hello Support
I have some errors that are pointing in my Server Application log :
CHAP is not supported for MPIO sessions by adapter at IP. Please add an access control record for target specifying the Initiator IQN string
I don't know what this means, since I'm not a SAN guy, but everything seems to be working fine and the volumes are attached. However, I've noticed some crashes of the iSCSI initiator in the last few days when the server restarts. Could this be related?
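The message is asking for an IQN-based access control record on the target volume, rather than CHAP, for the MPIO sessions. A hedged sketch via the Group Manager CLI, where "myvolume" and the IQN are placeholders (take the real IQN from the server's iSCSI Initiator properties, Configuration tab):

```shell
# Group Manager CLI sketch; volume name and initiator IQN are placeholders.
GrpName> volume select myvolume
GrpName(volume_myvolume)> access create initiator iqn.1991-05.com.microsoft:myserver.mydomain.local
```

The same record can be created in the GUI under the volume's Access tab by limiting access to the initiator name instead of (or in addition to) CHAP.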
Hello,
I have one standard 70 GB volume (not thin-provisioned) with only 300 MB of free space left. Will this little free space impact performance, or cause the volume to go offline if it grows?
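For scale, 300 MB free on a 70 GB volume is well under 1%; a quick sketch of the arithmetic:

```shell
# 300 MB free on a 70 GB (70 * 1024 = 71,680 MB) volume, as a percentage
free_mb=300
total_mb=$((70 * 1024))
pct=$(awk -v f="$free_mb" -v t="$total_mb" 'BEGIN { printf "%.2f", f / t * 100 }')
echo "${pct}% free"
```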
Thanks,
JY
Hi,
Two weeks ago a battery in one of the two members of our PS6100 group went bad, and Dell support told us to swap out the whole controller (that seems odd to me, but it's not the point of this post).
We know from previous incidents that it would be a bad idea to initiate a failover with more than 250 connections to this member: it takes minutes for the secondary controller to pick up the connections, and by that time some of the VMs served by this group would have died under load.
So we added a PS6000 to the group and through volume select x bind node3 we bound the volumes to either node1 or node3 (node2 is the bad one).
Now my two questions:
Is there any way to see the progress or status of the bind operations (the relocation)? show members only shows me the current distribution, but not whether a volume is actually moving or what the desired state is.
Second: today I noticed an event-log entry stating "volume xxx is no longer bound to node1", followed by that volume moving back to node2. I did not unbind the volume, and I don't seem able to stop the relocation. Does anyone have an idea what is happening here?
Bonus question:
We have around 500 volumes in the group and around 450 connections at any given time. The performance of the interface (Java GUI and CLI) is very unstable: for example, bringing up the "show volumes" results sometimes takes 30 seconds and sometimes more than 4 minutes. Is this the expected behaviour? I don't think so. So what is happening here?
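On the first question: I don't know of a progress indicator for the relocation itself, but the volume detail at least shows the current state. A sketch using the same CLI syntax the post already uses ("volname" is a placeholder):

```shell
# Group Manager CLI sketch; "volname" is a placeholder volume name.
GrpName> volume select volname show          # volume detail, incl. current placement
GrpName> volume select volname bind node3    # pin the volume to a member
GrpName> volume select volname unbind        # release the pin
GrpName> show members                        # current capacity distribution
```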
Thanks for any answer or hint.
I'm currently running ESXi 5.5 with two datastores on thick-provisioned EqualLogic volumes with firmware 8.1.8. After my latest firmware upgrade, the low-pool-space alerts have started telling me that I have less than 3% free space in my group. There is adequate free space within the VMW datastores to get back up to 5% pool free.
My questions are:
a) Is it worth changing these to thin provisioned and reclaiming the free space in order to maintain 5% pool free space?
b) If I change the EqualLogic volumes to thin provisioned, am I risking any data loss on the VMW datastores?
c) To make this change, is it as simple as checking "Enable Thin Provisioning" in Group Manager, reconfiguring the allocations, and then running the unmap command to reclaim the free space?
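On (c), the array side is roughly that simple, but on ESXi 5.5 the reclaim is a manual esxcli run per datastore. A sketch, where "Datastore1" is a placeholder name:

```shell
# Reclaim dead space on a VMFS5 datastore (ESXi 5.5); run once per datastore
esxcli storage vmfs unmap -l Datastore1

# Optionally control how many VMFS blocks are reclaimed per iteration
esxcli storage vmfs unmap -l Datastore1 -n 200
```

Best run out of hours, since unmap generates a fair amount of extra I/O against the array.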
Thanks!
Hi,
One of my PS6210 members, on firmware v7.1.5, shows a large number of packet errors: eth0 shows 1,208,030 errors and eth1 shows 43,312. Do I need to be concerned about this, and how can I troubleshoot it?
Thanks,
Hi,
I have a 'battery failed' error on the secondary controller of a PS6100 that's out of warranty. The event log shows a more detailed error: 'C2F power module voltage is too high'. Does anybody know what this error means? Can the controller still work with this error?
thanks.
Hello!
Is there anything in the official documentation for these storage systems that says only certain hard disks may be used?
A hard disk has failed, and I need to justify the purchase of a new disk.
Dell EqualLogic PS6000XV
Like the subject says, what is the current recommendation for large receive offload? Enabled or disabled?
Per TR1091 it seems the recommendation to use LRO was removed in the last update and there's no mention of it.
In the MEM user guide, though, page 15 says Dell recommends disabling it, and it's part of the bestpractices flag.
If I check the Compellent documents, I don't see any mention of it really.
So is LRO still supposed to be disabled, and is it an EQL-only recommendation?
And speaking of MEM: if I only have one member per group, is there any advantage to installing MEM, or does setting the delayed ACK and IOPS/round-robin parameters manually accomplish the same thing?
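For reference, checking and changing the LRO setting the MEM guide refers to is a one-liner per host. A sketch, assuming the standard ESXi advanced option name (verify it against your build before applying):

```shell
# Show the current LRO setting for the default TCP/IP stack
esxcli system settings advanced list -o /Net/TcpipDefLROEnabled

# Disable LRO (0 = off), per the MEM user guide's recommendation
esxcli system settings advanced set -o /Net/TcpipDefLROEnabled -i 0
```

A host reboot may be needed for the change to take effect.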
Thanks
Hi All
I have two EQL 6210XS members bound into a pool named "EQL Pool", with a total capacity of almost 36 TB.
I created an 8 TB volume as a Hyper-V Cluster Shared Volume and created some VMs in the CSV. All Hyper-V hosts connect to the EQL volume via iSCSI MPIO.
Hyper-V Failover Cluster Manager shows our CSV with 500 GB of free space, but EQL Group Manager shows 5 TB free.
It seems the Hyper-V hosts can't correctly identify the volume's free space after deduplication.
Any suggestions for resolving this discrepancy?
Our security audit detected that SAN HQ was running some type of web interface on port 443/https. I tracked the culprit down to WebServiceIntegration.dll.config which is located at C:\Program Files\EqualLogic\SAN HQ\WebServiceIntegration.dll.config.
Is there any way to assign a certificate from our Certificate Authority server using OpenSSL or some other means, or to turn this web service off, if that's possible?
The problem is present in both the 3.2.0 and 3.2.1 versions of SAN HQ.
Hi
I have a PS4100XV configured as RAID 5 and need to reconfigure it to RAID 10.
Is there a way to do this without losing the current configuration?
Many thanks,
Tasos.
We have an old PS6010XV, out of support, with:
- 16 x 15k 600GB SAS Hard disk
- 2 x 70-0300 (type 10) Controllers
We use it for testing in the lab and need to increase its capacity, so I am asking whether we can install the following type of new hard disks:
- 16 x 3 TB NL-SAS 7.2k
Does the PS6510 controller support swapping the SFP on the controller? We're getting CRC errors on the network port where one of the interfaces connects. We've tried swapping the cable, swapping the SFP on the switch port, and connecting to a new switch port. Instead of replacing the whole controller, can we just swap the SFP?