Connecting IBM v7000 to Nexus 5548
20-Sep-2012 16:51 (in response to feisalb)
IBM V7000 with Nexus 5548UP and Nexus 4000 Design/Implementation Guide
Hi Guys
I have a question regarding connecting an IBM V7000 directly to a Nexus 5548.
Can we do this?
Our current setup is IBM v7000 -> MDS 9124 -> Nexus 5548.
But our MDS 9124s are out of warranty now and we need to take them out of production. The only way we can do this is to connect our IBM V7000 fibre ports directly to our Nexus 5548.
Can someone please point me in the right direction - any knowledge base articles, etc.?
Thanks Heaps
Sid
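For background on the direct-attach question above: on a 5548UP, FC storage can be attached directly when unified ports are converted to native FC and the switch provides FC switching services. A minimal sketch, where the port numbers, VSAN, and feature set are all assumptions, not details from this thread:

```
! Sketch only: port numbers and VSAN are placeholders.
! Convert the highest-numbered unified ports to FC (requires a reload):
slot 1
  port 31-32 type fc
! After the reload, enable FC services and bring the ports up in F mode:
feature fcoe
vsan database
  vsan 10
  vsan 10 interface fc1/31
  vsan 10 interface fc1/32
interface fc1/31-32
  switchport mode F
  no shutdown
```

Zoning between the V7000 ports and the host HBAs would still be needed, just as it was on the MDS 9124.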
Dear prkrishn
I am working on a Data Center solution between two data centers; details below.
DC 1 Site
1. 2 x DC Core Switch (Nexus 7009) - will be used for Servers, Application, IBM V7000, Database, etc.
2. 2 x Network Core Switch (Cisco 6509) - Handle Campus and Inter Building connection.
3. IBM V7000 (SAN)
DC 2 Site
1. 2 x DC Core Switch (Nexus 7009) - will be used for Servers, Application, IBM V7000, Database, etc.
2. 2 x Network Core Switch (Cisco 6509) - Handle Campus and Inter Building connection.
3. IBM V7000 (SAN)
With the above-mentioned setup, can I configure FCIP between DC1 and DC2 using the Nexus 7009? Or do I need an FCIP-capable switch such as an IBM SAN switch (SAN06-BR)? I was wondering if I can configure FCIP on the Nexus 7009 DC switch.
Hoping for your kind response at the earliest.
Kind Regards,
Arnold
Similar Messages
-
Hi,
We want to upgrade our pair of Nexus 5548s to the new NX-OS 5.1(3)N2(1a) from the 5.0(3)N1(1c) version. We would like to use the ISSU procedure. But when we execute the command "show spanning-tree issu-impact" we get the following output:
No Active Topology change Found!
Criteria 1 PASSED !!
No Ports with BA Enabled Found!
Criteria 2 PASSED!!
List of all the Non-Edge Ports
Port VLAN Role Sts Tree Type Instance
Ethernet2/8 1803 Desg FWD PVRST 1803
The 1803 VLAN is only used for the peer-keepalive link and it only exists on these two Nexus switches. So one of the two switches needs to be the STP root. That puts the ports in that VLAN in designated-forwarding state, which is not supported for ISSU:
sh run int e2/8
!Command: show running-config interface Ethernet2/8
!Time: Fri Jun 8 17:04:33 2012
version 5.0(3)N1(1c)
interface Ethernet2/8
switchport access vlan 1803
speed 1000
That is the only port that belongs to that VLAN and it is directly connected to the other Nexus 5548. So the only way we see to prevent this port from being in designated-forwarding state is to apply the "no spanning-tree vlan 1803" command. Would that be a problem?
We imagine that applying "spanning-tree port type edge" would not be a good idea, would it?
Thank you very much for your help!
Josu

Hi,
Reviewing all the prerequisites for the ISSU, we have seen the following:
ISSU and Layer 3
Cisco Nexus 5500 Platform switches support Layer 3 functionality. However, the system cannot be upgraded with the ISSU process (non-disruptive upgrade) when Layer 3 is enabled. It is required to unconfigure all Layer 3 features to be able to upgrade non-disruptively with an ISSU.
We have the interface-vlan feature enabled. But it is only used for two interfaces:
- interface-vlan 510 --> only used to connect to the switch
- interface-vlan 1803 --> The one used for the keepalive
We could administratively shut down interface-vlan 510. But we could not do so with interface-vlan 1803, since it is used for the keepalive. If we execute "no feature interface-vlan", would the keepalive stop working?
When we execute the "sh install all impact ..." command, the Nexus does not report anything about this feature. Is it really recommended to disable it? Is that needed for the ISSU procedure?
Thank you very much in advance!!
JOSU -
C220 M4 w/ VIC1225 and Nexus 5548
I have four UCS C220 M4 servers that each have a VIC 1225 and are connected to a Nexus 5548. I have set them to trunk ports and have set the appropriate VLANs in ESX and the port groups. I can ping the gateway and other hosts on the network, but for some reason I can't ping any of the other C220 M4 servers that have a VIC 1225. All of the C220s can ping other devices on the network, but not each other. Is there some setting I need to modify on the VIC or switch? Currently the VIC is running in Classical Ethernet mode.
There is no special configuration needed to make connectivity work on the rack servers; I assume these servers are in standalone mode and not integrated with UCSM.
What L3 device is doing your inter-VLAN routing? Is it the N5K these servers are connected to, or is there another device doing this job? If it is the N5K, are all the servers in the same network segment? I mean, is there a single SVI or multiple? Can your Nexus 5K ping each of the servers?
-Kenny -
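The questions above can be checked from the N5K CLI; a sketch, with the interface IDs as assumptions (replace with the ports the C220s actually use):

```
! Illustrative only -- ethernet 1/1 is a placeholder port.
show mac address-table interface ethernet 1/1
show interface ethernet 1/1 switchport
show interface ethernet 1/1 status
```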
Servers connected to Nexus 5548 only getting 200 Mbps of throughput
Servers connected to the Nexus 5k were only getting 100 Mbps of throughput, so I disabled flow-control receive on all the ports. After this we are getting 200 Mbps. The servers are connected through 10 Gig ports. Could you guys please suggest why the throughput is still low? We should get at least 1 Gbps of throughput.
Hi Adam,
I think we probably need a little more information to go on. Can you answer the following?
What type of servers and NICs?
What OS are you running on the servers?
What cables do you have from the servers to the switch?
Are the two servers in the same subnet or is the traffic between them routed?
If routed, is that in the Nexus 5548 or some other router?
How are you testing throughput?
Presumably you're not seeing any errors on the switch ports that the servers are connected to?
Regards -
Connecting NEXUS 5548 1gig interface to 100mbps
Hi,
I have a 5548 that I need to connect to a firewall that supports 100 Mbps only.
Can I configure the interface speed on a Nexus 5548 interface (GLC-T) to 100 Mbps in order to connect it to the firewall?
Regards,
Sabih

Hi Sabih,
The interfaces on a Nexus 5548 can NOT be configured as 100 Mbps.
If you wish to connect to the firewall via a 100 Mbps connection, you will need to make use of a Fabric Extender (Nexus 2000) that supports 100 Mbps.
http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9670/data_sheet_c78-618603.html
Thanks,
Michael -
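A sketch of the Fabric Extender approach Michael describes, assuming a 100 Mbps-capable FEX such as a Nexus 2248TP on fabric port Ethernet1/1 (the FEX number, ports, and VLAN are all placeholders):

```
! Sketch only -- numbers are assumptions, not from this thread.
fex 101
  pinning max-links 1
interface Ethernet1/1
  switchport mode fex-fabric
  fex associate 101
! Once the FEX is online, set the host-facing port to 100 Mbps:
interface Ethernet101/1/1
  switchport access vlan 10
  speed 100
```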
Connectivity Issue between Nexus 5548 to VNX 5300
Hi All,
I am doing a lab setup where I want to connect a Nexus 5548UP to a VNX 5300 storage array directly. The physical connectivity is established between switch and storage, but on the Nexus the status of the port shows "linkFailure". I tried matching the port mode (like Auto, F) and speed, but the port always shows "linkFailure".
The connectivity from Nexus to VNX is FC.
Anyone can suggest the root cause or any troubleshooting steps.
Regards,
Abhilash

LinkFailure might be a GUI status.
show interface fcx/y might say,
Link failure or not connected
The physical layer link is not operational.
This means the switch is not detecting light, so the physical layer is the cable and lasers (SFPs, HBAs, or whatever adapter the VNX uses). It could mean you need to bring the interface up from the VNX side. -
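If the VNX side is up, the switch-side basics can be sketched as follows (the interface number is an assumption):

```
! Sketch only -- fc1/29 is a placeholder interface.
interface fc1/29
  switchport mode F
  switchport speed auto
  no shutdown
! Then check the physical-layer status:
show interface fc1/29
```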
Hi,
I have 2 Sun 4270 M2 servers connected to a Nexus 5548 switch over a 10Gb fiber card. I am getting throughput of just 60 MB per second while transferring a 5 GB file between the 2 servers. I used to get similar speed on a 1Gb network as well. Please suggest how to improve the transfer speed. On the servers, ports eth4 and eth5 are bonded in bond0 with mode=1. The server environment will be used for OVS 2.2.2.
Below are the details of the network configuration on the servers. Quick help will be highly appreciated.
[root@host1 network-scripts]# ifconfig eth4
eth4 Link encap:Ethernet HWaddr 90:E2:BA:0E:22:4C
UP BROADCAST RUNNING SLAVE MULTICAST MTU:9000 Metric:1
RX packets:5648589 errors:215 dropped:0 overruns:0 frame:215
TX packets:3741680 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:2492781394 (2.3 GiB) TX bytes:3911207623 (3.6 GiB)
[root@host1 network-scripts]# ifconfig eth5
eth5 Link encap:Ethernet HWaddr 90:E2:BA:0E:22:4C
UP BROADCAST RUNNING SLAVE MULTICAST MTU:9000 Metric:1
RX packets:52961 errors:215 dropped:0 overruns:0 frame:215
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:3916644 (3.7 MiB) TX bytes:0 (0.0 b)
[root@host1 network-scripts]# ethtool eth4
Settings for eth4:
Supported ports: [ FIBRE ]
Supported link modes: 1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 1000baseT/Full
10000baseT/Full
Advertised auto-negotiation: Yes
Speed: 10000Mb/s
Duplex: Full
Port: FIBRE
PHYAD: 0
Transceiver: external
Auto-negotiation: on
Supports Wake-on: d
Wake-on: d
Current message level: 0x00000007 (7)
Link detected: yes
[root@host1 network-scripts]# ethtool eth5
Settings for eth5:
Supported ports: [ FIBRE ]
Supported link modes: 1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 1000baseT/Full
10000baseT/Full
Advertised auto-negotiation: Yes
Speed: 10000Mb/s
Duplex: Full
Port: FIBRE
PHYAD: 0
Transceiver: external
Auto-negotiation: on
Supports Wake-on: d
Wake-on: d
Current message level: 0x00000007 (7)
Link detected: yes
[root@host1 network-scripts]#
[root@host1 network-scripts]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.2.4 (January 28, 2008)
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth4
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth4
MII Status: up
Link Failure Count: 0
Permanent HW addr: 90:e2:ba:0e:22:4c
Slave Interface: eth5
MII Status: up
Link Failure Count: 0
Permanent HW addr: 90:e2:ba:0e:22:4d
[root@host1 network-scripts]# modinfo ixgbe | grep ver
filename: /lib/modules/2.6.18-128.2.1.4.44.el5xen/kernel/drivers/net/ixgbe/ixgbe.ko
version: 3.9.17-NAPI
description: Intel(R) 10 Gigabit PCI Express Network Driver
srcversion: 31C6EB13C4FA6749DF3BDF5
vermagic: 2.6.18-128.2.1.4.44.el5xen SMP mod_unload Xen 686 REGPARM 4KSTACKS gcc-4.1
[root@host1 network-scripts]#brctl show
bridge name bridge id STP enabled interfaces
vlan301 8000.90e2ba0e224c no bond0.301
vlan302 8000.90e2ba0e224c no vif1.0
bond0.302
vlan303 8000.90e2ba0e224c no bond0.303
vlan304 8000.90e2ba0e224c no bond0.304
[root@host2 test]# ifconfig eth5
eth5 Link encap:Ethernet HWaddr 90:E2:BA:0F:C3:15
UP BROADCAST RUNNING SLAVE MULTICAST MTU:9000 Metric:1
RX packets:4416730 errors:215 dropped:0 overruns:0 frame:215
TX packets:2617152 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:190977431 (182.1 MiB) TX bytes:3114347186 (2.9 GiB)
[root@host2 network-scripts]# ifconfig eth4
eth4 Link encap:Ethernet HWaddr 90:E2:BA:0F:C3:15
UP BROADCAST RUNNING SLAVE MULTICAST MTU:9000 Metric:1
RX packets:28616 errors:3 dropped:0 overruns:0 frame:3
TX packets:424 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:4982317 (4.7 MiB) TX bytes:80029 (78.1 KiB)
[root@host2 test]#
[root@host2 network-scripts]# ethtool eth4
Settings for eth4:
Supported ports: [ FIBRE ]
Supported link modes: 1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 1000baseT/Full
10000baseT/Full
Advertised auto-negotiation: Yes
Speed: 10000Mb/s
Duplex: Full
Port: FIBRE
PHYAD: 0
Transceiver: external
Auto-negotiation: on
Supports Wake-on: d
Wake-on: d
Current message level: 0x00000007 (7)
Link detected: yes
[root@host2 test]# ethtool eth5
Settings for eth5:
Supported ports: [ FIBRE ]
Supported link modes: 1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 1000baseT/Full
10000baseT/Full
Advertised auto-negotiation: Yes
Speed: 10000Mb/s
Duplex: Full
Port: FIBRE
PHYAD: 0
Transceiver: external
Auto-negotiation: on
Supports Wake-on: d
Wake-on: d
Current message level: 0x00000007 (7)
Link detected: yes
[root@host2 network-scripts]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.2.4 (January 28, 2008)
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth5
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth5
MII Status: up
Link Failure Count: 0
Permanent HW addr: 90:e2:ba:0f:c3:14
Slave Interface: eth4
MII Status: up
Link Failure Count: 0
Permanent HW addr: 90:e2:ba:0f:c3:15
[root@host2 network-scripts]# modinfo ixgbe | grep ver
filename: /lib/modules/2.6.18-128.2.1.4.44.el5xen/kernel/drivers/net/ixgbe/ixgbe.ko
version: 3.9.17-NAPI
description: Intel(R) 10 Gigabit PCI Express Network Driver
srcversion: 31C6EB13C4FA6749DF3BDF5
vermagic: 2.6.18-128.2.1.4.44.el5xen SMP mod_unload Xen 686 REGPARM 4KSTACKS gcc-4.1
[root@host2 network-scripts]#brctl show
bridge name bridge id STP enabled interfaces
vlan301 8000.90e2ba0fc315 no bond0.301
vlan302 8000.90e2ba0fc315 no bond0.302
vlan303 8000.90e2ba0fc315 no bond0.303
vlan304 8000.90e2ba0fc315 no vif1.0
bond0.304
Thanks....
Jay

Hi,
Thanks for the reply, but the RX error count keeps increasing and the transfer speed between the 2 servers is at most 60 MB/s on the 10Gb card. Even on storage, I am getting the same speed when I try to transfer data from server to storage over the 10Gb card. Servers and storage are connected through the Nexus 5548 switch.
#ifconfig eth5
eth5 Link encap:Ethernet HWaddr 90:E2:BA:0E:22:4C
UP BROADCAST RUNNING SLAVE MULTICAST MTU:9000 Metric:1
RX packets:21187303 errors:1330 dropped:0 overruns:0 frame:1330
TX packets:17805543 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:624978785 (596.0 MiB) TX bytes:2897603160 (2.6 GiB)
JP -
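One point worth noting from the bonding output above: mode=1 is active-backup, so only one 10G slave carries traffic at a time. If both links should be used, an LACP bond (mode 4) with a matching port-channel on the 5548 is one option; a sketch, with all file names, values, and interface numbers as assumptions:

```
# Linux side (RHEL 5-era ifcfg syntax assumed):
# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BONDING_OPTS="mode=4 miimon=100 lacp_rate=1 xmit_hash_policy=layer3+4"

# Nexus 5548 side (placeholder port-channel and interface numbers):
#   interface port-channel 20
#     switchport mode trunk
#   interface Ethernet1/4-5
#     switchport mode trunk
#     channel-group 20 mode active
```

The rising RX frame errors are also worth chasing separately (cabling, SFPs, or MTU mismatch), since they will hurt throughput in any bonding mode.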
Connecting IBM B22 FEXes to Nexus 7000
Hello everybody,
Could someone give a definite answer, preferably based on personal experience: is it possible to attach a B22 FEX serving an IBM chassis to a Nexus 7000 via an F2-series 48-port 10G card?
NX-OS is currently running 6.1.5a, but we plan to upgrade to 6.2.8 or 6.2.12 once we get new 4GB DIMMs for the SUP1 modules.
Thanks.

In my case, I was getting this error on an interface of a Nexus C6001 with the FET-10G transceiver. I was able to clear it up by temporarily replacing it with, and configuring, a slower GLC-T, which worked as expected. I then removed all the settings and got the FET-10G to link.
-
FCoE with Cisco Nexus 5548 switches and VMware ESXi 4.1
Can someone share with me what needs to be setup on the Cisco Nexus side to work with VMware in the following scenario?
Two servers, each with two dual-port FCoE cards, with two ports connected to two Nexus 5548 switches that are clustered together. We want to team the ports together on the VMware side using IP hash, so what should be done on the Cisco side for this to work?
Thanks...

Andres,
The Cisco road map for the 5010 and 5020 doesn't include extending the current total (12) of supported FEXes. The 5548 and 5596 will support more (16) per 55xx, and with the 7K will support up to 32 FEXes.
Documentation has been spotty on this subject, because the term "5K" suggests that all 5000-series switches will support extended FEX counts, which is not the case; only the 55xx will support more than 12 FEXes. Maybe in the future the terminology should distinguish the 5000 series and the 5500 series Nexus; there are several differences and advancements between the two series. -
Telephony Issues on Nexus 5548
Dear Viewers,
I have Nexus 5548 devices in one of my client data centers, and I have one 3750 switch to which all of the Avaya voice servers connect.
The 3750 switch was initially connected through an L2 link to a 6509 Catalyst switch, and the telephony applications were working correctly.
The problem arises when I move this 3750 Layer 2 link to a Nexus 5548 (OS version 5.1(3)N1). All telephony calls coming from the outside (external calls) stop working as required, but internal calls work as usual.
What is odd is that when I migrate this L2 link back to the 6509 switch, all works as usual. This is just a Layer 2 connection, and I am wondering why it does not work.
The VLAN is allowed on all relevant trunks. I also deactivated IGMP snooping on this voice VLAN on the Nexus 5548 thinking it would help, but in vain.
Any ideas and suggestions are welcome.
regards.
Alain

This is my RADIUS config on a 5K:
radius-server timeout 7
radius-server host 10.28.42.20 key 7 "Password" auth-port 1645 acct-port 1646 authentication accounting
radius-server host 10.28.42.21 key 7 "Password" auth-port 1645 acct-port 1646 authentication accounting
aaa group server radius Radius-Servers
server 10.28.42.20
server 10.28.42.21
aaa authentication login default group Radius-Servers
ip radius source-interface Vlan1
aaa authentication login default fallback error local
And it is currently working. On the RADIUS server I also had to do this to make the users admins once logged in:
https://supportforums.cisco.com/document/137181/nexus-integration-admin-access-free-radius -
Hi All,
I have issues with Nexus 5548 1Gb ports. They go down after some time with the error "Link not connected" while the links are connected. When I move the connections to other ports they work, but after a while they go down again with the same error. I can confirm that the port that is currently down was working and is currently connected. Has anyone seen this error before?
Kindly see the output from the interface that's currently down below:
VNX_NEXUS# sho interface eth1/11
Ethernet1/11 is down (Link not connected)
Hardware: 1000/10000 Ethernet, address: 002a.6a71.1f92 (bia 002a.6a71.1f92)
Description: Link_to_EMC_RPA3
MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA
Port mode is access
auto-duplex, 1000 Mb/s, media type is 10G
Beacon is turned off
Input flow-control is off, output flow-control is off
Rate mode is dedicated
Switchport monitor is off
EtherType is 0x8100
Last link flapped 2d23h
Last clearing of "show interface" counters 5w6d
30 seconds input rate 0 bits/sec, 0 packets/sec
30 seconds output rate 0 bits/sec, 0 packets/sec
Load-Interval #2: 5 minute (300 seconds)
input rate 0 bps, 0 pps; output rate 0 bps, 0 pps
RX
43384443 unicast packets 30 multicast packets 1496 broadcast packets
43385969 input packets 7837558138 bytes
0 jumbo packets 0 storm suppression bytes
0 runts 0 giants 0 CRC 0 no buffer
0 input error 0 short frame 0 overrun 0 underrun 0 ignored
0 watchdog 0 bad etype drop 0 bad proto drop 0 if down drop
0 input with dribble 0 input discard
0 Rx pause
TX
56587244 unicast packets 3937125 multicast packets 1487058 broadcast packets
62011427 output packets 14141808286 bytes
0 jumbo packets
0 output errors 0 collision 0 deferred 0 late collision
0 lost carrier 0 no carrier 0 babble 0 output discard
0 Tx pause
18 interface resets

Hi Leo,
What confuses me is that the connection was working. I used the correct cable and port type as well. How do I resolve this? -
Nexus 5548 not responding to snmp
I've got a Nexus 5548 running 6.0(2)N2(3). It does not use the mgmt interface or management vrf. It's using a vlan interface for all my management access.
I have a simple snmp config set up:
snmp-server community mystring
My SNMP server is directly connected (no firewalls, no acls). I can ping my nexus from the SNMP host, but can't get SNMP replies.
I've done an SNMP debug; nothing happens when I run an snmpwalk. I also checked "show snmp", and it shows no SNMP input packets.
Could this have something to do with trying to use the management vrf? Or something simple I'm missing?
Thanks

Ha wow -- "sh run snmp" pointed me to the problem. There was a command:
no snmp-server protocol enable
That must be a default; I never entered it. Anyway, 'snmp-server protocol enable' fixed it. I should have caught that, although an hour with TAC also didn't turn it up, hehe.
Thanks! -
Creating vPCs between 2 Nexus 5548s
I am just about to set up my 2 Nexus 5548s in the lab for vPC. I see that I have to set up an EtherChannel using 2 10 Gig ports for the peer link, and I also have to set up a peer keepalive.
Do I also have to burn a 10 Gig port on both switches for this keepalive?
I don't have any 1 Gig ports on these switches.
Any help would be appreciated.
Cheers
Dave

The best way is to connect the mgmt0 ports to a core switch for out-of-band management. Also, when you configure the vrf context management, make sure you specify a route to the management subnet via the management VRF.
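That suggestion can be sketched as follows; all addresses and the domain number are placeholders (RFC 5737 addresses), not values from this thread:

```
! Sketch only -- all addresses are placeholders.
interface mgmt0
  ip address 192.0.2.10/24
vrf context management
  ip route 0.0.0.0/0 192.0.2.1
! The peer keepalive can then use mgmt0 in the management VRF:
vpc domain 1
  peer-keepalive destination 192.0.2.11 source 192.0.2.10 vrf management
```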
-
NX-OS firmware upgrade on Nexus 5548 with Enhanced vPC and dual-active FEX
Hi All,
Please tell me how to do an NX-OS firmware upgrade on a Nexus 5548 with Enhanced vPC and dual-active FEX without downtime for the FEX.
The servers are connected to the FEX.
Diagram attached.

Hi,
If the 5500s are Layer-2 with vPC running between them, then you can use ISSU to upgrade.
Here is the doc to follow:
ISSU Support for vPC Topologies
An ISSU is completely supported when two switches are paired in a vPC configuration. In a vPC configuration, one switch functions as a primary switch and the other functions as a secondary switch. They both run the complete switching control plane, but coordinate forwarding decisions to have optimal forwarding to devices at the other end of the vPC. Additionally, the two devices appear as a single device that supports EtherChannel (static and 802.3ad) and simultaneously provide data forwarding services to that device.
While upgrading devices in a vPC topology, you should start with the switch that is the vPC primary. The vPC secondary device should be upgraded after the ISSU process completes successfully on the primary device. The two vPC devices continue their control-plane communication during the entire ISSU process (except when the ISSU process resets the CPU of the switch being upgraded).
This example shows how to determine the vPC operational role of the switch:
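The example itself appears to have been dropped in the paste; the NX-OS command for determining the vPC operational role is:

```
show vpc role
```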
link:
http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/upgrade/513_N1_1/n5k_upgrade_downgrade_513.html
HTH -
FCoE using Brocade cards CNA1020 and Cisco Nexus 5548 switches
All,
I have the following configuration and problem that I am not sure how to fix:
I have three Dell R910 servers with 1 TB of memory, and each has two dual-port Brocade 1020 CNA cards. I am using a distributed switch for the VM network and a second distributed switch for vMotion. I have two of the 10G ports configured in each distributed switch using IP hash. The management network is configured using a standard switch with two 1G ports.
On the Nexus side, we have two Nexus 5548 switches connected together with a trunk. We have two vPCs configured to each ESX host, consisting of two 10 Gig ports in each vPC with one port going to each switch. The vPC is configured for static LAG.
What I am seeing is that after a few hours the virtual machines will not be accessible via network anymore. So if you ping the VM it will not work and if you get on the console of the VM then ping the gateway then nothing as well but if you try to ping another virtual machine on the same host on the same VLAN then it will work so traffic is going through the ESX backplane. If I reboot the ESX host then things will work again for another few hours or so then the problem repeats.
The version of vSphere I am using is ESXi4.1
Please assist I am stuck.
Thanks...

Here is the link for the Nexus and Brocade interoperability matrix:
http://www.cisco.com/en/US/docs/switches/datacenter/mds9000/interoperability/matrix/Matrix7.html#wp313498
Usually this table shows the models that have been tested and verified.
However, I do not see the Brocade 5300 listed in the table. It could be that interoperability has not been tested by both vendors, particularly for the 5300 model.
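On the static-LAG point in the original question: IP-hash teaming on a vSphere switch pairs with a static EtherChannel (channel-group mode on, not LACP). A sketch of the matching vPC configuration, to be mirrored on both 5548s, with the port-channel, vPC, and interface numbers all as assumptions:

```
! Sketch only -- numbers are placeholders, not from this thread.
interface port-channel 10
  switchport mode trunk
  vpc 10
interface Ethernet1/10
  switchport mode trunk
  channel-group 10 mode on
```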