Solaris containers and Fibre Channel connection

Hi guys,
I have successfully migrated a Solaris 8 system into a Solaris 10 container, and I have also mounted some RAID-enabled partitions in the non-global zone without any issue. Now I need to connect a Fibre Channel SAN to my host OS (Solaris 10) and add the SAN partition to my container (Solaris 8). If I run the cfgadm command, it says: cfgadm: Library error: Device library initialize failed: Invalid argument.
There are other ways to mount the SAN (mount it in the global zone and export the partition to the container), but I need the method above. Is it possible to mount the SAN partition from the non-global zone? If so, how should I do it?
thanks,
Tharanga

As far as I know, you can't give a non-global zone the privileges it would need to mount an FC LUN directly inside the zone.
I think you have to mount the LUN in the global zone and then loopback-mount it into the non-global zone.
.7/M.
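
For reference, a minimal sketch of the loopback-mount approach described above, assuming the LUN is already visible and labelled in the global zone; the device name c2t0d0s0, the mount points, and the zone name s8zone are placeholders:

# In the global zone: create a filesystem on the SAN LUN and mount it.
newfs /dev/rdsk/c2t0d0s0
mkdir -p /san/data
mount /dev/dsk/c2t0d0s0 /san/data

# Add a loopback (lofs) filesystem entry to the solaris8-branded zone.
cat > /tmp/s8zone-fs.cfg <<'EOF'
add fs
set dir=/data
set special=/san/data
set type=lofs
end
commit
EOF
zonecfg -z s8zone -f /tmp/s8zone-fs.cfg

# The zone sees /san/data as /data on its next boot.
zoneadm -z s8zone reboot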

Similar Messages

  • SUN Solaris: Containers and Virtualization

    Hi Everyone,
    Just starting on a new project which will be deployed on SUN Mxxxx servers.
    We have opted to virtualize the systems; does anyone have experience with Solaris virtualization, containers, etc.?
    Any white papers, how to guide or experience would be greatly appreciated.
    Kind Regards
    Michael

    We make heavy use of containers/zones (together with ZFS as the filesystem) and it works like a charm. On one box we also use Solaris Cluster together with containers.
    Check the following notes:
    Note 1246022 - Support for SAP applications in Solaris Zones
    Note 870652 - Installation of SAP in a Solaris Zone
    Note 724713 - parameter settings for Solaris 10
    What database will be used?
    Markus

  • Solaris 8 and dialup connection

    I have a SunBlade100 running Solaris8 with a dialup connection to the internet using a 56k external modem. The connection has been fine until several days ago when the dialup script started ending at
    "call cleanup (0)"
    with the following error message:
    "10:08:43 process_ppp_msg: PPP_ERROR_IND maximum number of configure requests exceeded".
    The following Q&A is from docs.sun.com:
    Problem: Cannot establish PPP link. Operation fails with the status message:
    "PPP error on ip_interface: Maximum number of
    configure requests exceeded"
    Solution: PPP Configure-request frames are generated to start the link
    establishment phase. After a certain number of frames (defined by the keyword lcp_max_restart in the file ppp.conf) are generated without a valid response, the client assumes that the remote server is unreachable. This may indicate one of the following:
    There is a problem with the physical connection between the two hosts. Check the cable to your modem.
    PPP is not running on the remote host. Check that PPP is configured and started at both ends of the link.
    The link establishment phase is not completed, because the configuration negotiation does not converge. Check for configuration problems.
    If you are trying to establish a link over a long-delay network, such as a satellite connection, or over a congested line, the maximum number of configure requests may be exceeded before the negotiation is completed. Increase the maximum number of configure requests sent (lcp_max_restart) and the time between retries (lcp_restart_timer).
    My question is: where do you go to edit the variables "lcp_max_restart" and "lcp_restart_timer"?
    Thanks in advance for any info!

    Thanks for the reply.
    I did a file search for ppp.conf, as it doesn't exist at /etc/opt/SUNWconn/ppp/.
    It doesn't seem to exist anywhere on my computer!
    Is ppp.conf the normal place to set the variables "lcp_max_restart" and "lcp_restart_timer"?
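
    For what it's worth, a sketch of how to check which PPP implementation is actually installed: ppp.conf belongs to Solstice PPP, while the bundled asppp uses /etc/asppp.cf and different keywords. The values shown below are illustrative placeholders, not recommendations:

    # List any PPP-related packages and check for the Solstice PPP config file.
    pkginfo | grep -i ppp
    ls -l /etc/opt/SUNWconn/ppp/ppp.conf

    # If ppp.conf exists, the keywords from the docs.sun.com excerpt go in the
    # section that describes your link; illustrative values only:
    #   lcp_max_restart   10
    #   lcp_restart_timer 5000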

  • Solaris 8 Containers and x86 compatability

    My client relies on a program for the meat and potatoes of our processing, and that program currently only works on Solaris 8. We are using Sun Blade 2500 workstations. However, we are moving offices, and to build up the new lab the client wants to get away from the Sun workstations and move towards Linux or Windows machines. He is, however, willing to migrate to Solaris 10 provided that the program we depend on is still usable. Enter my solution: Solaris 8 Containers.
    The documentation for containers was not very clear; I am getting two different messages from the Sun website. One says that Solaris 8 Containers will work on SPARC systems and/or systems running Solaris 10 10/08 or later. Other documentation says Solaris 8 Containers are SPARC-only.
    I installed Solaris 10 on an HP x86-based desktop via VirtualBox. I installed all the SUNWs8brandX patches that came with the containers bundle and followed the install/setup directions. I cannot verify that it works, as I am getting a "Cannot Execute Brand Specific Error" message.
    Does anyone else have a suggestion as to how I could get a Solaris 8 dependent program running on a Linux or Windows based machine? Is my solution viable?

    agquint wrote:
    What I meant was the post below your first one. But you correctly interpreted it in your next post. So "Solaris 8 Containers" only works on SPARC. I don't get why Sun would only have container support for SPARC systems. That is in total contradiction to why I would think they developed containers in the first place: backwards compatibility, to support programs that haven't yet been migrated to the current version of Solaris, regardless of whether they are x86 or SPARC. They have a solution but it only works 50% of the way? Hmph.
    There is probably just very little demand for it. During the Solaris 8/Solaris 9 days, Sun was killing off the x86 line, so there is very little installed base of Solaris 8 x86 compared to Solaris 8 SPARC. (It wouldn't help you out anyway, it appears, since you have SPARC binaries.)
    Darren

  • I have a fiber-optic internet connection and I'm trying to configure it with my AirPort Express, but it doesn't work; it appears that I have an IP and DNS. As I'm a computer dummy, could someone please help me configure it?

    You're welcome.
    Voicemail is left at your carrier's server. That will continue to work unless you report your iPhone as lost or stolen with your carrier.
    You may never find it again and you can't if the iPhone remains offline or out of service which means the iPhone is powered off or doesn't have cellular reception.

  • 3750 switch and HP switch fiber connectivity issue

    I have connected two Cisco 3750 switches (WS-C3750-48TS-S) with an LC-to-LC duplex single-mode fiber cable, and the two switches communicate with each other.
    The Cisco documentation says the GLC-SX-MM supports only multimode fiber, so I am surprised that it works over single-mode fiber at all.
    Can anyone explain why it works over single-mode fiber?
    Also, when I connect an HP 4208 switch to the same Cisco 3750, the interface comes up, but the input packet counter stays at 0 while the output packet counters increase on both ends.
    I have checked that the SFPs and the fiber cable are compatible with the HP.
    Can anybody suggest possible causes?
    Regards,
    Mukesh Kumar
    Network Engineer
    Spooster IT Services

    I tried to troubleshoot this issue using some show commands.
    There are specific commands to check an SFP transceiver, as given below:
    show hw-module subslot <slot>/<subslot> transceiver <port> idprom
    show interfaces [int_name] transceiver [detail] [module mod]
    But these commands only support transceivers that support DOM, and the GLC-SX-MM doesn't support DOM.
    Can anyone tell me whether there are other troubleshooting commands that would help solve this issue?
    Regards,
    Mukesh Kumar
    Network Engineer
    Spooster IT Services

  • Filtering TCP connections on Solaris 9 and 10

    I need to develop a driver to filter TCP packets on Solaris 9 and 10. My understanding is that I need to develop a STREAMS module to do this. From searching the forums and reading the docs, it does NOT look like a STREAMS module can be inserted between IP and TCP in the stack; instead it looks like I have to insert my module between the NIC driver and the IP stack. Is that correct? Is there an alternative interface (hook) in the IP stack that would give me access to TCP packets?
    Assuming STREAMS is the way to go, can anyone point me to some example code to get a jump start on this project?
    Thanks in advance for your time and feedback
    Tom Fortmann
    Xcape Solutions
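
    Not a full answer, but for the placement question: ifconfig can insert a STREAMS module into an interface's stream beneath the IP module with its modinsert/modremove subcommands. A rough sketch, where myfilt is a hypothetical module you have already built and installed, and hme0 is a placeholder interface:

    # Show the modules currently plumbed in the interface's stream (position 0 is the top).
    ifconfig hme0 modlist

    # Insert the hypothetical filter module "myfilt" at position 2, i.e. below the
    # ip module and above the device driver (confirm positions from modlist first).
    ifconfig hme0 modinsert myfilt@2

    # Remove it again.
    ifconfig hme0 modremove myfilt@2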

    Hello Aerosmith,
    On Solaris 9, do you see any window with a progress bar at Adobe Reader startup saying "Adding font folders to cache"? I assume you're launching Adobe Reader from the command line without any document -- approximately how many seconds does it take for Adobe Reader to launch?
    Thanks,
    Gaurav

  • Solaris containers

    Hello,
    I have been using SPARC workstations: SUNW,Ultra-5_10.
    There are 4 such machines in total. Users use them for MPICH programming, and they all run Solaris 8,
    but I always have a hard time maintaining these machines. Authentication for them comes from a Solaris 10 machine using NIS, and there are NFS mounts on each workstation.
    I am wondering if it would be possible to move these 4 workstations onto Solaris 10 on a V240 by using Solaris containers.
    The V240 has 8 GB of memory. Is this possible?
    Please guide me. Thanks.

    Wow. That usually makes things more centralized and easier to maintain. What is your main concern as far as maintenance goes?
    My concern is that I do not want to see "can't open boot device" or "disk error" kinds of messages. I do not want these workstations to shock me by suddenly going offline due to a hardware failure; that sort of thing has been happening regularly. Every time, I have to take out the HDD and try it in different machines, and if that does not help, mirror another disk and put it in the workstation. Space in the room is being wasted, these machines need a keyboard connected in order to boot, and I do not have spares if something goes wrong with the existing ones. Most important, a system can go down at any time, causing loss of work for the logged-in users.
    These systems also keep running out of swap space, disk space is limited, /var fills up, and I can't patch them. Some days back it was not a hard disk issue at all: I had to move the hard disk to another machine, and it looks like bad memory on the original machine. Issues like that.
    Probably. How do you think that would make them simpler to maintain?
    I will not have to worry about any of the above, I guess. Basically, I don't think I should wait for this hardware to die, and buying 4 separate SPARC machines would be costly. What better way is there than using containers?
    Please guide
    Thanks for replying...
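
    To illustrate what the consolidation being discussed could look like, here is a rough sketch of moving one workstation into a solaris8-branded zone on the V240 via a flash archive. It assumes the Solaris 8 Containers packages are installed on the V240; the zone name, interface, IP address, and paths are placeholders:

    # On the Solaris 8 workstation: create a flash archive of the system.
    flarcreate -S -n ultra1 /net/v240/archives/ultra1.flar

    # On the V240 global zone: configure a solaris8-branded zone.
    cat > /tmp/ultra1.cfg <<'EOF'
    create -t SUNWsolaris8
    set zonepath=/zones/ultra1
    add net
    set physical=bge0
    set address=192.168.1.101/24
    end
    commit
    EOF
    zonecfg -z ultra1 -f /tmp/ultra1.cfg

    # Install the zone from the archive, then boot it.
    zoneadm -z ultra1 install -u -a /net/v240/archives/ultra1.flar
    zoneadm -z ultra1 boot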

  • Troubleshooting Fiber Connection on a Catalyst 2960

    I am trying to test the fiber connectivity on a Catalyst 2960 before I deploy it, so I thought I would connect it to another switch in my office that has an open fiber port. The other switch is a Catalyst 3560G. Here are the port configurations:
    interface GigabitEthernet0/2
    switchport trunk allowed vlan 1,100
    switchport mode trunk
    macro description cisco-switch
    interface GigabitEthernet0/25
    switchport trunk encapsulation dot1q
    switchport trunk allowed vlan 1,100
    switchport mode trunk
    The first configuration is the Catalyst 2960 and the second is the 3560G.
    When you show the interface on each of these, it shows that the media is recognized, but the line protocol and the GigabitEthernet port are down.
    Any ideas?

    Sorry... a fiber optic cable with a connector on each end.
    To aid in troubleshooting, we often loop the signal back to the originating device. An optical loopback is just connecting the transmit (Tx) to the receive (Rx).
    The multimode SFP/GBIC transceiver you are using will allow you to directly connect the Transmit and Receive ports without damage to the unit. This should provide you with a green link LED.
    If so, then you can reconnect your fibers and loopback (connect the two fibers together) at the far end of the fiber link (use an optical adapter) and see if you get a green LED.

  • Compiler directives to differentiate between Solaris 9 and Solaris 10

    I have an API which makes certain function calls that are different for Solaris 9 and Solaris 10. I cannot add my own defines, since that would mean that applications would have to define them too.
    Does the Sun Workshop compiler provide any default directives to differentiate between Solaris 9 and Solaris 10? Something like #ifdef __SunOS_5.10?
    thanks.

    As explained in the C User's Guide, the compiler predefines several macros, one of which represents the Solaris version number.
    The macro name is derived from the output of the commands uname -s and uname -r: it starts with __, joins the two outputs with _, and replaces dots with _.
    Running on Solaris 9:
    % uname -s
    SunOS
    % uname -r
    5.9
    The defined macro name is __SunOS_5_9.
    On Solaris 10 you get __SunOS_5_10.
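
    A quick way to confirm which macro your particular compiler defines, assuming the Sun Workshop / Sun Studio cc is in your PATH (the scratch file name is arbitrary):

    # Preprocess a tiny test file and see which branch survives.
    cat > /tmp/osmacro.c <<'EOF'
    #if defined(__SunOS_5_10)
    "this compiler defines __SunOS_5_10"
    #elif defined(__SunOS_5_9)
    "this compiler defines __SunOS_5_9"
    #endif
    EOF
    cc -E /tmp/osmacro.c | grep "this compiler"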

  • Oracle Rac on Solaris Containers

    We are planning to implement Oracle 11g RAC on Solaris 10 using non-global zones. RAC is now officially certified on Solaris Containers.
    But we are still investigating whether a zone cluster (using Solaris Cluster) is a prerequisite for running Oracle RAC in a Solaris non-global zone.
    This oracle document *(Supported Virtualization and Partitioning Technologies for Oracle Database and RAC Product Releases [ID 1173831.1])* stated that:
    _"*Oracle Solaris Containers are supported with Oracle RAC 10gR2 and 11gR1 (with Oracle Solaris Cluster on SPARC64). Solaris version 10 Update 7 or later (patches 141444-09, 143055-01, 142900-06, 143137-04 "md patch") with Oracle Solaris Cluster 3.3 and 3.2u2 patched to 126106-39 or later."*_
    Does this mean that Oracle RAC is supported in Solaris Containers only with Solaris Cluster?
    In other words, is Oracle RAC supported in Solaris non-global zones without using Solaris Cluster?

    That was my understanding too. Do you have any source document where it is specifically stated that Solaris Cluster (a zone cluster) is needed for RAC?
    Please check this document: www.oracle.com/technetwork/articles/systems-hardware-architecture/deploying-rac-in-containers-168438.pdf. Nowhere does it say that Solaris Cluster is needed.

  • Are there differences between Solaris 8 and Solaris 2.6 in the way web pages are displayed for Chinese and Korean text using UTF-8 settings?

    A user is running a web application on Solaris Release 2.6 with UTF-8. After the server was updated to Solaris 8, the application is experiencing problems with webpages containing Korean and Chinese text. The encoding is UTF-8. Has anyone experienced this problem? I checked the packages installed on Solaris 8 and they are consistent with what was installed on Solaris 2.6. Could this possibly be a patch-level problem? Thank you.

    Hi,
    It could be a problem with missing patch updates. Refer to the following FTP site for Solaris patches:
    Connect with an FTP client to sunsolve.sun.com.
    The /pub/patches directory contains a lot of patches for Solaris.
    Find and install the corresponding patch required to solve your problem.
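
    To compare what the two systems actually have installed, a quick illustrative check (123456-01 is a placeholder patch ID, not a real recommendation):

    # On each machine, dump the installed patch list to a file named after the host.
    showrev -p > /tmp/patches-`uname -n`.txt

    # Check whether a particular patch is installed.
    showrev -p | grep 123456-01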

  • SSH Differences between Solaris 9 and Solaris 10

    I use public key authentication when connecting via SSH but have noticed a difference between Solaris 9 and Solaris 10 and wondered if it's an environment setup issue. I keep my keys in $HOME/.ssh
    When connecting from Solaris 9 I can provide an identity file without a path, regardless of the directory I'm in, e.g.:
    ssh -i my_identity_file user@hostname
    The above works even if I'm not in the $HOME/.ssh directory. But when using the same command from Solaris 10 I get the following error:
    Warning: Identity file my_identity_file does not exist.
    If I run the command from $HOME/.ssh on Solaris 10 it connects fine, and if I pass in the full path like so, it also works fine:
    ssh -i $HOME/.ssh/my_identity_file user@hostname
    Is there a setting specific to SSH somewhere? I can't see anything in my environment that's different between the two systems, and there's certainly no entry in $PATH that points to $HOME/.ssh. How could I get SSH on Solaris 10 to work by just providing the identity file name and not the full path?
    Regards
    Rich

    It's not explicitly defined in /etc/ssh/ssh_config, so I'm assuming it would be using the default, which is ~/.ssh/id_dsa.
    But surely that's irrelevant if I'm using the -i switch to provide the identity file?
    Remember, the problem here is that I have to provide a full path to the identity file, whereas before just the filename would do.
    Rich
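
    Not an explanation of the different Solaris 10 behaviour, but a common workaround is to give the key a Host entry in the per-user client configuration so that no -i path is needed at all; the host name, user, and file names below are placeholders:

    # Append a Host entry to the per-user ssh client config.
    cat >> $HOME/.ssh/config <<'EOF'
    Host myserver
        HostName myserver.example.com
        User rich
        IdentityFile ~/.ssh/my_identity_file
    EOF
    chmod 600 $HOME/.ssh/config

    # The key is then picked up without any -i option.
    ssh myserver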

  • Solaris Kernel and TCP/IP Tuning Parameters (Continued)

    This page describes some configuration optimizations for Solaris hosts running ATG Page Serving instances (application servers) that will increase server efficiency.
    Note that these changes are specific to Solaris systems running ATG application servers (+page serving+ instances). Do not use these on a web server or database server. Those systems require entirely different settings.
    h3. Solaris 10 Kernel
    Adjust /etc/system (parameters below) and reboot the system.
    set rlim_fd_cur=4096
    set rlim_fd_max=4096
    set tcp:tcp_conn_hash_size=32768
    set shmsys:shminfo_shmmax=4294967295
    set autoup=900
    set tune_t_fsflushr=1
    h4. Set limits on file descriptors
    {color:blue}set rlim_fd_max = 4096{color}
    {color:blue}set rlim_fd_cur = 4096{color}
    Raise the file-descriptor limits to a maximum of 4096. Note that this tuning option was not mentioned in the "Sun Performance And Tuning" book.
    [http://download.oracle.com/docs/cd/E19082-01/819-2724/chapter2-32/index.html]
    h4. Increase the connection hash table size
    {color:blue}set tcp:tcp_conn_hash_size=8192{color}
    Increase the connection hash table size to make look-ups more efficient. The connection hash table size can be set only once, at boot time.
    [http://download.oracle.com/docs/cd/E19455-01/816-0607/chapter4-63/index.html]
    h4. Increase maximum shared memory segment size
    {color:blue}set shmsys:shminfo_shmmax=4294967295{color}
    Increase the maximum size of a System V shared memory segment that can be created from roughly 8 MB to 4 GB.
    This provides an adequate ceiling; it does not imply that shared memory segments of this size will be created.
    [http://download.oracle.com/docs/cd/E19683-01/816-7137/chapter2-74/index.html]
    h4. Increase memory allocated for dirty pages
    {color:blue}set autoup=900{color}
    Increase the amount of memory examined for dirty pages in each invocation and frequency of file system synchronizing operations.
    The value of autoup is also used to control whether a buffer is written out from the free list. Buffers marked with the B_DELWRI flag (which identifies file content pages that have changed) are written out whenever the buffer has been on the list for longer than autoup seconds. Increasing the value of autoup keeps the buffers in memory for a longer time.
    [http://download.oracle.com/docs/cd/E19082-01/819-2724/chapter2-16/index.html]
    h4. Specify the time between fsflush invocations
    Specifies the number of seconds between fsflush invocations.
    {color:blue}set tune_t_fsflushr=1{color}
    [http://download.oracle.com/docs/cd/E19082-01/819-2724/chapter2-105/index.html]
    Again, note that after adjusting any of the preceding kernel parameters you will need to reboot the Solaris server.
    h3. TCP
    ndd -set /dev/tcp tcp_time_wait_interval 60000
    ndd -set /dev/tcp tcp_conn_req_max_q 16384
    ndd -set /dev/tcp tcp_conn_req_max_q0 16384
    ndd -set /dev/tcp tcp_ip_abort_interval 60000
    ndd -set /dev/tcp tcp_keepalive_interval 7200000
    ndd -set /dev/tcp tcp_rexmit_interval_initial 4000
    ndd -set /dev/tcp tcp_rexmit_interval_max 10000
    ndd -set /dev/tcp tcp_rexmit_interval_min 3000
    ndd -set /dev/tcp tcp_smallest_anon_port 32768
    ndd -set /dev/tcp tcp_xmit_hiwat 131072
    ndd -set /dev/tcp tcp_recv_hiwat 131072
    ndd -set /dev/tcp tcp_naglim_def 1
    h4. Tuning the Time Wait Interval and TCP Connection Hash Table Size
    {color:blue}/usr/sbin/ndd -set /dev/tcp tcp_time_wait_interval 60000{color}
    The tcp_time_wait_interval is how long a connection stays in the TIME_WAIT state after it has been closed (the default value is 240000 ms, or 4 minutes). With the default setting, a closed socket (for example, after an FTP connection is closed) remains in this state for 4 minutes. This is normal operating behavior: it ensures that any slow packets on the network arrive before the socket is completely shut down, so that a future program using the same socket number won't get confused by receiving packets intended for the previous program.
    On a busy Web server a large backlog of connections waiting to close could build up and the kernel can become inefficient in locating an available TCP data structure. Therefore it is recommended to change this value to 60000 ms or 1 minute.
    h4. Tuning the maximum number of requests per IP address per port
    {color:blue}ndd -set /dev/tcp tcp_conn_req_max_q 16384{color}
    {color:blue}ndd -set /dev/tcp tcp_conn_req_max_q0 16384{color}
    The {color:blue}tcp_conn_req_max_q{color} and {color:blue}tcp_conn_req_max_q0{color} parameters are associated with the maximum number of requests that can be accepted per IP address per port. tcp_conn_req_max_q is the maximum number of incoming connections that can be accepted on a port. tcp_conn_req_max_q0 is the maximum number of “half-open” TCP connections that can exist for a port. The parameters are separated in order to allow the administrator to have a mechanism to block SYN segment denial of service attacks on Solaris.
    The default values may be too low for a non-trivial web server, messaging server, or directory server installation, or any server that expects more than 128 concurrent accepts or 4096 concurrent half-opens. Since the ATG application servers are behind a DMZ firewall, we needn't keep these values small to guard against DoS attacks.
    h4. Tuning the total retransmission timeout value
    {color:blue}ndd -set /dev/tcp tcp_ip_abort_interval 60000{color}
    {color:blue}tcp_ip_abort_interval{color} specifies the default total retransmission timeout value for a TCP connection. For a given TCP connection, if TCP has been retransmitting for tcp_ip_abort_interval period of time and it has not received any acknowledgment from the other endpoint during this period, TCP closes this connection.
    h4. Tuning the Keep Alive interval value
    {color:blue}ndd -set /dev/tcp tcp_keepalive_interval 7200000{color}
    {color:blue}tcp_keepalive_interval{color} sets, on a system-wide basis, the idle interval after which the first keep-alive probe is sent on a TCP connection.
    If SO_KEEPALIVE is enabled for a socket, the first keep-alive probe is sent after the TCP connection has been idle for two hours, the default value of the {color:blue}tcp_keepalive_interval{color} parameter. If the peer does not respond to the probe within eight minutes, the TCP connection is aborted.
    The {color:blue}tcp_rexmit_interval_*{color} values set the initial, minimum, and maximum retransmission timeout (RTO) values for TCP connections, in milliseconds.
    h4. Tuning the TCP Window Size
    {color:blue}/usr/sbin/ndd -set /dev/tcp tcp_xmit_hiwat 65535{color}
    {color:blue}/usr/sbin/ndd -set /dev/tcp tcp_recv_hiwat 65535{color}
    Setting these two parameters controls the transmit buffer and receive window. We are tuning the kernel to set each window to 65535 bytes. If you set it to 65536 bytes (64K bytes) or more with Solaris 2.6, you trigger the TCP window scale option (RFC1323).
    h4. Tuning TCP Slow Start
    {color:blue}/usr/sbin/ndd -set /dev/tcp tcp_slow_start_initial 4{color}
    tcp_slow_start_initial is the number of packets initially sent before an acknowledgment, i.e. the initial congestion window limit.
    h4. Tuning the default bytes to buffer
    {color:blue}ndd -set /dev/tcp tcp_naglim_def 1{color}
    {color:blue}tcp_naglim_def{color} is the default number of bytes to buffer. Each connection has its own copy of this value, which is set to the minimum of the MSS for the connection and the default value. When the application sets the TCP_NODELAY socket option, it changes the connection's copy of this value to 1. The idea behind this algorithm is to reduce the number of small packets transmitted across the wire by introducing a short (100ms) delay for packets smaller than some minimum.
    Changing the value of tcp_naglim_def to 1 will have the same effect (on connections established after the change) as if each application set the TCP_NODELAY option.
    {note}
    The current value of any of these TCP parameters can be displayed with ndd -get. For example, to retrieve the current setting of the {color:blue}tcp_naglim_def{color} parameter, simply execute the command:\\
    {color:blue}ndd -get /dev/tcp tcp_naglim_def{color}
    {note}
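
    Since ndd settings do not persist across reboots, they are usually reapplied from a boot-time script. A minimal sketch using the legacy rc mechanism (the script name and run-level link below are a common convention, not something mandated by Solaris or ATG):

    # Create an init script that reapplies the ndd settings listed above.
    cat > /etc/init.d/nddconfig <<'EOF'
    #!/sbin/sh
    ndd -set /dev/tcp tcp_time_wait_interval 60000
    ndd -set /dev/tcp tcp_conn_req_max_q 16384
    ndd -set /dev/tcp tcp_conn_req_max_q0 16384
    ndd -set /dev/tcp tcp_naglim_def 1
    # (add the remaining ndd lines from the list above as needed)
    EOF
    chmod 744 /etc/init.d/nddconfig

    # Run it at each boot via a run-level 2 start link.
    ln -s /etc/init.d/nddconfig /etc/rc2.d/S70nddconfig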
    h3. References
    Solaris Tunable Parameters Reference Manual
    [http://download.oracle.com/docs/cd/E19455-01/816-0607/index.html]
    WebLogic Server Performance and Tuning
    [http://download.oracle.com/docs/cd/E11035_01/wls100/perform/OSTuning.html]

    For example, Socket.setSoTimeout() sets the SO_TIMEOUT option, and I want to know what TCP parameter this option corresponds to in the underlying TCP connection.
    This doesn't correspond to anything in the connection; it is an attribute of the API.
    The same question arises for other options from the SocketOptions class.
    setTcpNoDelay() controls the Nagle algorithm. set{Send,Receive}BufferSize() controls the local socket buffers.
    Most of this is quite adequately described in the javadoc actually.

  • Solaris containers, am I thinking in the right way ?

    Hi there,
    I've downloaded Solaris 10 and so far I really like what I see. With all the exploits appearing on Linux (local root exploits in the kernel, for example) I'm seriously considering replacing my Debian GNU/Linux environment with Solaris 10. Of course this requires a lot of preparation, which is where I am now: slowly but steadily checking and comparing to make sure that Solaris can do everything I'm currently doing on Linux.
    One of the reasons I find Solaris appealing is the so-called 'containers'. I've already read some FAQs which explain the procedure (like the 'partial root' setup where your main directories get "linked" from the root system), but I'm still not too sure what to make of all this.
    Is this setup comparable with a chroot'ed environment, or is it comparable (to a certain extent) with the UML (User Mode Linux) setup, where a copy of the Linux kernel runs in userspace, thus making it harder to gain access to the host system? I know that the container shares the same kernel and memory, but what about the rest?
    Many websites compare the containers with BSD jails, but from what I have read on sun.com there still seem to be many differences.
    So could someone shed a little more light on this subject or, even better, point me to some documentation that gives a little more detail than just the broad basics?
    Thanks in advance!

    I believe it's like the domains that you get on the large machines: it's a way of having a number of different environments/virtual machines on one system.
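
    As a small illustration of the shared-kernel-but-isolated model, assuming a non-global zone named myzone already exists (the name is a placeholder):

    # From the global zone: list zones and log in to one.
    zoneadm list -cv
    zlogin myzone

    # Inside the non-global zone: uname shows the shared Solaris 10 kernel,
    # zonename reports which zone you are in, and ps sees only that zone's processes.
    zonename
    uname -a
    ps -ef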
