Patching Solaris 10 with non-global zones

The patch README says to perform the installation in single-user mode. If the system is in single-user mode, will the non-global zones get patched as well?

Hello,
I found a couple of docs that can help you. See the one that describes how to create a boot environment (BE) in a solaris10 branded zone; this is the safest way to patch a solaris10 branded zone.
How to Install a Solaris Patchset in a Branded Zone (Doc ID 1489197.1)
And if you are using Solaris 11.1, you can use something similar to Live Upgrade (LU):
How to create and patch a new boot environment in an Oracle Solaris 10 Zone on a Oracle Solaris 11.1 system (Doc ID 1558773.1)
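For example, a minimal Live Upgrade flow on a Solaris 10 global zone might look like this (the BE name, patch directory, and patch ID are placeholders only); lucreate copies the non-global zones into the new BE, and luupgrade patches them along with the global zone:

# lucreate -n patchBE
# luupgrade -t -n patchBE -s /var/tmp/patches 137137-09
# luactivate patchBE
# init 6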
Regards
Eze

Similar Messages

  • Live Upgrade - Solaris 10 8/07 (U4), with non-global zones and SC 3.2

    Dear all,
    I need to use Live Upgrade for SC 3.2 with non-global zones, going from Solaris 10 U4 to Solaris 10 10/09 (the latest release), and then update the cluster to 3.2 U3.
    I don't know where to start; I've read lots of documents, but couldn't find one complete document that covers the whole process.
    I know that upgrading Solaris 10 with non-global zones has been supported since my Solaris 10 release, but I am not sure if it's supported with SC.
    Appreciate your help

    Hi,
    I am not sure whether this document:
    http://wikis.sun.com/display/BluePrints/Maintaining+Solaris+with+Live+Upgrade+and+Update+On+Attach
    has been on the list of docs you found already.
    If you click on the download link, it won't work. But if you use the Tools icon in the upper right-hand corner and click on Attachments, you'll find the document. Its content is based solely on configurations with ZFS as root and zone root, but it should have valuable information for other deployments as well.
    Regards
    Hartmut

  • Sun Cluster 3.2, Live Upgrade with non-global zones

    I have a two-node cluster with 4 HA-container resource groups holding 4 non-global zones running Solaris 10 8/07 (U4), which I would like to upgrade to Solaris 10 10/08 (U6). The root filesystem of the non-global zones is ZFS and sits on shared SAN disks so that it can be failed over.
    For the Live Upgrade I need to convert the root ZFS to UFS, which should be straightforward.
    The tricky part is going to be performing a Live Upgrade on the non-global zones, as their root fs is on the shared disk. I have a free internal disk on each of the nodes for ABE environments. But when I run the lucreate command, is it going to put the ABE of the zones on the internal disk as well, or can I specify the location of the ABE for the non-global zones? Ideally I want this to be on shared disk.
    Any assistance gratefully received

    Hi,
    I am not sure whether this document:
    http://wikis.sun.com/display/BluePrints/Maintaining+Solaris+with+Live+Upgrade+and+Update+On+Attach
    has been on the list of docs you found already.
    If you click on the download link, it won't work. But if you use the Tools icon in the upper right-hand corner and click on Attachments, you'll find the document. Its content is based solely on configurations with ZFS as root and zone root, but it should have valuable information for other deployments as well.
    Regards
    Hartmut
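    For what it's worth, on a UFS root the target slices for the ABE are given to lucreate explicitly via -m (the device name below is only an example); as far as I know, zone roots on shared storage are copied or remapped by lucreate itself rather than placed via -m:

    # lucreate -n s10u8 -m /:/dev/dsk/c0t1d0s0:ufs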

  • Sharing software package with non-global zone

    I've installed Solaris Studio 12.3 in the global zone on Solaris 11, following the instructions from http://pkg-register.oracle.com, and I want to make it available to the non-global zones. Do I need to install it independently in every zone, or can I export/share it?

    All files are normally installed in /opt/solarisstudio12.3. You can share that directory (for instance via a read-only lofs mount, sketched below), but installing it independently in each zone is recommended if you want different versions in different zones.
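    One hedged way to share the existing installation read-only is a lofs mount in the zone configuration (the zone name is an example):

    # zonecfg -z myzone
    zonecfg:myzone> add fs
    zonecfg:myzone:fs> set dir=/opt/solarisstudio12.3
    zonecfg:myzone:fs> set special=/opt/solarisstudio12.3
    zonecfg:myzone:fs> set type=lofs
    zonecfg:myzone:fs> add options ro
    zonecfg:myzone:fs> end
    zonecfg:myzone> commit

    The directory then appears inside the zone on its next boot.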

  • What options do I have for applying the Recommended patchset on Solaris 10 with a bunch of non-global zones?

    With the standard patching process (installcluster), it takes a very long time, since each zone needs to be patched and validated as well. Is there any option to apply the patchset to the global zone only, and then upgrade the non-global zones later?
    If possible, I'd like to use LU.

    You can use LU, but it will depend on your system configuration. There are instructions in the README of the patchset for installing it on an alternate boot environment (previously created using lucreate); see the sketch after the doc list below.
    If you plan to use LU, read the following docs first to avoid common issues:
    Solaris Live Upgrade Software Patch Requirements (Doc ID 1004881.1)
    List of currently unsupported Live Upgrade (LU) configurations (Doc ID 1396382.1)
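    As a rough sketch, the ABE flow from the patchset README looks something like this (the BE name and unpack directory are examples; check the README shipped with your patchset for the exact options):

    # lucreate -n patchBE
    # cd /var/tmp/10_Recommended
    # ./installcluster --apply-prereq --s10patchset
    # ./installcluster -B patchBE --s10patchset
    # luactivate patchBE
    # init 6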
    You can also use Parallel Patching feature to improve performance :
    https://blogs.oracle.com/patch/entry/zones_parallel_patching_feature_now
    Solaris 10 10/09: Zones Parallel Patching to Reduce Patching Time (System Administration Guide: Oracle Solaris Containers…
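    Enabling parallel patching is, as far as I know, just a matter of setting num_proc in /etc/patch/pdo.conf; patchadd then patches that many non-global zones concurrently:

    # grep num_proc /etc/patch/pdo.conf
    num_proc=4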
    What you can't do is patch the global zone only and the non-global zones later (unless the zones are detached). It is a requirement that the global and non-global zones stay synchronized at all times, considering that they share the same kernel.

  • After installing patch 137137-09: OK in the global zone, bad in a non-global zone

    Hi all,
    scratching my head with this one.
    Installed 137137-09 fine on a Sun Fire V210. The machine has one non-global zone running a proxy server (nothing very exciting there!). The non-global zone has a local filesystem attached, but I don't think this is the issue (on my test V210 I created the same sort of filesystem and was unable to replicate the problem).
    So 137137-09 is fine in the global zone (I had the non-global zone halted when the patch was installed), and it is also installed in the non-global zone (i.e., when the zone boots it reports rev 137137-09 via uname). In the patch log in the non-global zone I get this:
    PKG=SUNWust2.v
    Original package not installed.
    pkgadd: ERROR: ERROR: unable to get zone brand: zonecfg_get_brand: No such zone configured
    This appears to be an attempt to install the same architecture and
    version of a package which is already installed. This installation
    will attempt to overwrite this package.
    /usr/local/zones/cotchin/lu/dev/.SUNW_patches_1000109009-1847556-000000d3e42faa84/137137-09/FJSVcpcu/install/checkinstall: /usr/local/zones/cotchin/lu/dev/.SUNW_patches_1000109009-1847556-000000d3e42faa84/137137-09/FJSVcpcu/install/checkinstall: cannot open
    pkgadd: ERROR: checkinstall script did not complete successfully
    Dryrun complete.
    No changes were made to the system.
    I'm not sure if the branding error is causing the checkinstall postpatch script error or if they are unrelated. There don't seem to be any obvious permission problems that I can find. I have checked that all the pkg and patch utility patches are up to date on the system. Searching on the brand error gives me a link to a problem with 127127-11, but that was installed on the system before the local zone was created, and all the other seemingly relevant patches (e.g. 119254) are up to date or at a higher revision than recommended.
    I see the same problem on a M5000 which has two non global zones on it.
    Both machines had the Solaris 10 5/08 update bundle applied when it came out, and have had Recommended patch sets applied at regular intervals since.
    This issue only came to light when trying the latest bundles containing 138888-01/02: those fail to install on the global zones because the non-global zone install dies claiming 137137-09 is not installed (which is plainly wrong).
    I've tried to recreate this on a test server, but unfortunately everything works as it should, even though the test server has a similar history to the others in terms of patches and original setup.
    I'm planning to detach the non-global zone and try an attach -u to see if it will update the patches properly, but I'm not holding out much hope on that one (I need to wait for a maintenance window when I can take the zone down, in a couple of days).
    Any ideas?

    Well, I am following up on my own post: it seems I have determined what is causing the problem, or at least a situation in which it can be reproduced, and I have been able to reproduce it on my test system.
    It seems that if the zone's zonepath is under /usr (e.g. /usr/zones, /usr/local/zones, or some other path under /usr), the patchadd of 137137-09 will fail with a log similar to the one posted above, and this will stop further kernel patches (e.g. 138888-02) from being added.
    The test system had everything patched to current, and searching the web I can't find any other instances of this being an issue, but I have reproduced the problem on my test machine (which had worked OK before because its test zones were in a filesystem mounted as /zones). When I used zoneadm -z <zonename> move to relocate a zone to /usr/local and applied 137137-09, the same problem came up.
    I'm not sure what is causing this. I imagine it might have to do with some sort of confusion between the patch utilities and the read-only loopback filesystems in the sparse-root zone, but I can't be sure.
    Maybe someone at Sun will see this and figure out what the deal is :)
    When I moved my test zone back to /zones, the patch applied perfectly, so it's definitely caused by having it in /usr or /usr/local (I tried both locations, even though they are separate UFS filesystems on my test server).
    Oh, and I am running DiskSuite to mirror filesystems on my V210s, which may or may not have anything to do with it.
    Hope this helps someone in the future, at least!
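    In case it helps anyone hitting the same thing, the workaround amounts to relocating the zonepath out of /usr before patching, something like this (zone name and target path are from my setup; adjust to your own):

    # zoneadm -z cotchin halt
    # zoneadm -z cotchin move /zones/cotchin
    # patchadd /var/tmp/137137-09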

  • Lucreate not working with ZFS and non-global zones

    I replied to this thread (Re: lucreate and non-global zones) so as not to duplicate content, but for some reason it was locked, so I'll post here... I'm experiencing the exact same issue on my system. Below is the lucreate and zfs list output.
    # lucreate -n patch20130408
    Creating Live Upgrade boot environment...
    Analyzing system configuration.
    No name for current boot environment.
    INFORMATION: The current boot environment is not named - assigning name <s10s_u10wos_17b>.
    Current boot environment is named <s10s_u10wos_17b>.
    Creating initial configuration for primary boot environment <s10s_u10wos_17b>.
    INFORMATION: No BEs are configured on this system.
    The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
    PBE configuration successful: PBE name <s10s_u10wos_17b> PBE Boot Device </dev/dsk/c1t0d0s0>.
    Updating boot environment description database on all BEs.
    Updating system configuration files.
    Creating configuration for boot environment <patch20130408>.
    Source boot environment is <s10s_u10wos_17b>.
    Creating file systems on boot environment <patch20130408>.
    Populating file systems on boot environment <patch20130408>.
    Temporarily mounting zones in PBE <s10s_u10wos_17b>.
    Analyzing zones.
    WARNING: Directory </zones/APP> zone <global> lies on a filesystem shared between BEs, remapping path to </zones/APP-patch20130408>.
    WARNING: Device <tank/zones/APP> is shared between BEs, remapping to <tank/zones/APP-patch20130408>.
    WARNING: Directory </zones/DB> zone <global> lies on a filesystem shared between BEs, remapping path to </zones/DB-patch20130408>.
    WARNING: Device <tank/zones/DB> is shared between BEs, remapping to <tank/zones/DB-patch20130408>.
    Duplicating ZFS datasets from PBE to ABE.
    Creating snapshot for <rpool/ROOT/s10s_u10wos_17b> on <rpool/ROOT/s10s_u10wos_17b@patch20130408>.
    Creating clone for <rpool/ROOT/s10s_u10wos_17b@patch20130408> on <rpool/ROOT/patch20130408>.
    Creating snapshot for <rpool/ROOT/s10s_u10wos_17b/var> on <rpool/ROOT/s10s_u10wos_17b/var@patch20130408>.
    Creating clone for <rpool/ROOT/s10s_u10wos_17b/var@patch20130408> on <rpool/ROOT/patch20130408/var>.
    Creating snapshot for <tank/zones/DB> on <tank/zones/DB@patch20130408>.
    Creating clone for <tank/zones/DB@patch20130408> on <tank/zones/DB-patch20130408>.
    Creating snapshot for <tank/zones/APP> on <tank/zones/APP@patch20130408>.
    Creating clone for <tank/zones/APP@patch20130408> on <tank/zones/APP-patch20130408>.
    Mounting ABE <patch20130408>.
    Generating file list.
    Finalizing ABE.
    Fixing zonepaths in ABE.
    Unmounting ABE <patch20130408>.
    Fixing properties on ZFS datasets in ABE.
    Reverting state of zones in PBE <s10s_u10wos_17b>.
    Making boot environment <patch20130408> bootable.
    Population of boot environment <patch20130408> successful.
    Creation of boot environment <patch20130408> successful.
    # zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    rpool 16.6G 257G 106K /rpool
    rpool/ROOT 4.47G 257G 31K legacy
    rpool/ROOT/s10s_u10wos_17b 4.34G 257G 4.23G /
    rpool/ROOT/s10s_u10wos_17b@patch20130408 3.12M - 4.23G -
    rpool/ROOT/s10s_u10wos_17b/var 113M 257G 112M /var
    rpool/ROOT/s10s_u10wos_17b/var@patch20130408 864K - 110M -
    rpool/ROOT/patch20130408 134M 257G 4.22G /.alt.patch20130408
    rpool/ROOT/patch20130408/var 26.0M 257G 118M /.alt.patch20130408/var
    rpool/dump 1.55G 257G 1.50G -
    rpool/export 63K 257G 32K /export
    rpool/export/home 31K 257G 31K /export/home
    rpool/h 2.27G 257G 2.27G /h
    rpool/security1 28.4M 257G 28.4M /security1
    rpool/swap 8.25G 257G 8.00G -
    tank 12.9G 261G 31K /tank
    tank/swap 8.25G 261G 8.00G -
    tank/zones 4.69G 261G 36K /zones
    tank/zones/DB 1.30G 261G 1.30G /zones/DB
    tank/zones/DB@patch20130408 1.75M - 1.30G -
    tank/zones/DB-patch20130408 22.3M 261G 1.30G /.alt.patch20130408/zones/DB-patch20130408
    tank/zones/APP 3.34G 261G 3.34G /zones/APP
    tank/zones/APP@patch20130408 2.39M - 3.34G -
    tank/zones/APP-patch20130408 27.3M 261G 3.33G /.alt.patch20130408/zones/APP-patch20130408

    The thread was locked because you were not replying to it; you were hijacking that other person's discussion from 2012 to ask your own new question.
    You have now properly asked your question, and people can pay attention to you and not confuse you with that other person.

  • Is it possible to patch Global Zone and only specific Non-Global Zones?

    Hi Champs,
    Is it possible to patch the global zone and only specific non-global zones? The idea is to patch only the DEV zones on the system, test the applications, and then patch only the STG zones on the same server.
    Not sure if it is possible, but just throwing the question out there...
    Cheers,
    Nitin

    M10vir wrote:
    Yes, if you have branded (non-sparse) zone!
    Branded zones and sparse zones don't have the relation that you imply. In Solaris 10, native zones can be sparse or whole-root (non-sparse, as you say). Zones that are not native zones are branded zones. Branded zones on Solaris 10 include Solaris Legacy Containers, previously known as Solaris 8 Containers and Solaris 9 Containers. That add-on product allows you to run Solaris 8 and Solaris 9 application environments under a thin layer of virtualization provided by the brands framework. solaris8 and solaris9 branded zones can be patched independently of each other and of the global zone.
    Solaris 11 has no "native zones": all zones use the brands framework. The "solaris" brand does no emulation and in that respect is very similar to native zones on Solaris 10. Solaris 11 also provides Solaris 10 Zones via the solaris10 brand, which allows zones (or the global zone) from a Solaris 10 system to be transferred to a Solaris 11 system and run as solaris10 zones. When running on Solaris 11, solaris10 zones can each be patched independently of each other and of the Solaris 11 global zone. Technically, Solaris 11 doesn't have patches; it just has newer versions of packages to which the system is updated.
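    For illustration (the zone name and patch ID are made up): a solaris10 branded zone on a Solaris 11 host is still patched with the Solaris 10 patch tools inside the zone, while the Solaris 11 host itself is updated through IPS packages:

    # zlogin mys10zone patchadd /var/tmp/142909-17
    # pkg update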

  • Problem with exporting devices to non-global zone

    Hi,
    I have a problem with exporting devices to my Solaris zones (I am trying to add support for mounting /dev/lofi/* in my non-global zone).
    I created a config for my zone.
    Here it is:
    $ zonecfg -z sapdev info
    zonename: sapdev
    zonepath: /export/home/zones/sapdev
    brand: native
    autoboot: true
    bootargs:
    pool:
    limitpriv: default,sys_time
    scheduling-class:
    ip-type: shared
    fs:
    dir: /sap
    special: /dev/dsk/c1t44d0s0
    raw: /dev/rdsk/c1t44d0s0
    type: ufs
    options: []
    net:
    address: 194.29.128.45
    physical: ce0
    device
    match: /dev/lofi/1
    device
    match: /dev/rlofi/1
    device
    match: /dev/lofi/2
    device
    match: /dev/rlofi/2
    attr:
    name: comment
    type: string
    value: "This is SAP developement zone"
    global# lofiadm
    Block Device File
    /dev/lofi/1 /root/SAP_DB2_9_LUW.iso
    /dev/lofi/2 /usr/tmp/fsfile
    I rebooted the non-global zone, and even rebooted the global zone, and after that there are still no /dev/*lofi/* files in the sapdev zone.
    What am I doing wrong? Maybe I stripped my Solaris 10 U4 SPARC installation down too much.
    Can anybody help me?
    Thanks for help,
    Marek

    I experienced the same problem on my system, Solaris 10 8/07.
    Normally, when the zone enters the READY state during boot, its zoneadmd will run devfsadm -z <zone>. My understanding is that this creates the necessary device files in ZONEPATH/dev.
    This worked well until recently; now only the directories are created.
    It seems as if devfsadm -z is broken. Somebody should open a call with Sun.
    As a workaround you can simply copy the device files into the zone. It is important not to copy the symbolic link but its target:
    # cp /dev/lofi/1 ZONEPATH/dev/lofi
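    If plain cp ends up copying the device's data rather than the device node itself, recreating the node with mknod should also work; the major/minor pair below is only an example, read the real one with ls -lL in the global zone:

    # ls -lL /dev/lofi/1          (note the major,minor pair, e.g. 147, 1)
    # mknod ZONEPATH/dev/lofi/1 b 147 1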
    Hope this helps,
    Konstantin Gremliza

  • Oracle 10g in non-global zones with asynchronous I/O

    Hi,
    I note that using direct I/O (by setting forcedirectio when mounting the database file systems) and bypassing the file system cache may improve database performance significantly, but this should be done only for the file systems that hold database files and redo log files. If direct I/O is used and the database buffer cache is too small, it may even decrease performance by moving the problem from double buffering to a lack of database buffer cache. So this performance tuning must be planned carefully, and the database buffer cache should be sized properly. The direct I/O option should not be used for file systems used by other applications, because they still need the UFS buffer cache.
    Now, I have an Oracle database installed inside a non-global zone, and I see a lot of asynchronous I/O wait warnings in the Oracle alert log file. Storage mount points with UFS filesystems contain the Oracle datafiles and redo log files. In addition, two Oracle datafiles of 10 GB each reside on the local disks. The Oracle init.ora parameter to enable asynchronous I/O for Oracle database files is FILESYSTEMIO_OPTIONS=SETALL.
    Although this parameter was set during the database installation, the aiowait warnings don't seem to disappear.
    Can I use the forcedirectio option in the /etc/vfstab file for the Oracle datafiles and redo log files?
    Or should I just move the Oracle database files residing on the local disks to the external storage? Will this take care of the aiowait warnings, and if yes, how? The storage is a DAS.
    Regards
    Sandeep
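    For reference, a forcedirectio mount in /etc/vfstab would look like the line below (device and mount point are examples only); whether it helps depends on the buffer cache sizing discussed above:

    /dev/dsk/c1t1d0s0  /dev/rdsk/c1t1d0s0  /oradata  ufs  2  yes  forcedirectio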


  • Netbackup with Solaris non-global zone!

    Hi,
    How do I install and configure NetBackup in a Solaris 10 non-global zone? What steps do I need to follow?
    Thanks
    Tanvir

    I agree with running it from the global zone. The added benefit is that if you back up the root of all zonepaths, then when you add any new non-global zone within that path, the new server will be backed up automatically.
    We had been installing the client on each server, both global and non-global, in the past. On our non-global zones /usr is not writable but /opt is, so we would symlink /usr/openv to /opt/openv from the global zone and then remotely install the client software from the backup master via
    "/usr/openv/netbackup/bin/install_client_files ssh <client>"

  • How to retrieve the number of on-line procs in a non-global zone with a resource pool

    Is there any way to retrieve the number of on-line processors of the machine from within a non-global zone that is bound to a resource pool?
    sysconf does not return this value. In fact, this is an excerpt from the man page:
    "If the caller is in a non-global zone and the pools facility is active, sysconf(_SC_NPROCESSORS_CONF) and sysconf(_SC_NPROCESSORS_ONLN) return the number of processors in the processor set of the pool to which the zone is bound."

    So, from within a local zone that is bound to a pool (say, a pool with 8 CPUs), you want to query how many CPUs really exist on the machine (the global zone may actually have 16)? I don't think that's possible; in fact, it is probably intentionally disabled for security reasons.
    A quick workaround would be a script or cron job in the global zone that writes a small file into the filesystem of the local zone; from within that zone you could then read the CPU count. See the sketch below.
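    A minimal sketch of that workaround (the zone path and file name are made up):

    # entry in the global zone's crontab, refreshed hourly:
    0 * * * * /usr/sbin/psrinfo | /usr/bin/grep -c on-line > /zones/myzone/root/var/tmp/ncpu.global

    Inside the zone, reading /var/tmp/ncpu.global then gives the machine-wide on-line CPU count.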
    I'm interested though: what are you trying to set up?
    Regards,
    [email protected]

  • Using a Fibre Channel HBA with a non-global zone.

    I am trying to let a non-global zone use a dual-port HBA. Please note that the goal is to use the HBA, including the SAN devices, not just a single device on the SAN. Does anyone know if and how this can be done?
    [root@global:/]# more /etc/release
                           Solaris 10 8/07 s10s_u4wos_12b SPARC ...
    [root@global:/]# zonecfg -z localzone info
    zonename: localzone
    zonepath: /zones/localzone
    brand: native
    autoboot: true
    bootargs:
    pool:
    limitpriv:
    scheduling-class:
    ip-type: shared
    net:
            address: x.x.x.x/24
            physical: qfe0
    device
            match: /dev/fc/fp[0-1]
    device
            match: /dev/cfg/c[1-2]
    device
            match: /dev/*dsk/c[1-2]*
    [root@global:/]# fcinfo hba-port
    HBA Port WWN: 210000e08b083b41
            OS Device Name: /dev/cfg/c1
            Manufacturer: QLogic Corp.
            Model: QLA2342
            Firmware Version: 3.3.24
            FCode/BIOS Version: No Fcode found
            Type: N-port
            State: online
            Supported Speeds: 1Gb 2Gb
            Current Speed: 2Gb
            Node WWN: 200000e08b083b41
    HBA Port WWN: 210100e08b283b41
            OS Device Name: /dev/cfg/c2
            Manufacturer: QLogic Corp.
            Model: QLA2342
            Firmware Version: 3.3.24
            FCode/BIOS Version: No Fcode found
            Type: N-port
            State: online
            Supported Speeds: 1Gb 2Gb
            Current Speed: 2Gb
            Node WWN: 200100e08b283b41
    [root@localzone:dev]# ls fc
    fp0  fp1
    [root@localzone:dev]# ls cfg
    c1  c2
    [root@localzone:dev]# ls dsk | grep s0
    c1t500601613021934Dd0s0
    c1t500601693021934Dd0s0
    c1t50060482D52D5608d0s0
    c1t50060482D52D5626d0s0
    c2t500601613021934Dd0s0
    c2t500601693021934Dd0s0
    c2t50060482D52D5608d0s0
    c2t50060482D52D5626d0s0
    [root@localzone:dev]# ls rdsk | grep s0
    c1t500601613021934Dd0s0
    c1t500601693021934Dd0s0
    c1t50060482D52D5608d0s0
    c1t50060482D52D5626d0s0
    c2t500601613021934Dd0s0
    c2t500601693021934Dd0s0
    c2t50060482D52D5608d0s0
    c2t50060482D52D5626d0s0
    [root@localzone:dev]# fcinfo hba-port
    No Adapters Found.

    You cannot present devices directly to the NGZ (what a mouthful to say/type... sheesh! What's wrong with "local zones", Sun?).
    You can present filesystems and/or ZFS pools, but not HBAs or other devices directly (AFAIK).
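    For completeness, a hedged sketch of the dataset route: build a ZFS pool on the SAN LUNs in the global zone and delegate it to the zone (the pool name is an example; the disk is taken from the listing above):

    # zpool create sanpool c1t500601613021934Dd0
    # zonecfg -z localzone
    zonecfg:localzone> add dataset
    zonecfg:localzone:dataset> set name=sanpool
    zonecfg:localzone:dataset> end
    zonecfg:localzone> commit

    The zone can then create and manage its own filesystems inside sanpool, but it still sees no HBA.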

  • Can I upgrade patches in non-global zones separately from the global zone?

    Normally, one would assume that you want to keep global and non-global zones in sync. However, at the software company I work for, we could potentially want to test on different patch levels of Solaris 10 simultaneously. I can't bring down the global zone and change its patch set every time I need this, and my only other option would be separate hardware and a separate global zone for each patch set, which kind of defeats the purpose, IMHO.
    Does anybody out there know if this is possible?

    Whole-root zones allow you to have different levels of an application installed in different zones, but they don't really provide a good mechanism for testing different patch levels of Solaris itself: since there is really only one copy of Solaris running, it is just providing different views of itself.
    If you want to actually test Solaris patch levels, you need "real" virtualization rather than the paravirtualization provided by zones: either something like LDoms on SPARC hardware, or VMware or an equivalent on x86.

  • Can I import one non-global zone from one machine to another?

    If I create a non-global zone on one disk on machine A, is it possible to make a copy of that disk and import the non-global zone on machine B? If yes, how do I import it?
    Thanks!

    It should be possible if your machines are installed the same way, because you need the same environment (patches, packages, ...).
    If that is true, you should export your zone definition on machine A (zonecfg export) and import it on machine B (zonecfg -f ...).
    Then create the new zone on B. Once that is done, take your zonepath with all its data on A and copy it to B. That should be all; a sketch follows below.
    With this solution it should also be possible to have a shadow instance on B and the active instance on A: if your whole zonepath is on external disks (EMC, for example), you only have to mount the disks on B and start your zone.
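    A rough sketch of one way to do this, using detach/attach (zone name and paths are examples; attach -u, where supported, updates the zone's patches to match the new host):

    machineA# zonecfg -z myzone export > /tmp/myzone.cfg
    machineA# zoneadm -z myzone halt
    machineA# zoneadm -z myzone detach
    (copy the zonepath and /tmp/myzone.cfg to machine B)
    machineB# zonecfg -z myzone -f /tmp/myzone.cfg
    machineB# zoneadm -z myzone attach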
    harruh
