OSB functionality with ASM-managed flash recovery area

Is there a way to back up an ASM file system using OSB datasets?
It's like this: our RMAN backups are located in the path +FRA/BACKUPSETS/PROD.
I would like to back up this path using an OSB file system backup, but I am getting an error:
"the following problems were detected in dataset /Level_0_FS_Bkp:14: include path +FRA/BACKUPSETS/PROD invalid include pathname"

Have you created any database backup storage selectors? A storage selector sets up a policy defining which media and drives to use for the backup. You can create one database backup storage selector for all databases, or have multiple selectors per database. You can create one policy for which media full backups go to, another for incremental backups, and so on; OSB will then automatically match the policy to the backup.
Regarding the duplication scenario, I would do the following:
1) Create a media family just for the duplicate tapes, e.g. Offsite
2) Create a duplication policy that defines when tapes should be duplicated, to which media family, and so on, e.g. dup1
3) Create a schedule for the volume duplication scan
4) Create a rotation policy for how you want the tapes moved between sites (you need to have created the location(s) where tapes will reside)
5) Create a rotation scan schedule
6) Associate the duplication policy with your original media families, e.g. level0_family (note that one duplication policy can be associated with multiple media families)
7) Associate the rotation policy with the media family you created for your offsite tapes, e.g. Offsite
Now you'll have policy-based management for the tapes, so it's automated. This is described in more detail in:
http://www.oracle.com/technetwork/database/secure-backup/learnmore/osb-103-twp-166804.pdf
Donna
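A note on the original error: a path such as +FRA/BACKUPSETS/PROD lives inside an ASM diskgroup, which an OSB file-system backup cannot read as a dataset include path. The usual route is to let RMAN push the recovery-area contents to OSB through an SBT channel. A minimal sketch, where level0_family is just a placeholder media family name and the OSB SBT library is assumed to be installed for this ORACLE_HOME:

    RUN {
      # OSB is addressed through the SBT interface; the media family is chosen
      # either by a database backup storage selector or by OB_MEDIA_FAMILY below.
      ALLOCATE CHANNEL t1 DEVICE TYPE sbt
        PARMS 'ENV=(OB_MEDIA_FAMILY=level0_family)';
      # Copy everything currently in the flash recovery area
      # (backup sets, archived logs, controlfile autobackups) to tape.
      BACKUP RECOVERY AREA;
      RELEASE CHANNEL t1;
    }

If only the existing backup sets are wanted on tape, BACKUP BACKUPSET ALL on the same SBT channel does that instead.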

Similar Messages

  • CCMS functionality with Solution Manager 4.0

    Hello,
    I am looking for documentation on setting up and using the CCMS functionality with Solution Manager 4.0.
    Are there any documents describing this feature in detail?
    Thank you!
    regards
    Axel

    Hi,
    Check https://websmp108.sap-ag.de/sapidb/011000358700005479192006E.sim and https://websmp108.sap-ag.de/sapidb/011000358700006936062005E.sim.
    This will solve your problem.
    Feel free to revert back.
    --Ragu
    Message was edited by:
            Raguraman C

  • Does managing Oracle 10g RAC with ASM require full root access?

    We currently have three entirely separate support areas: Unix, Storage and DBA. We're now considering using Oracle 10g RAC with ASM, and as part of the assessment we are trying to work out if we can still draw similar support boundaries. I know that installing RAC and configuring ASM requires root access, but will the DBAs continue to need root access to manage and support RAC? If so, does anyone know if the commands they need can be RBAC'd, or if we just need to share root access going forward? I've had a look at a number of docs, including http://download.oracle.com/docs/cd/B19306_01/rac.102/b14197/toc.htm, which is fairly informative, but none of them seem to mention the requirement for root access on Solaris. I'm guessing they just assume that it's available, but that's not generally the case in our environment.
    All advice / info welcome!

    I would have thought that the only reason you would need root access once RAC and ASM had been set up would be to add more disks to the ASM configuration. This would be needed to change the ownership on the raw LUNs or to make additions to metasets (SVM/Oban) or diskgroups (VxVM/CVM). Beyond that, I can't imagine needing root access.
    I'm sure others will chime in if they can think of other reasons!
    Regards,
    Tim
    Edited by: Tim.Read on Jun 3, 2008 2:25 AM

  • dNFS with ASM vs. dNFS with file system - advantages and disadvantages.

    Hello Experts,
    We are creating a 2-node RAC. There will be 3-4 DBs whose instances will be across these nodes.
    For storage we have 2 options - dNFS with ASM and dNFS without ASM.
    The advantages of ASM are well known --
    1. Easier administration for the DBA, since through this layer we know the storage very well.
    2. Automatic rebalancing and dynamic reconfiguration.
    3. Striping and mirroring (though we are not using this option in our environment; external redundancy is provided at the storage level).
    4. Less (or no) dependency on the storage admin for DB-file-related tasks.
    5. Oracle also recommends using ASM rather than file system storage.
    Advantages of dNFS (Direct NFS) --
    1. Oracle bypasses the OS layer and connects directly to storage.
    2. Better performance, as user data does not need to pass through the OS kernel.
    3. It load-balances across multiple network interfaces in a similar fashion to how ASM operates in SAN environments.
    Now if we combine these two options, how will that configuration look in terms of administration, manageability, performance, and downtime in case of a future migration?
    I have collected some points.
    In favor of 'NOT' having ASM --
    1. ASM is an extra layer on top of storage, so if using dNFS this layer should be removed, as there are no performance benefits.
    2. Store the data in a file system rather than in ASM.
    3. Striping will be provided at the storage level (not entirely sure about this).
    4. External redundancy is being used at the storage level, so it's better to remove ASM.
    Points for 'HAVING' ASM with dNFS --
    1. If we remove ASM then the DBA has little or no control over storage; he can't even see how much free space is left at the physical level.
    2. The striping option is there to gain performance benefits.
    3. Multiplexing has benefits over mirroring when it comes to recovery.
    (e.g., suppose one database is created with only one controlfile because external mirroring is in place at the storage level, and another database is created with two copies multiplexed at the Oracle level; if an rm command removes that file, there will definitely be a difference in how long it takes to restore it.)
    4. We are now familiar and comfortable with ASM.
    I have checked MOS as well but could not come to any conclusion. Oracle says --
    "Please also note that ASM is not required for using Direct NFS and NAS. ASM can be used if customers feel that ASM functionality is a value-add in their environment." ------ How to configure ASM on top of dNFS disks in 11gR2 (Doc ID 1570073.1)
    Kindly advise which one I should go with. I would love to go with ASM, but if this turns out to be the wrong design later, I want to make sure it is corrected at the outset.
    Regards,
    Hemant

    I agree, having ASM on NFS is going to give little benefit whilst adding complexity. NAS will carry out mirroring and striping in hardware, whereas ASM does it in software.
    I would recommend dNFS only if NFS performance isn't acceptable, as dNFS introduces an additional layer with potential bugs! When I first used dNFS in 11gR1, I came across lots of bugs and worked with Oracle Support to have them all resolved. I recommend having a read of this MetaLink note:
    Required Diagnostic for Direct NFS Issues and Recommended Patches for 11.1.0.7 Version (Doc ID 840059.1)
    Most of the fixes have been rolled into 11gR2 and I'm not sure what the state of play is on 12c.
    Hope this helps
    ZedDBA
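    Whichever way this goes, it is worth verifying that Direct NFS is actually in use once the instance is up, since the database silently falls back to kernel NFS if the dNFS client cannot engage. A minimal sketch using the standard v$dnfs_* views (no local objects assumed):

        -- Lists the NFS servers the instance is talking to through dNFS;
        -- no rows usually means the instance fell back to kernel NFS.
        SELECT svrname, dirname FROM v$dnfs_servers;

        -- Per-file view: which database files are open through dNFS.
        SELECT filename, filesize FROM v$dnfs_files;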

  • Please verify my RAC install with ASM procedure with a question.

    Good morning.
    My installation procedure on RAC with ASM (ASM will be on separate home from DB):
    1. Install clusterware 10.2.0.1
    2. Apply database patchset 6810189 to upgrade clusterware to 10.2.0.4
    3. Apply CRS patch bundle #4 8436582 to clusterware
    4. Install oracle DB home 10.2.0.1
    5. Apply database patchset 6810189 to upgrade to 10.2.0.4
    6. Apply DB PSU (patch# 8576156) 10.2.0.4.1 (includes CPUJUL2009) to the oracle home
    (steps 7-9 are similar to steps 4-6, except for the ASM home. Or should I install the ASM home before installing the DB home?)
    7. Install oracle DB home 10.2.0.1 for ASM and configure ASM
    8. Apply database patchset 6810189 to upgrade to 10.2.0.4 for ASM
    9. Apply DB PSU (patch# 8576156) 10.2.0.4.1 (includes CPUJUL2009) to the oracle home for ASM
    10. Run DBCA to create database.
    Can I apply the DB PSU (patch# 8576156) 10.2.0.4.1 (includes CPUJUL2009) to the DB while the clusterware is at 10.2.0.4? As I understand it, the DB version should not be higher than the clusterware version. Or can I upgrade the clusterware to 10.2.0.4.1?
    Thanks for advice,

    Hi,
    Applying the PSU (10.2.0.4.1) does not mean that you are moving to a higher version of Oracle. Oracle releases are numbered like 10.2.0.4, 10.2.0.5, and so on.
    10.2.0.4.1 is the PSU version for the Jul 2009 PSU.
    10.2.0.4.2 is the PSU version for the Oct 2009 PSU.
    Like CPUs, PSUs are also quarterly. Oracle does not necessarily release a PSU/CPU for every Oracle home/CRS home every quarter.
    For now, the PSU (Patch no: 8576156) is only applicable as shown below.
    PSU 10.2.0.4.1 includes the following previously released bundle patches:
    - Generic Recommended Bundle #4 (Patch 8362683)
    - RAC Recommended Bundle #3 (Patch 8344348)
    - Data Guard Broker Recommended Bundle #1 (Patch 7936793)
    - Data Guard Physical/Recovery Recommended Bundle #1 (Patch 7936993)
    - Data Guard Logical Recommended Bundle #1 (Patch 7937113)
    Refer to the installation types of this patch (download README of this patch bundle).
    Table 1: Installation Types and Security Content
    Server homes - PSU 10.2.0.4.1
    Client-Only Installations - None
    Instant Client Installations - None
    (The Instant Client installation is not the same as the client-only installation.
    For additional information about Instant Client installations, see Oracle Database Concepts.)
    ASM (Automatic Storage Management) homes - PSU 10.2.0.4.1
    CRS (Cluster Ready Services) homes - None
    So, there is no PSU patch for this quarter for the CRS home. You may check the previous CPUs/PSUs to keep your home secured and updated with all patches.
    Regards,
    Vasu
    Edited by: vasu77 on Sep 22, 2009 3:01 PM
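    As a side note, once a PSU has been applied (and catbundle.sql run), you can confirm what the database itself has registered with a query like the following; a minimal sketch against the standard dictionary view:

        -- DBA_REGISTRY_HISTORY records CPU/PSU bundle activity, e.g. an entry
        -- for PSU 10.2.0.4.1 after "catbundle.sql psu apply" has been run.
        SELECT action_time, action, version, id, comments
          FROM dba_registry_history
         ORDER BY action_time;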

  • Oracle 10g RAC design with ASM and OCFS

    Hi all,
    I have a question about a proposed Oracle 10g Release 2 RAC design for a 2 node cluster.
    ASM can store database files, but not the Oracle binaries nor the OCR and voting disk; in addition, OCFS version 1 does not support a shared Oracle home. We plan to use OCFS version 2 with ASM version 2 on Red Hat Enterprise Linux Server 4 with Oracle 10g Release 2 (10.2.0.1).
    With OCFS v2, a shared Oracle home and a shared OCR and voting disk are supported. My question is: does the following proposed architecture make sense for OCFS v2 with ASM v2 on Red Hat Linux 4?
    Oracle 10g Release 2 on Red Hat Enterprise Linux Server 4:
    OCFS V2:
    - shared Oracle home and binaries
    - shared OCR and vdisk files
    - CRS software shared OCFS v2 filesystem
    - spfile
    - controlfiles
    - tnsnames.ora
    ASM v2 with ASMLib v2:
    Proposed ASM disk groups:
    - data_dg for application data
    - backupdg for flashback and archivelogs
    - undo_rac1dg ASM diskgroup for undo tablespace for racnode1
    - undo_rac2dg ASM diskgroup for undo tablespace for racnode2
    - redo_rac1dg ASM diskgroup to hold redo logs for racnode1
    - redo_rac2dg ASM diskgroup to hold redo logs for racnode2
    - temp1dg temp tablespace for racnode1
    - temp2dg temp tablespace for racnode2
    Does this sound like a good initial design?
    Ben Prusinski, Senior DBA

    OK Tim, thanks for the advice.
    I think NetBackup can be integrated with RMAN, but I don't want to lose time on this (political).
    To summarize:
    ORACLE_HOME and CRS_HOME on each node (RAID1 and NTFS)
    Shared storage:
    Disk 1 and disk 2 (RAID 1):
    - Raw partition 1 for OCR
    - Raw partition 2 for voting disk
    - OCFS for FLASH_RECOVERY_AREA
    Disk 3, disk 4 and disk 5 (RAID 0): raw devices, ASM normal redundancy, one diskgroup for database files.
    This is a running project here, will start testing the design on VMware and then go for production setup.
    Regards
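    For the ASM side of a layout like the one above, the diskgroups are created from the ASM instance with SQL. A minimal sketch matching the three-disk, normal-redundancy idea; the ASMLib disk labels are hypothetical:

        -- Run while connected to the ASM instance as SYSDBA.
        -- Normal redundancy spread over three failure groups (disk 3/4/5).
        CREATE DISKGROUP data_dg NORMAL REDUNDANCY
          FAILGROUP fg1 DISK 'ORCL:DISK3'
          FAILGROUP fg2 DISK 'ORCL:DISK4'
          FAILGROUP fg3 DISK 'ORCL:DISK5';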

  • How to Create a Manual Standby for an Oracle 11g RAC with ASM to a Single Instance

    Hi All,
    I have a task to configure a single-instance standby database with ASM for a 2-node primary RAC+ASM database.
    Version in use: Oracle 11g, 11.1.0.6, STANDARD EDITION. Please note that Data Guard is not supported in Standard Edition.
    Primary database - 2-node RAC using ASM storage (all datafiles, redo logs, controlfile and archive logs).
    I need to set up a single-instance standby database manually and then use scripts to transfer the archive logs from the primary server to the standby server to perform recovery from time to time.
    Please let me know if there are any configuration documents which can help me set up the manual standby, or kindly share your valuable ideas on how to proceed in the above situation.
    Thanks in advance

    Niall Litchfield wrote:
    El DBA wrote:
    Well if the archive logs are stored in ASM on the primary nodes, you almost definitely want to be using RMAN to access them. Then to transfer between primary and standby there are many options, this is essentially the step that Data Guard takes care of - so if you (and I) are running Standard Edition, this is the part to "worry" about.
    I'm not really sure what you mean by this:
    "As in standard edition the archive destination will be on ASM"
    I don't think the Edition of Oracle (Standard or Enterprise) dictates where and how you store your archive logs, but it's possible I've misunderstood what you mean...
    SE dictates that database storage for RAC will be ASM, including archive log files. Anything else is not officially supported. I haven't tested, but it may be that you can use LOG_ARCHIVE_DUPLEX_DEST to specify a local filesystem for archive log files as well.
    If that doesn't work you'll have to script rman "backup as copy" jobs regularly and then transport the results of that.
    Niall Litchfield
    http://www.orawin.info/
    By the way, it seems Robert has been pretty helpful, it's polite to give him some points dude. And since this is your thread, not mine, give him a "helpful" from me too :p
    El DBA
    Incidentally if you are forced down the RMAN backup as copy route then this will likely throw your current backup retention strategy somewhat, you'll need to think carefully about when an archivelog can be deleted following backup, how many times it might be backed up and so on. I do understand, and have argued for, the use of SE RAC. I also understand and have argued for and implemented manually managed standby a number of times. However by the time your primary database is a RAC instance and you need a standby for DR etc then you really have to look very seriously at whether Standard Edition is still the right investment or not. You will be spending a lot of DBA time managing and troubleshooting this, and you will find that it is less reliable than the off the shelf solution. I suspect that you are very close to the point where an EE installation becomes a worthwhile investment here.
    Niall Litchfield
    http://www.orawin.info/
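    To make the "backup as copy" idea concrete, a minimal sketch of the primary-side step (the staging path is hypothetical; run it from whatever script/cron job ships the logs):

        # Primary side: copy archived logs out of ASM into a filesystem staging
        # area that can be transferred to the standby host. No DELETE INPUT here,
        # so the normal retention policy on the primary is left untouched.
        BACKUP AS COPY ARCHIVELOG ALL FORMAT '/u01/stage/arch_%U';

    The copies would then be shipped with scp/rsync and applied on the mounted standby with SQL*Plus (RECOVER STANDBY DATABASE), supplying the shipped file names when prompted.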

  • Big Problem With Solution Manager EHP1 ?

    Dear all,
    I have installed SAP Solution Manager EHP1, and the installation completed successfully. I then checked the function Project Administration (T-code: Solar_Project_Admin) and selected the New Project button.
    The program reports:
    "Runtime Errors: Message Type Unknown
    What happened:
    Error in ABAP Application program
    The current ABAP program "SAPLSPROJECT_SYSTEM_LANDSCAPE" had to be terminated because
    it has come across a statement that unfortunately cannot be executed.
    etc."
    Do you know about this problem? What is the error?
    Thanks all.

    Dear all.
    With Solution Manager EHP1
    SAP Basis: SAPKB7001-SAPKB70016, SAPKB70101-SAPKB70102
    SAP ABA: SAPKA7001-SAPK70016, SAPKA70101-SAPKA70102
    Detail log as below:
    Runtime Errors         MESSAGE_TYPE_UNKNOWN
    Date and Time          28.04.2009 16:47:15
    Short text
    Message type " " is unknown.
    What happened?
    Error in the ABAP Application Program
    The current ABAP program "SAPLSPROJECT_SYSTEM_LANDSCAPE" had to be terminated
    because it has
    come across a statement that unfortunately cannot be executed.
    What can you do?
    Note down which actions and inputs caused the error.
    To process the problem further, contact your SAP system
    administrator.
    Using Transaction ST22 for ABAP Dump Analysis, you can look
    at and manage termination messages, and you can also
    keep them for a long time.
    Error analysis
    Only message types A, E, I, W, S, and X are allowed.
    How to correct the error
    Probably the only way to eliminate the error is to correct the program.
    If the error occurs in a non-modified SAP program, you may be able to
    find an interim solution in an SAP Note.
    If you have access to SAP Notes, carry out a search with the following
    keywords:
    "MESSAGE_TYPE_UNKNOWN" " "
    "SAPLSPROJECT_SYSTEM_LANDSCAPE" or "LSPROJECT_SYSTEM_LANDSCAPEU28"
    "SPROJECT_SYSL_CHECK_CENTRAL_SY"
    If you cannot solve the problem yourself and want to send an error
    notification to SAP, include the following information:
    1. The description of the current problem (short dump)
    To save the description, choose "System->List->Save->Local File
    (Unconverted)".
    2. Corresponding system log
    Display the system log by calling transaction SM21.
    Restrict the time interval to 10 minutes before and five minutes
    after the short dump. Then choose "System->List->Save->Local File
    (Unconverted)".
    3. If the problem occurs in a problem of your own or a modified SAP
    program: The source code of the program
    In the editor, choose "Utilities->More
    Utilities->Upload/Download->Download".
    4. Details about the conditions under which the error occurred or which
    actions and input led to the error.
    Information on where terminated
    Termination occurred in the ABAP program "SAPLSPROJECT_SYSTEM_LANDSCAPE" - in
    "SPROJECT_SYSL_CHECK_CENTRAL_SY".
    The main program was "SAPLS_IMG_TOOL_5 ".
    In the source code you have the termination point in line 29
    of the (Include) program "LSPROJECT_SYSTEM_LANDSCAPEU28".

  • Transport Management System with Solution Manager

    Dear all,
    Do you know about the Transport Management System (TMS) function with Solution Manager? I have already searched the SAP documentation but did not find this function.
    Do you know how to configure Solution Manager for TMS? Please also point me to documents related to this function.
    Thanks very much!

    Hi,
    Check this link.
    https://websmp103.sap-ag.de/~sapdownload/011000358700000508502008E/ChangeRequestManagement.pdf
    Hope this answers your questions.
    Feel free to revert back.
    -=-Ragu

  • System Monitoring with Solution Manager Ehp1

    Hi,
    I'm Tomas Piqueres, and I'm working at an SAP VAR with Solution Manager.
    Recently we installed Solution Manager EHP1 and we are trying to configure it for system monitoring. When I worked with Solution Manager SP17, I used to go to transaction RZ21 to add the system I wanted to monitor and then enter the SID and the RFCs of the system.
    Now with Solution Manager EHP1, when I create the system in transaction RZ21, I first have to set the Component Type to Be Monitored and then the SID; the Message Server Logon Group, the client and the user are set automatically, and the password I have set to that of user CSMREG.
    When I fill in all the entries, I can see the RFCs used for monitoring the system. Those RFCs are set automatically:
    <SID>_RZ20_COLLECT
    <SID>_RZ20_ANALYZE
    I can't edit those RFCs, so I have to create them manually. I checked that the RFC destinations work fine and both pass the authorization test, yet when I try to save the system in transaction RZ21 I see the following errors:
    <SID>_RZ20_COLLECT_123539Error when opening an RFC connection
    Error during remote call of SAL_MS_GET_LOCAL_MS_INFO function: Error when opening an RFC connection
    Error during remote call of SALC function: Error when opening an RFC connection
    Error during remote call of RFC1 function: Error when opening an RFC connection
    I've been looking for information about those errors and about how to do monitoring with Solution Manager EHP1, but I haven't found anything useful.
    Please, could you help me?
    Thanks and regards,
    Tomas.

    Tomas,
    I need to configure EWA from my SolMan system and I have completed the steps (defining and creating RFC destinations to the target systems from my SolMan system). I downloaded the latest ccmsagent file from the Service Marketplace based on my target system configuration.
    Attached are the logs from when I tried to check the profile parameters.
    tqaadm@saptqa01:/usr/sap/TQA/SYS/exe/run 5> sappfpar check pf=/usr/sap/TQA/SYS/profile/TQA_DVEBMGS30_saptqa01
    ================================================================================
    ==   Checking profile:     /usr/sap/TQA/SYS/profile/TQA_DVEBMGS30_saptqa01
    ================================================================================
    ***WARNING: Unexpected parameter: DIR_EPS =/usr/sap/trans/EPS/----
    ***WARNING: Unexpected parameter: SAPSECULIB =/usr/sap/TQA/SYS/exe/run/libsapsecu.o
    ***WARNING: Unexpected parameter: abap/buffersize_part1 =1200000
    ***WARNING: Unexpected parameter: auth/auth_number_in_userbuffer =5000
    ***WARNING: Unexpected parameter: dbs/io_buf_size =100000
    ***WARNING: Unexpected parameter: rsau/local/file =/usr/sap/TQA/DVEBMGS30/log/audit/audit_++++++++
    ***WARNING: Unexpected parameter: rsau/selector1/class =35
    ***WARNING: Unexpected parameter: rsau/selector1/severity =2
    ***WARNING: Unexpected parameter: rsdb/rclu/cachelimt =0
    ***ERROR: Size of shared memory pool 40 too small
    ================================================================
    SOLUTIONS: (1) Locate shared memory segments outside of pool 40
                   with parameters like: ipc/shm_psize_<key> =0
    SOLUTION: Increase size of shared memory pool 40
              with parameter: ipc/shm_psize_40 =1472000000
    Shared memory disposition overview
    ================================================================
    Shared memory pools
    Key:   10  Pool
                Size configured.....:   642000000 ( 612.3 MB)
                Size min. estimated.:   637597428 ( 608.1 MB)
                Advised Size........:   640000000 ( 610.4 MB)
    Key:   40  Pool for database buffers
                Size configured.....:  1048000000 ( 999.4 MB)
                Size min. estimated.:  1468229308 (1400.2 MB)
                Advised Size........:  1472000000 (1403.8 MB)
    Shared memories inside of pool 10
    Key:        1  Size:        2500 (   0.0 MB) System administration
    Key:        4  Size:      523648 (   0.5 MB) statistic area
    Key:        7  Size:       14838 (   0.0 MB) Update task administration
    Key:        8  Size:    67108964 (  64.0 MB) Paging buffer
    Key:        9  Size:   134217828 ( 128.0 MB) Roll buffer
    Key:       11  Size:      500000 (   0.5 MB) Factory calender buffer
    Key:       12  Size:     6000000 (   5.7 MB) TemSe Char-Code convert Buf.
    Key:       13  Size:   200500000 ( 191.2 MB) Alert Area
    Key:       16  Size:       22400 (   0.0 MB) Semaphore activity monitoring
    Key:       17  Size:     2672386 (   2.5 MB) Roll administration
    Key:       30  Size:       37888 (   0.0 MB) Taskhandler runtime admin.
    Key:       31  Size:     4806000 (   4.6 MB) Dispatcher request queue
    Key:       33  Size:    39936000 (  38.1 MB) Table buffer, part.buffering
    Key:       34  Size:    20480000 (  19.5 MB) Enqueue table
    Key:       51  Size:     3200000 (   3.1 MB) Extended memory admin.
    Key:       52  Size:       40000 (   0.0 MB) Message Server buffer
    Key:       54  Size:    20488192 (  19.5 MB) Export/Import buffer
    Key:       55  Size:        8192 (   0.0 MB) Spool local printer+joblist
    Key:       57  Size:     1048576 (   1.0 MB) Profilparameter in shared mem
    Key:       58  Size:        4096 (   0.0 MB) Enqueue ID for reset
    Key:       62  Size:    85983232 (  82.0 MB) Memory pipes
    Shared memories inside of pool 40
    Key:        2  Size:    31168040 (  29.7 MB) Disp. administration tables
    Key:        3  Size:   114048000 ( 108.8 MB) Disp. communication areas
    Key:        6  Size:  1064960000 (1015.6 MB) ABAP program buffer
    Key:       14  Size:    28600000 (  27.3 MB) Presentation buffer
    Key:       19  Size:    90000000 (  85.8 MB) Table-buffer
    Key:       42  Size:    13920992 (  13.3 MB) DB TTAB buffer
    Key:       43  Size:    43422392 (  41.4 MB) DB FTAB buffer
    Key:       44  Size:     8606392 (   8.2 MB) DB IREC buffer
    Key:       45  Size:     6558392 (   6.3 MB) DB short nametab buffer
    Key:       46  Size:       20480 (   0.0 MB) DB sync table
    Key:       47  Size:    13313024 (  12.7 MB) DB CUA buffer
    Key:       48  Size:      300000 (   0.3 MB) Number range buffer
    Key:       49  Size:     3309932 (   3.2 MB) Spool admin (SpoolWP+DiaWP)
    Shared memories outside of pools
    Key:       18  Size:     1792100 (   1.7 MB) Paging adminitration
    Key:       41  Size:    25010000 (  23.9 MB) DB statistics buffer
    Key:       63  Size:      409600 (   0.4 MB) ICMAN shared memory
    Key:       64  Size:     4202496 (   4.0 MB) Online Text Repository Buf.
    Key:       65  Size:     4202496 (   4.0 MB) Export/Import Shared Memory
    Key:     1002  Size:      400000 (   0.4 MB) Performance monitoring V01.0
    Key: 58900130  Size:        4096 (   0.0 MB) SCSA area
    Nr of operating system shared memory segments: 9
    Shared memory resource requirements estimated
    ================================================================
    Nr of shared memory descriptors required for
    Extended Memory Management (unnamed mapped file).: 64
    Total Nr of shared segments required.....:         73
    System-imposed number of shared memories.:       1000
    Shared memory segment size required min..: 1472000000 (1403.8 MB)
    System-imposed maximum segment size......: 35184372088832 (33554432.0 MB)
    Swap space requirements estimated
    ================================================
    Shared memory....................: 2050.4 MB
    ..in pool 10  608.1 MB,   99% used
    ..in pool 40  999.4 MB,  140% used !!
    ..not in pool:   34.4 MB
    Processes........................:  716.8 MB
    Extended Memory .................: 8192.0 MB
    Total, minimum requirement.......: 10959.2 MB
    Process local heaps, worst case..: 1907.3 MB
    Total, worst case requirement....: 12866.5 MB
    Errors detected..................:    1
    Warnings detected................:    9
    After checking the profile parameters I tried to run sapccm4x in the /run directory, but I got the error below and am not able to move further.
    Please have a look at these two outputs and let me know what I could do to proceed further.
    tqaadm@saptqa01:/usr/sap/TQA/SYS/exe/run 5> sapccm4x -R pf=/usr/sap/TQA/SYS/profile/TQA_DVEBMGS30_saptqa01
    INFO: CCMS agent sapccm4x working directory is /usr/sap/TQA/DVEBMGS30/log/sapccm4x
    INFO: CCMS agent sapccm4x config file is /usr/sap/TQA/DVEBMGS30/log/sapccm4x/csmconf
    INFO: Central Monitoring System is [SMP]. (found in config file)
          additional CENTRAL system y/[n] ?   :
    INFO: found ini file /usr/sap/TQA/DVEBMGS30/log/sapccm4x/sapccmsr.ini.
    INFO:
          CCMS version  20040229, 64 bit, multithreaded, Non-Unicode
          compiled at   Jun 28 2010
          systemid      324 (IBM RS/6000 with AIX)
          relno         6400
          patch text    patch collection 2010/1, OSS note 1304480
          patchno       335
    INFO Runtime:
          running on    saptqa01 AIX 3 5 00069A8FD600
          running with profile   /usr/sap/TQA/SYS/profile/TQA_DVEBMGS30_saptqa01
    INFO profile parameters:
          alert/MONI_SEGM_SIZE = 200000000
          alert/TRACE          = 1
          SAPSYSTEM            = 30
          SAPSYSTEMNAME        = TQA
          SAPLOCALHOST         = saptqa01
          DIR_CCMS             = /usr/sap/ccms
          DIR_LOGGING          = /usr/sap/TQA/DVEBMGS30/log
          DIR_PERF             = /usr/sap/tmp
    INFO:
          pid           4165682
    INFO: Attached to Shared Memory Key 13 (size 200141728) in pool 10
    INFO: Connected to Monitoring Segment [CCMS Monitoring Segment for application server saptqa01_TQA_30, created with version CCMS version 20040229, 64 bit single threaded, compiled at Oct  3 2008,  kernel 6400_20020600_254,  platform 324 (IBM RS/6000 with AIX)]
            segment status     ENABLED
            segment started at Tue Sep 14 09:35:56 2010
            segment version    20040229
    ERROR: Shared Memory misconfiguration ==> can not monitor SAP application server saptqa01_TQA_30
           Dispatcher Admin Shared Memory (Key 01) and CCMS Shared Memory (Key 13) both in pool 10.
           Please change configuration with profile parameters
               ipc/shm_psize_01 = -<different pool nr>
           xor
               ipc/shm_psize_13 = -<different pool nr>
    EXITING with code 1

  • MTS with batch management, serialization and Handling unit

    Hello All,
    I am testing a scenario for MTS with batch management, serialization and Handling unit for discrete manufacturing.
    Everything worked fine till I created the Handling unit for the finished product.
    The production order has a quantity of 3 EA.
    It has three serial numbers 1, 2 and 3. (serial numbers can be displayed from order->Header->serial numbers)
    I created one Handling unit for production order quantity of 3 EA.
    I tried to do a goods receipt for the production order using transaction COWBHUWE.
    I get the following error when I try to post the GR:
    Only 0 serial numbers entered instead of 3
    Message no. IO304
    Diagnosis
    There is a serial number obligation, so the number of serial numbers must equal the number of serial numbers in the material document.
    You can post the operation only if you entered the correct number of serial numbers previously.
    System Response
    Depending on the context in which the error arises, the system continues processing, or the required function cannot be performed.
    Procedure
    You have the following options, for example:
    Check that the serial numbers are entered fully.
    If necessary, display an error log.
    If necessary, contact your system administrator.
    What did I miss?
    How to fix this problem?
    Please help.
    Thanks in advance
    George

    I added the serialization procedure HUSL to the serial number profile and it fixed the problem.

  • Packing handling units to outbound delivery with batch management

    Hello all,
    I am packing an outbound delivery with a material that has batch management active. The packing is from a storage location that manages handling units (HU-managed sloc).
    My problem is that, before entering the HUs in the delivery, the system asks me to enter the batches (in the batch split function). Only then am I allowed to enter the HUs in the packing function.
    What could be the reason for these preliminary batch entries?
    How can I set the system not to do that, i.e., so that I can enter the HUs directly without entering the batches separately?
    Thanks,
    Isaac

    Dear Isaac,
    this system behaviour cannot be changed in the standard system, and it is also not recommended to change it via a modification. The reason for this system behaviour is to avoid inconsistencies in the system. The HU that you assign to the delivery should exactly fit the delivery. If the material is batch-managed, you should specify the batch first, and then add the fitting HU with exactly that batch.
    I suppose you have checked the long text for message HUDIALOG102, which you get in the described case, but I copy it for you in case you did not read it:
    "Diagnosis
    There are items to be handled in batches which have not yet been
    assigned to a batch.
    Batches must be recognized for items with HU-managed storage location
    For items at a storage location that is not HU-managed, the item type
    determines if packing will take place at cumulated item quantity level
    (meaning that the batch is not recognized at any point in the handling
    unit) or if packing must take place at batch level.
    System Response
    If an item requires batch identification, that item will not be
    suggested for packing as long as the batch is not recognized.
    Procedure
    Go to the maintenance of the delivery item and assign at least one batch
    to the item."
    Regards,
    Ely

  • Accessing attachments stored in Open text (ECC) with Work manager 6.0

    Hello Experts,
    We are using Open text in one of our ECC systems to store documents, files, pdfs and other attachments for the Work Orders and Notifications.
    We want to configure the Work Manager 6.0 application hosted on SMP2.3, in order to fetch all the Work orders in the mobile device.
    Now the question is: can we access the attachments on the mobile device from the out-of-the-box Work Manager configuration?
    Or is there a need to install any plug-ins on the ECC/Agentry side to enable this?
    Thanks,
    Abhishek

    Hi Abhishek,
    This isn't out of the box functionality for Work Manager so you'll need to enhance the existing functionality.
    To access OpenText Attachments (via ArchiveLink) in WorkManager try looking at the following->
    1. Get Attachments
    BAPI Wrapper - /SMERP/CORE_DOBDSDOCUMENT_GET
    MDO Class - /SMERP/CL_CORE_KWDOCUMENT_DO (Method - GET)
    You'll see the above GET method calls (if your GOS_ACTIVE flag is true) - GOS_DOCUMENT_GET_INFO
    This is responsible for getting the attachment metadata returned to the Work Manager client ->
    ET_COMPONENTS
    ET_SIGNATURE
    ET_CONNECTIONS
    To get the attachment metadata stored in OpenText via ArchiveLink look at the below Function Modules -
    ARCHIV_GET_CONNECTIONS
    ARCHIVOBJECT_STATUS
    You could enhance the GOS_DOCUMENT_GET_INFO method with a Post-Exit method, or implement the BADI /SMERP/MDO_CORE_DOCUMENTS - END_BDS_FETCH
    2. Create Attachments
    BAPI Wrapper -/SMERP/CORE_DOBDSDOCUMENT_CRT
    MDO Class - /SMERP/CL_CORE_KWDOCUMENT_DO (Method - CREATE)
    This calls the method BDS_DOCUMENT_CREATE to create the file and store it against the Work Order / Notification.
    You could look at following Function Modules ->
    ARCHIVOBJECT_CREATE_TABLE (Create the file on the OpenText server)
    ARCHIV_CONNECTION_INSERT (Link the newly created file with the Work Order / Notification)
    The BADI doesn't provide a corresponding Create method, so you could enhance the MDO class /SMERP/CL_CORE_KWDOCUMENT_DO with a new method that calls the ArchiveLink logic instead of the BDS_DOCUMENT_CREATE method. The new method can be configured via the Syclo Config Panel to be called instead of the standard CREATE method.
    Also check out the document - Document Upload & Download handling in SAP Work Manager 6.0
    Sections 7 & 8 cover the BAPI Wrappers/Class Handlers & Enhancement Spots for document handling.
    Hope that helps.
    Cheers,
    Stephen

  • Confirmation quantity in sales order with credit management

    Hi SD Experts, very good morning.
    Generally, in OVB8 requirement 101 and routine 1 control this: once a customer crosses his credit limit, the confirmed quantity automatically becomes zero, and once we release that document in VKM3 the confirmed quantity becomes active again for the customer. This is standard SAP functionality; once we remove the 101 routine it no longer works like this.
    But ...
    The link below clearly mentions that in the sales order there is a Fixed date/qty field; if we activate it, the system still confirms zero stock in case the customer has crossed his credit limit:
    http://monicaradytia.wordpress.com/2014/02/21/fixed-quantity-and-dates-in-sales-order-document/
    So is the functionality of both the same? I am a bit confused.
    Please share your knowledge.
    Thanks a lot,
    venu

    """"""Generally in OVB8-- 101 Requirement and 1 Routine   controls , once customer crossed his credit limit automatically Confirm qnty become zeo and and once we release that document in VKM3 Again confirmed qnty will be active mode for the customer .... this is sap Function  once we removed 101 routine it wont work like this"""""
    The routine which you are referring to is a prerequisite for updating MRP.
    (Please don't mix this up with credit management -- that is a different concept altogether.)
    Fixed date & qty:
    You can set this indicator at transaction level or at configuration level (as a default for your sales area).
    The purpose of this indicator is to confirm or assure your customer (so that he, in turn, can assure his consumer).
    Once you activate this indicator at transaction level (or as a default for your sales area):
    At customer level:
    You are assuring your customer that you will deliver the required goods on this specific date (customer priority is not the question here).
    At business level:
    After activation of this flag, your MRP will be updated, and if you reschedule all your sales orders, this order will be skipped, so the MRP department needs to plan/procure/produce these goods for the specific date mentioned.
    As far as I know, these two are completely different in their respective properties.
    Needless to mention, whether you are confirming or not, you can still update MRP by using the delivery blocks concept.
    Phanikumar

  • A function with a list of data

    I need to get a list of names from a function so that I can avoid creating separate functions for each name...
    Could anyone give me the syntax for it?
    I don't want a procedure; a function will be good.
    thanks

    I am using Crystal Reports to show the names of a group of people. Instead of each name coming from a different function inside the Oracle package, I would like to have a single function that returns an array of names so that I can return them easily.
    e.g. board member 1 (from function 1)
    board member 2 (from function 2)
    board member 3 (from function 3)
    I should get something like board member n (from a single function that returns n members), so that it's easier to manage the list later and reduces the code.
    This may sound silly, but I am new to both Crystal and PL/SQL....
    please help
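    A common way to do this in PL/SQL is a single function that returns a collection, optionally pipelined so it can be queried like a table. A minimal sketch; the table and column names (board_member, member_name) are made up for illustration:

        -- A SQL collection type to hold the names.
        CREATE OR REPLACE TYPE name_tab AS TABLE OF VARCHAR2(100);
        /

        -- One function returns all the names instead of one function per member.
        CREATE OR REPLACE FUNCTION board_members RETURN name_tab PIPELINED IS
        BEGIN
          FOR r IN (SELECT member_name FROM board_member ORDER BY member_name) LOOP
            PIPE ROW (r.member_name);
          END LOOP;
          RETURN;
        END board_members;
        /

        -- Crystal Reports (or any SQL client) can then treat it like a table:
        SELECT column_value AS member_name FROM TABLE(board_members);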
