iSCSI or SAN? NFS + ASM

Hi,
My question may look stupid, but it's a very simple one:
do I need iSCSI or a SAN to install Clusterware 11gR2 on OEL 5.4 x86_64?
Regards

Gagan Arora wrote:
Try as root on
node1
# /etc/init.d/oracleasm createdisk DATA <node2 nfs share>
# /etc/init.d/oracleasm createdisk FRA <node1 nfs share>
on node2
# /etc/init.d/oracleasm scandisks
# /etc/init.d/oracleasm listdisks
DATA
FRA
if you don't want to use NFS you can create iSCSI targets on each node and an iSCSI initiator on each node.

I don't want iSCSI.
Did you mean exactly this?
[root@rac-1 ~]# mount /dev/sdb10 /u02
[root@rac-1 ~]# vi /etc/exports
[root@rac-1 ~]# exports -a
bash: exports: command not found
[root@rac-1 ~]# export -a
[root@rac-1 ~]# srvice nfs restart
bash: srvice: command not found
[root@rac-1 ~]# service nfs restart
Shutting down NFS mountd:                                  [  OK  ]
Shutting down NFS daemon:                                  [  OK  ]
Shutting down NFS quotas:                                  [  OK  ]
Shutting down NFS services:                                [  OK  ]
Starting NFS services:                                     [  OK  ]
Starting NFS quotas:                                       [  OK  ]
Starting NFS daemon:                                       [  OK  ]
Starting NFS mountd:                                       [  OK  ]
[root@rac-1 ~]# mount -t nfs rac-1:/u02 /mnt/rac-1/nfs1
[root@rac-1 ~]# oracleasm listdisks
VOLUME1
VOLUME2
VOLUME3
VOLUME4
[root@rac-1 ~]# oracleasm createdisk VOLUME5 /mnt/rac-1/nfs1
File "/mnt/rac-1/nfs1" is not a block device
[root@rac-1 ~]# oracleasm createdisk VOLUME5 rac-1:/u02
Unable to query file "rac-1:/u02": No such file or directory
I got only:
[root@rac-1 ~]# oracleasm createdisk VOLUME5 /mnt/rac-1/nfs1
File "/mnt/rac-1/nfs1" is not a block device
Edited by: you on Feb 22, 2010 5:35 AM
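Two small things trip up the transcript above: the command is `exportfs -a` (not `exports` or `export -a`), and `oracleasm createdisk` only accepts block devices, which is exactly why the NFS mount path is rejected. With NFS, a common approach is instead to create zero-padded files on the share and point the ASM instance's asm_diskstring at them. A minimal sketch, with a hypothetical subnet and paths:

```shell
# On the NFS server: export /u02 to the private subnet (addresses are
# hypothetical). This content belongs in /etc/exports; it is written
# to /tmp here only so the sketch is self-contained.
cat > /tmp/exports.example <<'EOF'
/u02  192.168.78.0/24(rw,sync,no_root_squash)
EOF

# Then publish the exports and restart NFS (as root, on a real system):
#   exportfs -a          # note: exportfs, not "exports" or "export -a"
#   service nfs restart

# On each node, after mounting the share, create a zero-padded file to
# act as an ASM disk (10 MB here for illustration; size it for real
# use, e.g. count=1024 for 1 GB):
dd if=/dev/zero of=/tmp/asm_disk1 bs=1M count=10 2>/dev/null

# In real use: chown the file to oracle:dba and set the ASM instance's
# asm_diskstring to match it, e.g. '/u02/oradata/asm_disk*'.
ls -l /tmp/exports.example /tmp/asm_disk1
```

Note that exporting from one node reintroduces a single point of failure for the whole cluster, so this layout is mainly useful for test setups.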

Similar Messages

  • What is best practice for using a SAN with ASM in an 11gR2 RAC installation

    I'm setting up a RAC environment. Planning on using Oracle 11g release 2 for RAC & ASM, although the db will initially be 10g r2. OS: RedHat. I have a SAN available to me and want to know the best way to utilise that via ASM.
    I've chosen ASM as it allows me to store everything, including the voting and cluster registry files.
    So I think I'll need three disk groups: Data (+spfile, control#1, redo#1, cluster files#1), Flashback (+control#2, redo#2, archived redo, backups, cluster files#2) and Cluster (cluster files#3). That last one is tiny.
    The SAN and ASM are both capable of doing much of the same work, and it's a waste to have them both stripe & mirror.
    If I let the SAN do the redundancy work, then I can minimize the data transfer to the SAN. The administrative load of managing the discs falls to the Sys Admin rather than the DBA, so that's attractive as well.
    If I let ASM do the work, it can be intelligent about the data redundancy it uses.
    It looks like I should have LUNs (Logical Unit Numbers) with RAID 0+1, and then mark the disk groups as external redundancy.
    Does this seem the best option ?
    Can I avoid this third disk group just for the voting and cluster registry files ?
    Am I OK to have this lower version of Oracle 10gr2 DB on a RAC 11gr2 and ASM 11gr2 ?
    TIA, Duncan

    Hi Duncan,
    if your storage uses SAN RAID 0+1 and you use "External" redundancy, then ASM will not mirror (only stripe).
    Hence theoretically 1 LUN per diskgroup would be enough. (External redundancy will also only create 1 voting disk, hence only one LUN is needed).
    However there are 2 things to note:
    -> Tests have shown that it is better for the OS to have multiple LUNs, since the I/O can be handled better. Therefore it is recommended to have 4 disks in a diskgroup.
    -> LUNs in a diskgroup should be the same size and should have the same I/O characteristics. If you bear in mind that your database may one day need more space (more disks), then you should use a disk size that can easily be added without wasting too much space.
    E.g.:
    If you have a 900 GB database, does it make sense to use only 1 LUN of 1 TB?
    What happens if the database grows, but only slightly above 1 TB? Then you would have to add another 1 TB disk... You lose a lot of space.
    Hence it makes more sense to use 4 disks of 250 GB each, since the disk group can then be grown more easily (just add another 250 GB disk).
    O.k., there is also the possibility to resize a disk in ASM, but it is a lot easier to simply add an additional LUN.
    PS: If you use a "NORMAL" redundancy diskgroup, then you need at least 3 disks in your diskgroup (in 3 failgroups) to be able to handle the 3 voting disks.
    Hope that helps a little.
    Sebastian
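    Growing a diskgroup by adding another like-sized LUN, as described above, is a single statement in ASM. A sketch only, assuming a diskgroup named DATA and a new 250 GB LUN visible as /dev/mapper/asm_disk5 (both names hypothetical):

```shell
# Run as the Grid/ASM software owner against the ASM instance
# (sketch; requires a working 11gR2 Grid/ASM environment).
sqlplus -s / as sysasm <<'EOF'
-- Add the new LUN; REBALANCE POWER sets how aggressively ASM
-- redistributes existing extents onto it.
ALTER DISKGROUP DATA ADD DISK '/dev/mapper/asm_disk5'
  REBALANCE POWER 4;
-- Rebalance progress can be watched here:
SELECT operation, state, est_minutes FROM v$asm_operation;
EOF
```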

  • SAN vs ASM RAID: which to adopt

    Hi,
    we have a system using ASM and a SAN drive. The SAN has hardware RAID, while ASM can also do the same thing, since as I understand it ASM is based on SAME (Stripe And Mirror Everything). Do you think we have multiple RAID layers in the system, and should we disable the hardware RAID? Also, I think the hardware RAID is RAID 4; please correct me if I am wrong.
    regards
    Nick

    RAID 1+0 or 0+1 is an implementation of striping and mirroring.
    RAID 5 is an implementation of striping with parity (where the parity stripe is scattered among the disks, with RAID 4 the parity stripes are on specific disks)
    see: http://www.baarf.com
    please mind that the baarf website emphasizes the (write) performance penalty of using RAID levels with parity. Even with SANs this is still very true (it's inherent in the way parity is implemented).
    ASM normal redundancy is essentially a mirror implementation, though it differs subtly from the way RAID mirroring works.
    This means that RAID 1+0/0+1 on the SAN level is one way of mirroring, and ASM normal redundancy is another way of keeping mirror copies of blocks; having RAID mirroring on the SAN level and normal redundancy on the ASM level means you have got 4 copies on the same storage box.
    So I would say there's little benefit in having ASM normal redundancy on top of a mirrored stripeset.
    As advice on what to do: most ASM implementations use external redundancy, which means the redundancy of the SAN is used. I think this makes sense.
    Using normal redundancy makes sense when using local (non-RAID) disks, or when having multiple SANs.

  • Placing RAC db redolog members on SAN disks (ASM) and local SCSI disks

    Hi
    Kindly advise whether I would face a performance problem if I placed one db redolog member on ASM and the 2nd on a local server SCSI disk.
    Thanks

    As long as the local disk is not in contention due to other files being present, and is not simply slow, you should be fine. But make sure the local disk is visible to the other node as well, because that would be required in the case of recovery.
    HTH
    Aman....

  • ASM Volumes on thin-provisioned SAN dirtying all blocks

    Hi there, sorry for the x-post from database-general but it was suggested that I do so. Anyhow, we've got 11g (11.1.0.7 with the 6851110 ASM patch recently applied) running on OEL 5 x86_64, with ASM connected to a raw, thin-provisioned ISCSI volume partitioned for DATA and FRA, and in every case where we do so, the SAN device reports within a few weeks that the whole volume has been allocated even though the DB (configured with autoextend on) is only holding about one tenth of the amount of available space on the device. What this means in systems terms is that somehow ASM is marking writes to nearly every block on the drive if only momentarily.
    In the original thread, there was speculation that a process of indexing AUs has led to the dirtying of the whole volume, but this would make more sense if the whole disk had been allocated immediately rather than over the course of a few weeks. My question is: what else could account for this behavior, and what steps can I take to help ensure that ASM behaves correctly in a thin-provisioned volume? (by "correctly" I mean writes contiguous blocks of data and doesn't dirty the whole thing)
    Thanks!

    Hi,
    recently I had some time and did some tests with thin provisioning and ASM.
    I used storage based on OpenSolaris with ZFS thin provisioning against an 11g R2 database with 11g R2 ASM running on Windows. I created two LUNs and exported them via iSCSI. On the ASM side I formed a single disk group with external redundancy from the two LUNs presented, and created one big-file tablespace with approx. 15 GB total size.
    The storage system shows the LUNs as follows:
    NAME                       PROPERTY       VALUE    SOURCE
    pool1/iscsi-racwin-temp05  volsize        15G      local
    pool1/iscsi-racwin-temp05  usedbydataset  7.45G    -
    pool1/iscsi-racwin-temp06  volsize        15G      local
    pool1/iscsi-racwin-temp06  usedbydataset  7.45G    -
    You can see: 15 GB total size reported while 7.45 GB are allocated. That's pretty normal due to the data file created in the disk group.
    During the night I ran a script which imported a schema and dropped it afterwards. The steps were repeated indefinitely.
    After more than 24 hours the thin-provisioned disks looked like this:
    NAME                       PROPERTY       VALUE    SOURCE
    pool1/iscsi-racwin-temp05  volsize        15G      local
    pool1/iscsi-racwin-temp05  usedbydataset  7.47G    -
    pool1/iscsi-racwin-temp06  volsize        15G      local
    pool1/iscsi-racwin-temp06  usedbydataset  7.47G    -
    As you can see, there is an extremely small growth in size (from 7.45 GB to 7.47 GB). I observed this growth shortly after starting the very first import. Subsequent imports did not increase the actual allocated volume size.
    So if we exclude the storage as a source of problems, there might be the fact that 11g R1 ASM behaves differently from 11g R2 ASM. I have not yet tested this...
    Ronny Egner
    My Blog: http://blog.ronnyegner-consulting.de
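    The allocation figures quoted above can be pulled straight from the storage host. A sketch, using the pool/volume names from the post (adjust for your own system):

```shell
# Run on the (Open)Solaris storage host: report provisioned size vs.
# space actually allocated for each thin LUN, without table headers.
zfs get -H -o name,property,value volsize,usedbydataset \
    pool1/iscsi-racwin-temp05 pool1/iscsi-racwin-temp06
```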

  • NFS vs iSCSI IOPS differ - why?

    Hello -
    I recently setup an environment utilizing 8 IO Analyzers each accessing either an iSCSI LUN or NFS Share (but not both at the same time). The secondary virtual disk was set at 30GB.
    For the iSCSI tests, we ran a 50/50 0% random workload and our total IOPS reached 4086.11.
    When we created the second disk on an NFS datastore and ran the same test as above, our total IOPS only reached 583.57 for the same time period (2 hours).  Additionally, latency doubled.
    I checked IOMeter on the guests, and it appeared they were pushing fewer IOPS as well.
    Any ideas as to why we couldn't push as many IOPS using NFS? I would think the amount of IOPS would be the same across tests, regardless of the backend?
    Thanks in advance.

    For those wondering about ZFS and implications on performance, you may want to visit this thread that helped me understand: https://pthree.org/2013/04/19/zfs-administration-appendix-a-visualizing-the-zfs-intent-log/

  • Difference between ASM and SAN

    Dear all Guru's,
    I have a question: what is the difference between SAN and ASM? A SAN also provides mirroring like ASM, so why should we go for ASM?
    Regards,
    Jam

    926840 wrote:
    But I want to know: if I don't use ASM, or do use the ASM storage layer for Oracle dbf files, what is the benefit of using ASM storage? Some organizations don't use ASM storage.
    There are two basic types of storage for Oracle databases.
    "Cooked" file system. This means the disk is formatted and managed via a specific file system driver. On Windows, that would be ntfs. On Linux, that typically would be something like ext3.
    This file system is managed by the file system driver loaded into the kernel. It does directory and file management, services I/O calls to read and write files, provides caching, and so on.
    ASM does not manage cooked file systems, as those already have a driver that manages the formatted drive/partition.
    Raw disks are the second storage type. This means the disk is unformatted - and as such, you cannot (via standard o/s commands) use the drive to create/update/delete/etc. directories and files.
    ASM supports managing such raw (non-cooked) devices. As these devices are directly used by the database, the database can better control and manage device access.
    Simple example. The database writes 10 8KB database blocks to a cooked file system. It has no idea whether those 10 blocks were actually written to disk.. or whether the file system driver has cached that write and all 10 blocks still sit in memory. The database itself also has a cache. So there are now 2 memory caches.. a physical database I/O may actually be a logical file system I/O.. These factors make managing and determining performance complex.
    A raw non-cooked device does not have such a secondary cooked file system cache. If the database does a physical write to disk, the data is actually written to disk. There is a single memory cache that is managed by the database. Less complexity and less ambiguity. And it can also mean better performance.
    Manually managing raw devices is, however, complex. And in the past it was problematic, as the database used the raw device directly without providing any type of management layer for it. ASM addresses that issue - and addresses it very well.
    Using SAN LUNs as cooked file systems for an Oracle database does not make much sense IMO. SAN LUNs should be used as raw devices for the database - and ASM used as management layer for these devices.
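    The double-caching point above can be seen from userland with dd: a plain write returns as soon as the data is in the file-system cache, while conv=fsync forces it to stable storage before dd exits (and oflag=direct would bypass the page cache entirely, which is roughly what the database gets from a raw/ASM device). A small self-contained sketch:

```shell
# Write 8 MB and force it to stable storage before dd returns; on a
# cooked file system the same write without conv=fsync may still sit
# entirely in the page cache when dd exits.
dd if=/dev/zero of=/tmp/fsync_test bs=1M count=8 conv=fsync 2>/dev/null
ls -l /tmp/fsync_test
```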

  • NFS performance over ASM?

    On NetApp storage: 11gR2 RAC on Linux using ASM on NFS, versus 11gR2 RAC on Linux using NFS without ASM. Please advise which one is better in terms of performance for a DW. The DB is going to be around 20 TB. Appreciate your reply.

    Hi;
    Please check the links below, which could be helpful for your issue:
    Using NFS with ASM…
    http://www.oracle-base.com/blog/2010/05/04/using-nfs-with-asm/
    Oracle NetApp white paper..
    http://www.oracle.com/technology/products/database/asm/pdf/netapp_asm3329.pdf
    http://kevinclosson.wordpress.com/kevin-closson-index/cfs-nfs-asm-topics/
    Regards,
    Helios

  • Can you re-export an NFS mount as an NFS share?

    If so what is the downside?
    I'm asking because we currently have an iSCSI SAN, and a recent upgrade
    severely degraded iSCSI connectivity; consequently I can't mount my iSCSI
    volumes.
    Thanks,
    db

    Originally Posted by David Brown
    The filer/san NFS functionality is working normally. I can't access
    some of the iscsi luns. Thinking of just using NFS as the backend.
    Which would be a better sub forum?
    Thank you,
    db
    Depending on which Novell OS you are running... this subforum is for NetWare, but I suspect you are using OES Linux.
    I've never tried creating an NCP share on OES for a remote NFS mount on the server. My first guess would be that it is not allowed, and also not a good practice. However, if you are running an OES2 or OES 11 Linux server, you could try configuring an NFS mount on the OES server and then configuring the NCP share on top of that using Remote Manager on the server.
    What I would recommend, however, is to see whether the iSCSI issue cannot be fixed or worked around.
    Could you describe a bit more of the situation there / what happened and what is not working on that end?
    -Willem

  • Clustered ASM question

    Can you have clustered ASM instance (RAC) with 2 nodes, but each node to use local storage only (local disks only) and somehow share it with other node?
    Hope question is clear enough.
    Let's say I have two physical machines but no money for anything attached (NAS,SAN,NFS). Just those two machines and make them somehow clustered ASM storage.
    Sorry for asking stupid questions ... still a newbie.
    10gR2 + Linux x86

    I am not aware of anything like that. But for testing purposes you can use what's called Openfiler... below is from the doc:
    Powered by rPath Linux, Openfiler is a free browser-based network storage management utility that delivers file-based Network Attached Storage (NAS) and block-based Storage Area Networking (SAN) in a single framework.
    http://www.oracle.com/technology/pub/articles/hunter_rac10gr2_iscsi.html

  • After adding (in a wrong way) a new disk in ASM, the cluster doesn't start

    Hi,
    Environment:
    Oracle RAC 10gR2
    2 nodes windows 2003 advanced server (node1 called "database1" and node2 called "database2").
    ASM
    san storage
    Case:
    My client has tried to add a new disk (in a environment san) in ASM with the next steps:
    1. database1 and database2 are up.
    2. Instance ASM1 from database1 is up and instance ASM2 from database2 is down (I don't know why).
    3. Added a new disk from the SAN storage to instance ASM1. At the end, a lot of errors showed on screen because instance ASM2 was down.
    4. The client tried to bring up +ASM2, and then the whole cluster crashed.
    5. Rebooted both nodes, database1 and database2, and Windows 2003 Server doesn't start well (it hangs, because the Oracle cluster services are set to start automatically).
    6. Set the Oracle services to not start automatically, and Windows 2003 Server starts fine.
    7. Now Oracle RAC doesn't start: ASM doesn't start, instances don't start...
    what can I do? any idea?
    Regards.
    Laura

    Did you try to bring U1 and its associated ASM instance up while the other node is shut down? Are you using Oracle Clusterware?

  • Oracle 10g RAC Database Migration from SAN to New SAN.

    Hi All,
    Our client has implemented a two-node Oracle 10g R2 RAC on HP-UX v2. The database is on ASM and on an HP EVA 4000 SAN. The database size is around 1.2 TB.
    Now the requirement is to migrate the Database and Clusterware files to a New SAN (EVA 6400).
    SAN to SAN migration can't be done as the customer didn't get license for such storage migration.
    My immediate suggestion was to connect the New SAN and present the LUNs and add the Disks from New SAN and wait for rebalance to complete. Then drop the Old Disks which are on Old SAN. Exact Steps To Migrate ASM Diskgroups To Another SAN Without Downtime. (Doc ID 837308.1).
    The client wants us to suggest alternate solutions, as they are worried that presenting LUNs from the old SAN and new SAN at the same time may cause issues, and also that if the rebalance fails it may affect the database. Also, they cannot estimate the time to rebalance a 1.2 TB database across disks from 2 different SANs. The downtime window is only 48 hours.
    One wild suggestion was to:
    1. Connect the New SAN.
    2. Create New Diskgroups on New SAN from Oracle RAC env.
    3. Backup the Production database and restore on the same Oracle RAC servers but on New Diskgroups.
    4. Start the database from new Diskgroup location by updating the spfile/pfile
    5. Make sure everything is fine then drop the current Diskgroups from Old SAN.
    Will the above idea work in Production env? I think there is a lot of risks in doing the above.
    Customer does not have Oracle RAC on Test env so there isn't any chance of trying out any method.
    Any suggestion is appreciated.
    Rgds,
    Thiru.

    user1983888 wrote:
    Hi All,
    Our client has implemented a Two Node Oracle 10g R2 RAC on HP-UX v2. The Database is on ASM and on HP EVA 4000 SAN. The database size in around 1.2 TB.
    Now the requirement is to migrate the Database and Clusterware files to a New SAN (EVA 6400).
    SAN to SAN migration can't be done as the customer didn't get license for such storage migration.
    My immediate suggestion was to connect the New SAN and present the LUNs and add the Disks from New SAN and wait for rebalance to complete. Then drop the Old Disks which are on Old SAN. Exact Steps To Migrate ASM Diskgroups To Another SAN Without Downtime. (Doc ID 837308.1).
    The client wants us to suggest alternate solutions, as they are worried that presenting LUNs from the old SAN and new SAN at the same time may cause issues, and also that if the rebalance fails it may affect the database. Also, they cannot estimate the time to rebalance a 1.2 TB database across disks from 2 different SANs. The downtime window is only 48 hours.
    Adding and removing LUNs online is one of the great features of ASM. The rebalance is performed while the database is up. No downtime!
    If your customer does not trust ASM, Oracle Support can answer all doubts.
    If there is any concern, contact Oracle Support to guide you in the best way to perform this work.
    >
    One wild suggestion was to:
    1. Connect the New SAN.
    2. Create New Diskgroups on New SAN from Oracle RAC env.
    3. Backup the Production database and restore on the same Oracle RAC servers but on New Diskgroups.
    4. Start the database from new Diskgroup location by updating the spfile/pfile
    5. Make sure everything is fine then drop the current Diskgroups from Old SAN.
    ASM supports many terabytes; if you needed to migrate 3 databases of 20 TB each, the backup-and-restore approach described above would be very laborious. So adding and removing LUNs online is a feature that should be used.
    Get approval from Oracle Support and do this work using the ASM rebalance.
    Regards,
    Levi Pereira
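    The add-and-drop approach from Doc ID 837308.1 mentioned above can be done in a single statement, so ASM performs one rebalance instead of two. A sketch only, with hypothetical device and disk names:

```shell
# Run against the ASM instance once the new SAN's LUNs are visible on
# all nodes (sketch; device paths and disk names are placeholders).
sqlplus -s / as sysdba <<'EOF'
ALTER DISKGROUP DATA
  ADD DISK '/dev/rdsk/new_san_disk1', '/dev/rdsk/new_san_disk2'
  DROP DISK DATA_0000, DATA_0001
  REBALANCE POWER 8;
-- When V$ASM_OPERATION returns no rows, the rebalance is complete
-- and the old SAN's LUNs can be unpresented.
SELECT operation, state, est_minutes FROM v$asm_operation;
EOF
```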

  • Shared partition creation (SAN EVA 4400) for OCR/Voting and ASM on Linux RHEL 5

    Hi all,
    We are going to install a database (Oracle 10g) with RAC on Linux Red Hat Enterprise 5. The storage we are using is SAN shared storage; our SAN vendor has finished configuring and presenting the disk storage (500 GB) to the 2 servers. The name of the volume group is VG_ORADATA. This volume will be used for OCR, voting disk and ASM (database files).
    Our problem is to create and configure the shared partition for the 2 servers.
    Here is what i want to do, i want to create:
    ocr_partition (25 Gb) : for OCR
    vote_partition (25 Gb): for Voting disk
    oradata_part1 (150 Gb): for database (ASM)
    oradata_part2 (150 Gb): for database (ASM)
    oradata_part3 (150 Gb): for database (ASM)
    The problem is how to create these partitions and mount them on the 2 servers, and which tools we have to use for this task. Can I use LVM (Logical Volume Manager) on Linux for this?
    Thanks for the help; this is very urgent for us.
    Thank you
    Raitsarevo.

    My thoughts.
    1. Your OCR partition size is too high by several orders of magnitudes. Read the docs and size it appropriately.
    2. Your Voting disk partition size is also far too large. Again read the docs and size it appropriately.
    3. If you have RAC you have one and only one database (not 3)
    4. ASM is not a database and requires no room on your SAN. ASM is an instance that must reside on your servers (nodes).
    5. Where are you putting your flashback recovery area, archived redo logs, flashback logs, etc?
    6. Where are you putting your online backup files?
    7. If you have ASM throw the LVM away. It will do nothing for you other than eat CPU.
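    Following the advice above (no LVM, and far smaller OCR/voting partitions), the shared LUN can simply be partitioned with parted on one node. A sketch, assuming the LUN appears as /dev/sdb (hypothetical device name; sizes are illustrative, check the 10g docs for the exact minimums):

```shell
# Run as root on ONE node only; the partition table lives on the
# shared LUN, so the other node just re-reads it.
parted -s /dev/sdb mklabel msdos
parted -s /dev/sdb mkpart primary 1MiB 301MiB      # OCR (~300 MB)
parted -s /dev/sdb mkpart primary 301MiB 601MiB    # voting disk
parted -s /dev/sdb mkpart primary 601MiB 100%      # ASM data
# On the second node, pick up the new partition table:
#   partprobe /dev/sdb
```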

  • Unable to use device as an iSCSI target

    My intended purpose is to have iSCSI targets for a VirtualBox setup at home, where block devices back the systems and existing data on a large RAID partition is exported as well. I was able to create the block files with dd and successfully export them by adding them as backing-stores in the targets.conf file:
    include /etc/tgt/temp/*.conf
    default-driver iscsi
    <target iqn.2012-09.net.domain:vm.fsrv>
    backing-store /srv/vm/disks/iscsi-disk-fsrv
    </target>
    <target iqn.2012-09.net.domain:vm.wsrv>
    backing-store /srv/vm/disks/iscsi-disk-wsrv
    </target>
    <target iqn.2012-09.net.domain:lan.storage>
    backing-store /dev/md0
    </target>
    but the last one, with /dev/md0, only creates the controller and not the disk.
    The RAID device is mounted; I don't know whether that matters, but unfortunately I can't try it unmounted yet because it is in use. I've tried all permutations of backing-store and direct-store with md0 as well as another device (sda), with and without the partition number; all had the same result.
    If anyone has been successful exporting a device (specifically a multi-disk device) I'd be really interested in knowing how. Also, if anyone knows how, or whether it's even possible, to use a directory as the backing/direct store, I'd like to know that as well; my attempts there have been unsuccessful too.
    I will preempt anyone asking why I'm not using some other technology (e.g. NFS, CIFS, ZFS, etc.) by saying that this is largely academic. I want to compare the performance of a virtualized file server that receives its content served over both NFS and iSCSI, and the NFS part is easy.
    Thanks.
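    When tgtd creates only the controller (LUN 0) for a backing store, adding the LUN by hand usually surfaces the actual error, which the config file silently swallows. A sketch of the relevant tgtadm calls (run as root; the target id 3 is hypothetical):

```shell
# Show what tgtd actually built from targets.conf: each target should
# list LUN 0 (controller) plus LUN 1 (the disk).
tgtadm --lld iscsi --mode target --op show

# Try attaching /dev/md0 manually; unlike the config file, this
# reports a reason on failure (e.g. device busy because it is mounted).
tgtadm --lld iscsi --mode logicalunit --op new \
       --tid 3 --lun 1 --backing-store /dev/md0
```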


  • What is the minimum required for 10g RAC AIX 5.2L on a SAN

    We have the following
    2 IBM p550's with AIX 5.2L
    Oracle 10g
    Hitachi SAN
    The questions I have are
    1. Does one need any addtional software such as HACMP ?
    2. Can one use just RAW 'disks' and not use ASM ?
    3. Can one using just the raw 'disks' presented by the SAN with ASM ?
    Anything else ?
    Thank you,
    -pete

    Oracle says that from 10g onwards there is no need for any other cluster software, since it provides CRS (Cluster Ready Services) with 10g. However, having HACMP installed would be a good idea. I heard of many problems, in early releases of 10g, with using just CRS instead of other cluster services.
    Well, we are using SHARQ d240 storage, but didn't require any raw partitions, and we are also not using ASM. Basically, we are in testing mode.
    Jaffar
