Advantages of Shared Storage in SOA Cluster

Hi,
The Enterprise Deployment Guide (http://download.oracle.com/docs/cd/E15523_01/core.1111/e12036/toc.htm) describes installing binaries on shared storage.
We have NAS as shared storage.
My question is: what are the advantages/disadvantages of installing binaries on shared storage?
One advantage I know of, as mentioned in the guide, is that we can create multiple SOA servers from a single installation.
Thanks
Manish

It has always been my understanding that shared storage is a prerequisite, not a recommendation, meaning if you want a cluster configuration you must have shared storage. I have had a quick look through the EDG and can't see any reference to installing binaries on non-shared storage.
I'm not 100% sure on this, but I don't believe the WLS and SOA homes are used at run time. The run-time files are in the managed server location, e.g. user_projects. By default this sits in the WLS home.
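For illustration, a rough sketch of the split the EDG implies - the paths below are examples, not taken from the guide:
/u01/app/oracle/product/fmw     # shared binaries (WLS + SOA) on the NAS - one install, many servers
/u01/app/oracle/admin/domains   # Admin Server domain home, also on shared storage
/u02/private/oracle/domains     # per-node managed server homes - the run-time files mentioned above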
Also, I don't know much about shared storage, e.g. NAS versus SAN, but if you already have NAS, it seems the logical choice.
cheers
James

Similar Messages

  • Shared Storage ?!

    Hi all,
We will have a SOA environment in a cluster with two machines in an active-active configuration.
Must I have shared storage for this configuration? I understood that it is needed for an active-passive configuration, but I think I might be wrong.
Could you gurus clarify shared storage for me?
    Thanks in advance.
    Regards
    Luiz Philip

    Luiz,
    You might have got the answers till now but in case you haven't then -
So for these WebLogic features there's no other way to accomplish this without using shared storage?
Yes, shared storage is required for these features.
Getting back to redundant binary installation: wouldn't having just one single binary installation to create multiple SOA servers lead to additional network traffic? When these SOA servers are created, will they have run-time files locally too, or will they always use the files on shared storage while running?
Not really. All the servers deploy the applications during start-up, and at run time those deployments are used for processing. So the class loading happens once, and after that the loaded classes are used for processing.
You may also like to refer to:
    Advantages of Shared Storage in SOA Cluster
    http://www.oracle.com/technetwork/database/features/availability/fusion-middleware-maa-155387.html
    Regards,
    Anuj

  • DFSr supported cluster configurations - replication between shared storage

    I have a very specific configuration for DFSr that appears to be suffering severe performance issues when hosted on a cluster, as part of a DFS replication group.
    My configuration:
3 physical machines (blades) within one physical quadrant.
3 physical machines (blades) hosted within a separate physical quadrant.
Both quadrants are extremely well connected: local, 10 Gbit/s fibre.
    There is local storage in each quadrant, no storage replication takes place.
    The 3 machines in the first quadrant are MS clustered with shared storage LUNs on a 3PAR filer.
    The 3 machines in the second quadrant are also clustered with shared storage, but on a separate 3PAR device.
8 shared LUNs are presented to the cluster in the first quadrant, and an identical storage layout is connected in the second quadrant. Each LUN has an HAFS application associated with it which can fail over onto any machine in the local cluster.
DFS replication groups have been set up for each LUN, and data is replicated from an "Active" cluster node entry point to a "Passive" cluster node that provides no entry point to the data via DFSn and holds a read-only copy on its shared cluster storage.
    For the sake of argument, assume that all HAFS application instances in the first quadrant are "Active" in a read/write configuration, and all "Passive" instances of the HAFS applications in the other quadrants are Read-Only.
This guide: http://blogs.technet.com/b/filecab/archive/2009/06/29/deploying-dfs-replication-on-a-windows-failover-cluster-part-i.aspx defines how to add a clustered service to a replication group. It clearly shows using "shared storage" for the cluster, which is common sense; otherwise no application fail-over is effectively possible, which removes the entire point of using a resilient cluster.
    This article: http://technet.microsoft.com/en-us/library/cc773238(v=ws.10).aspx#BKMK_061 defines the following:
    DFS Replication in Windows Server 2012 and Windows Server 2008 R2 includes the ability to add a failover cluster
    as a member of a replication group. The DFS Replication service on versions of Windows prior to Windows Server 2008 R2
    is not designed to coordinate with a failover cluster, and the service will not fail over to another node.
It then goes on to state, quite incredibly: "DFS Replication does not support replicating files on Cluster Shared Volumes."
Stating quite simply that DFSr does not support Cluster Shared Volumes makes absolutely no sense at all after stating that clusters are supported in replication groups, and a TechNet guide is provided to set up and configure exactly this configuration. What possible use is a clustered HAFS solution that has no shared storage between the clustered nodes - none at all.
My question: I need some clarification. Is the text meant to read "between" Cluster Shared Volumes?
The storage configuration must be shared in order to form a clustered service in the first place. What we are seeing from experience is a serious degradation of performance when attempting to replicate/write data between two clusters running an HAFS configuration in a DFS replication group.
If, for instance, as a test, local/logical storage is mounted to a physical machine, the performance of a DFS replication group between the unshared, logical storage on the physical nodes approaches 15k small files per minute on initial write, and is even higher for file amendments. When replicating between two nodes in a cluster with shared clustered storage, the solution manages a weak 2,500 files per minute on initial write and only 260 files per minute when attempting to update data/amend files.
By testing various configurations we have effectively ruled out the SAN, the storage, drivers, firmware, the DFSr configuration and the replication group configuration - the only factor left that makes any difference is replicating from shared clustered storage to another shared clustered storage LUN.
    So in summary:
    Logical Volume ---> Logical Volume = Fast
    Logical Volume ---> Clustered Shared Volume = ??
Clustered Shared Volume ---> Clustered Shared Volume = Pitifully slow
    Can anyone explain why this might be?
The guidance in the article is in clear conflict with all other evidence provided around DFSr and clustering; however, it may point to why we are seeing a real issue with replication performance.
    Many thanks for your time and any help/replies that may be received.
    Paul

    Hello Shaon Shan,
I am also seeing the same scenario at one of my customers' sites.
We have two file servers running on Hyper-V 2012 R2 as guest VMs using Cluster Shared Volumes. Even the data partition drive is part of a CSV.
It's really confusing whether DFS Replication on CSV is supported or not, and what the consequences of using it would be.
To my knowledge, we have some customers using Hyper-V 2008 R2 with DFS configured and running fine on CSV for more than 4 years without any issue.
I would appreciate it if you could elaborate and explain in detail the limitations of using CSV.
    Thanks in advance,
    Abul

How does the cluster work when the shared storage disk is offline to the primary?

    Hi All
I have configured the cluster as below:
    Number of nodes: 2
    Quorum devices: one Quorum server, shared disks
    Resource Group with HA-storage, Logical host name, Apache
My cluster works fine when either node loses connectivity or crashes, but not when I deny the primary node (on which the HA storage is mounted) access to the shared disks.
The cluster didn't fail over the whole RG to the other node.
I tried to add the HA storage disks to the quorum devices, but it didn't help.
In any case, I can't do any I/O on the HA storage from the respective node.
NOTE: This is the same case even on a zone cluster.
Please guide me. Below is the output of the # cluster status command:
=== Cluster Nodes ===
    --- Node Status ---
    Node Name Status
    sol10-1 Online
    sol10-2 Online
    === Cluster Transport Paths ===
    Endpoint1 Endpoint2 Status
    sol10-1:vfe0 sol10-2:vfe0 Path online
    --- Quorum Votes by Node (current status) ---
    Node Name Present Possible Status
    sol10-1 1 1 Online
    sol10-2 1 1 Online
    --- Quorum Votes by Device (current status) ---
    Device Name Present Possible Status
    d6 0 1 Offline
    server1 1 1 Online
    d7 1 1 Offline
    === Cluster Resource Groups ===
    Group Name Node Name Suspended State
    global sol10-1 No Online
    sol10-2 No Offline
    === Cluster Resources ===
    Resource Name Node Name State Status Message
    global-data sol10-1 Online Online
    sol10-2 Offline Offline
    global-apache sol10-1 Online Online - LogicalHostname online.
    sol10-2 Offline Offline
    === Cluster DID Devices ===
    Device Instance Node Status
    /dev/did/rdsk/d6 sol10-1 Fail
    sol10-2      Ok
    /dev/did/rdsk/d7 sol10-1 Fail
    sol10-2 Ok
    Thanks in advance
    Sid

Not sure what you mean by "deny access", but it could be that reboot on path failure is disabled. This should
enable it:
    # clnode set -p reboot_on_path_failure=enabled +
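To verify the setting afterwards, something like this should work (assuming the standard clnode show syntax):
# clnode show -p reboot_on_path_failure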
    HTH,
    jono

10g RAC on Veritas Cluster Software and Shared Storage

    1. Install oracle binaries and patches (RAC install)
    2. Configure Cluster control interface (shared storage) for Oracle
    3. Create instances of oracle
These are the 3 things I am wondering how to handle. I did all of these on Oracle Clusterware, but never on Veritas Cluster Server... are these 3 steps the same or different? Can someone help?

How can we do this while using Veritas cluster software?
    1. Install oracle binaries and patches (RAC install)
    2. Configure Cluster control interface (shared storage) for Oracle
    3. Create instances of oracle
If we install RDBMS 10.2.0.1 with the standard installer, will it detect VCS, and when we run DBCA, will it offer the RAC DB option?
And what does "Configure Cluster control interface (shared storage) for Oracle" mean?

  • SQL2008R2 cluster w/ several Named Instances -- shared storage best practice?

Planning a four-node SQL2008R2 cluster with three named instances. Each named instance requires exclusive use of a drive letter on shared storage. Does the named instance need all its files (data, logs, tempdb) on that exclusive drive? Or can it use a drive shared by all 3 instances, e.g. U:\SQLBackup\<instance-name>\...?
    Thanks,
    Bob
     

You will need at least one drive for each instance + 1 for the cluster quorum (unless you go for a file share).
My recommendation would be:
    Instance1
    E:\SQLDataFiles
    F:\SQLLogFiles
    G:\SQLTempFiles
    Instance2
    H:\SQLDataFiles
    I:\SQLLogFiles
    J:\SQLTempFiles
And so on. If you are concerned that you might run out of drive letters, you could assign a single drive letter per instance and then attach the 3 drives as mount points inside that drive. That way you save 2 letters per instance.
As for just using one single drive per instance with all 3 kinds of files: don't go there - the performance gain of splitting them into 3 drives as laid out above is at least 50% in my experience. Remember also to format the SQL drives with an NTFS block size of 64K.
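For reference, a rough sketch of the mount-point approach from an elevated prompt (volume numbers and paths are examples only - check yours with "list volume" first):
C:\> diskpart
DISKPART> list volume
DISKPART> select volume 5
DISKPART> assign mount="E:\SQLLogFiles"
DISKPART> exit
C:\> format E:\SQLLogFiles /FS:NTFS /A:64K /Q
Note that the target folder (E:\SQLLogFiles here) must already exist and be empty before diskpart will mount into it.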
    Regards
    Rasmus Glibstrup, SQLGuy
    http://blog.sqlguy.dk

10g RAC on Veritas Cluster Software & Shared Storage

We are in the process of building 10g RAC without using Oracle Clusterware; we will be using Veritas Cluster software and Veritas shared storage. I am looking for some quick notes/articles on setting up/installing this RAC configuration.

    Step-By-Step Installation of 9i RAC on VERITAS STORAGE FOUNDATION (DBE/AC) and Solaris
    Doc ID: Note:254815.1
These are the notes I was looking for. Question: only the RDBMS version will change, all other setup will be the same as mentioned in the note, and the DBA work will start from creating the DBs, right?

  • Disk replication for Shared Storage in Weblogic server

    Hi,
Why do we need disk replication in WebLogic Server for shared storage systems? What is the advantage of it, and how can disk replication be achieved in WebLogic for shared storage that contains the common configuration and software used by a pool of client machines? Please clarify.
    Thanks.

    Hi,
I am not the middleware expert. However, ACFS (Oracle Cloud File System) is a cluster file system which also has replication functionality:
http://www.oracle.com/technetwork/database/index-100339.html
Maybe you will also find the information you need on the MAA website: www.oracle.com/goto/maa
    Regards
    Sebastian

  • How to Create Shared Storage using VM-Server 2.1 Red Hat Enterprise Linux 5

    Thanks in advance.
Please describe in sequence how to create shared storage for a two-guest/node Red Hat Enterprise Linux cluster using Oracle VM Server 2.1 on Red Hat Enterprise Linux 5, using the command line or the appropriate interface.
How do I create shared storage using Oracle VM Server 2.1?
How do I configure the network for a two-node cluster (Oracle Clusterware)?

    Hi Suresh Kumar,
    Oracle Application Server 10g Release 2, Patch Set 3 (10.1.2.3) is required to be fully certified on OEL 5.x or RHEL 5.x.
    Oracle Application Server 10g Release 2 10.1.2.0.0 or 10.1.2.0.1 versions are not supported with Oracle Enterprise Linux (OEL) 5.0 or Red Hat Enterprise Linux (RHEL) 5.0. It is recommended that version 10.1.2.0.2 be obtained and installed.
Which implies Oracle AS 10.1.2.x is somewhat certified on RHEL 5.x.
I think it would be better if you get in touch with Oracle Support regarding this.
Sorry, I am not aware of any document on migration from Sun Solaris to RH Linux 5.2.
    Thanks,
    Sutirtha

  • Pointing existing RAC nodes to a fresh Shared Storage discarding old one

    Hi,
    I have a RAC Setup with the Primary Database on Oracle 10gR2.
For this setup, there is also a physical standby database (using a Data Guard configuration) with a 30-minute delay.
    Assume that the "Shared Storage" of the Primary DB fails completely.
In the above scenario, my plan is to populate a fresh shared storage device from the physical standby database and then "point" the RAC nodes to the new shared storage.
    Is this possible?
    Simply put, how can I refresh the Primary database using the Standby Database?
Please help with the utilities (RMAN, Data Guard, other non-Oracle products, etc.) that can be used to do this.
    Regards
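For what it's worth, a minimal sketch of a 10gR2 failover that activates the standby as the new primary (standard SQL*Plus syntax; a Data Guard Broker failover or an RMAN duplicate from the standby would be managed alternatives):
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH FORCE;
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
SQL> ALTER DATABASE OPEN;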

Is the following shared device configuration fine for 10g RAC on Windows 2003?
• 1 SCSI drive
• Two PCI network adapters on each node in the cluster.
• Storage cables to attach the shared storage device to all computers.
Regards.

  • Qmaster Compressor and Shared Storage

    I've got Qmaster working on my compressor encodes and everything is working fine. What I want to know is whether there is a way to cut out the extra copy of data for the encode. All of my machines in the cluster have the same shared storage mounted on the desktop, but still my jobs get copied first and then encoded.
Is there any way to configure Qmaster, Compressor and the cluster machines so they all use their local path for source files, and avoid the extra copy of the source files to the controller system?
    Thanks in advance.

    You want to make sure in Compressor, under Preferences, the Cluster Options field says Never Copy Source to Cluster. Assuming you have all of the correct permissions set and have the Shared Storage configured properly then it should work. "Should" being the key word.

  • Qmaster error: shared storage client timed out while subscribing to...

    Here's my Qmaster setup:
    computer 1: CONTROLLER, no nodes
    - 8TB RAID hooked up via Fiber
    - connected to the GigE network switch via a 6-port bond
    - cluster storage set to a path on the RAID
    computers 2, 3, 4, 5: RENDER NODES
    - each computer has a 2-port bonded connection with the GigE switch
    computer 6: Client, with FCS2 installed.
    - connected with a single GigE link
    I have set up this cluster primarily for command-line renders, and it works great. I submit command-line renders from the client computer, which get distributed and executed on each node. The command line renders specify a source file on the RAID, and a destination path on the RAID. Everything works great.
    I run into trouble when trying to use Compressor with this same setup. The files are on the RAID, and all my computers have an NFS automount that puts it in the /Volumes folder on each computer.
I set up my Compressor job and submit it to the cluster. It submits successfully and distributes the work. After a few seconds, each node gives me a timeout error:
    "Shared storage client timed out while subscribing to [computer1.local/path to cluster storage]"
Is this a bandwidth issue? Command-line renders work fine; I can render 16 simultaneous QuickTimes to the RAID over NFS. I don't see much network activity on any of the computers when it's trying to start the Compressor render; it's as if it's not even trying to connect.
    If I submit the SAME compressor job to a cluster with nodes ONLY on the controller computer, it renders fine. Clearly the networked nodes are having trouble connecting to the share for some reason.
    Does anybody have any ideas? I have tried almost everything to get this to work. Hooking up each node locally to the RAID is NOT an option unfortunately.
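For what it's worth, a couple of checks worth running from one of the render nodes (a minimal sketch - the hostname is from the post above, while the RAID path and the resvport option are assumptions):
showmount -e computer1.local                                                # confirm the RAID path is actually exported
sudo mount -t nfs -o resvport computer1.local:/Volumes/RAID /Volumes/RAID   # resvport is often needed for NFS mounts on OS X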

WELL I DO NOW!
Thanks. It's taken 6 months and several paid 'professionals', and then you come in here... swinging your minimalist genius. One line. One single line. And it's done.
If you are in London, let's lift a beer or five together.
Thank you sir. Thank you!

  • Shared storage client timed out error

    Hello everybody, please help
I have been at this now for about 2 days and still can't find the source of my issue using FCP to render a project through Compressor to multiple Macs using Qmaster.
What is happening is that I start the render and it gets sent to the second computer. I can see the processor ramping up, then after (+-) 30 seconds I get the error below, and the render fails.
    Here is my set up:
    MacBook1 as the cluster controller
    Macbook2 as the service node
connected via a gigabit switch using an Ethernet cable
    The error I keep getting is this ("Macintosh-7" is the name of MacBook1, "chikako-komatsus-computer" is the name of MacBook2):
3x HOST [chikako-komatsus-computer.local] Shared storage client timed out while subscribing to "nfs://Macintosh-7.local/Volumes/portable/Cluster_scratch/4AD40699-B5BD6A1A/shared"
The volume mentioned above in the error is a shared FireWire drive connected to MacBook1. It has full read and write privileges for everyone. This drive is where the project file and all the source video are located. MacBook1, via the Qmaster system preferences, points to a folder "Cluster_scratch" on this drive.
I have been mounting this drive from MacBook2 using the Connect to Server option in the Finder under the Go menu. This method seems to only let me connect to this drive using AFP; is this my problem?
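As an aside, since the error above shows an nfs:// URL: the Finder's Connect to Server dialog also accepts NFS URLs, so something like the line below would mount over NFS instead of AFP (assuming the FireWire drive is actually exported over NFS, which is not a given):
nfs://Macintosh-7.local/Volumes/portable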
    I have "allowed all incoming traffic" of the fire wall on the MacBook1
    What is funny (not really) is that i can Compress a previously compiled video with the cluster If I don't go thou Final Cut Pro!
    Any help with this would be greatly appreciated.
    Thanks

    I also administer a managed cluster with 6 machines, and have been using it successfully for almost a year now. But the only encoding that is submitted is directly through Compressor, never via FCP.
With QMaster, it sees a QuickCluster and a managed cluster the same way. While they are set up differently, the principle is the same: QMaster only sees services.
    Exporting out of FCP to any cluster has always been slow. If you want to harness the power of distributed encoding, you could export a Quicktime reference file and take that into Compressor to be submitted to the cluster for encoding.

  • Problem of using OCFS2 as shared storage to install RAC 10g on VMware

    Hi, all
I am installing a RAC 10g cluster with two Linux nodes on VMware. I created a shared 5G disk for the two nodes as a shared storage partition. Using the OCFS2 tools, I formatted this shared storage partition and successfully auto-mounted it on both nodes.
Before installing, I used the command "runcluvfy.sh stage -pre crsinst -n node1,node2" to check the installation prerequisites. Everything is OK except an error, "Could not find a suitable set of interfaces for VIPs."; by searching the web, I found this error can be safely ignored.
OCFS2 works well on both nodes: I formatted the shared partition as an ocfs2 file system and configured o2cb to auto-start the OCFS service. I mounted the shared disk on both nodes at the /ocfs directory. By adding an entry to both nodes' /etc/fstab, this partition is auto-mounted at system boot. I can access files in the shared partition on both nodes.
My problem is that when installing Clusterware, at the stage "Specify Oracle Cluster Registry", I enter "/ocfs/OCRFILE" for the OCR location and "/ocfs/OCRFILE_Mirror" for the OCR mirror location, but I get the following error:
    ----- Error Message ----
    The location /ocfs/OCRFILE, entered for the Oracle Cluster Registry(OCR) is not shared across all the nodes in the cluster. Specify a shared raw partition or cluster file system that is visible by the same name on all nodes of the cluster.
    ------ Error Message ---
I don't know why the OUI can't recognize /ocfs as a shared partition. On both nodes, using the command "mounted.ocfs2 -f", I get the result:
    Device FS Nodes
    /dev/sdb1 ocfs2 node1, node2
What could be wrong? Any help is appreciated!
Additional information:
    1) uname -r
    2.6.9-42.0.0.0.1.EL
    2) Permission of shared partition
    $ls -ld /ocfs/
    drwxrwxr-x 6 oracle dba 4096 Aug 3 18:22 /ocfs/
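One aside worth checking: for OCR and voting files on OCFS2, Oracle's 10g install guides require the datavolume and nointr mount options on every node. A hypothetical /etc/fstab line (device and mount point from the post above, options per the install guide):
/dev/sdb1  /ocfs  ocfs2  _netdev,datavolume,nointr  0 0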

    Hello
I am not sure how relevant the following solution is to your problem (regardless of when it was originally posted, it may help someone reading this thread). Here is what I faced and how I fixed it:
I was setting up RAC using VMware. I prepared rac1 [installed the OS, configured disks, users, etc.] and then made a copy of it as rac2. So far so good. When, as per the guide I was following for the RAC configuration, I started the OCFS2 configuration, I faced the following error on rac2 when I tried to mount /dev/sdb1:
    ===================================================
[root@rac2 ~]# mount -t ocfs2 -o datavolume,nointr /dev/sdb1 /ocfs
ocfs2_hb_ctl: OCFS2 directory corrupted while reading uuid
mount.ocfs2: Error when attempting to run /sbin/ocfs2_hb_ctl: "Operation not permitted"
    ===================================================
After a lot of "googling around", I finally bumped into a page where a kind person had posted the solution, which said [in my words below, with more detail]:
    o shutdown both rac1 and rac2
    o in VMWare, "edit virtual machine settings" for rac1
    o remove the disk [make sure you drop the correct one]
o recreate it and select "allocate all disk space now" [with the same name and in the same directory where it was before]
o start rac1, log in as "root" and run "fdisk /dev/sdb" [or whichever is/was your disk where you are installing OCFS2]
    Once done, repeat the steps for configuring OCFS2. I was successfully able to mount the disk on both machines.
    All this problem was apparently caused by not choosing "allocate all disk space now" option while creating the disk to be used for OCFS2.
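For anyone scripting this instead of using the GUI, a preallocated disk can also be created directly with vmware-vdiskmanager (file name and size are examples; -t 2 selects a preallocated single-file disk):
vmware-vdiskmanager -c -s 5GB -a lsilogic -t 2 shareddisk.vmdk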
    If you still have any questions or problem, email me at [email protected] and I'll try to get back to you at my earliest.
    Good luck!
    Muhammad Amer
    [email protected]

  • RAC with OCFS2 shared storage

    Hi all
I want to create a RAC env in Oracle VM 2.2 (one server), with local disks which I used to create an LVM for the OCR in the guest:
- two guests with Oracle Enterprise Linux 5
- both have the ocfs2 rpm installed
When I want to create the shared storage for the OCR I configure cluster.conf:
    - service o2cb configure -> all ok -> on both nodes
    - service o2cb enable -> ok -> on both nodes
    - then mkfs.ocfs2 in node1
    - mount -t ocfs2 in node1
    - mount -t ocfs2 in node 2:
    [root@lin2 ~]# mount -t ocfs2 /dev/sde1 /ocr
    mount.ocfs2: Transport endpoint is not connected while mounting /dev/sde1 on /ocr. Check 'dmesg' for more information on this error.
    Jun 27 22:57:23 lin2 kernel: (o2net,1454,0):o2net_connect_expired:1664 ERROR: no connection established with node 0 after 30.0 seconds, giving up and returning errors.
    Jun 27 22:57:23 lin2 kernel: (mount.ocfs2,9327,0):dlm_request_join:1036 ERROR: status = -107
    Jun 27 22:57:23 lin2 kernel: (mount.ocfs2,9327,0):dlm_try_to_join_domain:1210 ERROR: status = -107
    Jun 27 22:57:23 lin2 kernel: (mount.ocfs2,9327,0):dlm_join_domain:1488 ERROR: status = -107
    Jun 27 22:57:23 lin2 kernel: (mount.ocfs2,9327,0):dlm_register_domain:1754 ERROR: status = -107
    Jun 27 22:57:23 lin2 kernel: (mount.ocfs2,9327,0):ocfs2_dlm_init:2808 ERROR: status = -107
    Jun 27 22:57:23 lin2 kernel: (mount.ocfs2,9327,0):ocfs2_mount_volume:1447 ERROR: status = -107
    Jun 27 22:57:23 lin2 kernel: ocfs2: Unmounting device (8,65) on (node 1)
Can you help me find where I am making a mistake?
Thank you, Brano

Please find the answer at the link below:
    http://wiki.oracle.com/page/Oracle+VM+Server+Configuration-usingOCFS2+in+a+group+of+VM+hosts+to+share+block+storage
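For what it's worth, a few generic checks for that "Transport endpoint is not connected" error (port 7777 is the documented o2net default; the rest is a sketch, not a known fix for this exact case):
service o2cb status              # cluster stack must be online on both nodes
cat /etc/ocfs2/cluster.conf      # must be identical on node1 and node2
iptables -L -n | grep 7777       # o2net interconnect traffic must not be firewalled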
