IPCC - Relation between CSQ and Agent

Hello,
I'm trying to develop a report where I'll place metrics per CSQ and, inside that, per Agent.
For example, I want to retrieve the login time of agent X in CSQ Y; the number of calls handled by agent A in CSQ B.
Is that possible? Right now I've only collected metrics by CSQ or by agent; I could not find the cross between these two elements.
Can anyone help me? Where can I find that relation in the db_cra database?
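In case it helps anyone answering, the kind of join I imagine is needed for the calls-handled part would be something like the sketch below. The table and column names (AgentConnectionDetail, ContactQueueDetail, ContactServiceQueue, Resource and their keys) are only my guesses based on the UCCX historical-reporting documentation, so please verify them against your db_cra version:
SELECT r.resourceName   AS agent,
       csq.csqName      AS csq,
       COUNT(*)         AS calls_handled      -- assumption: one AgentConnectionDetail row per handled leg
FROM   AgentConnectionDetail acd              -- agent leg of the call
JOIN   ContactQueueDetail cqd                 -- queue leg of the same call
       ON  cqd.sessionID     = acd.sessionID
       AND cqd.sessionSeqNum = acd.sessionSeqNum
       AND cqd.profileID     = acd.profileID
       AND cqd.nodeID        = acd.nodeID
       AND cqd.qIndex        = acd.qIndex
JOIN   ContactServiceQueue csq
       ON  csq.recordID      = cqd.targetID   -- assumption: targetID points at the CSQ record
JOIN   Resource r
       ON  r.resourceID      = acd.resourceID
WHERE  acd.talkTime > 0                       -- handled (answered) legs only
GROUP BY r.resourceName, csq.csqName;
Login time, on the other hand, seems to be recorded per agent (agent state records) rather than per CSQ, so that part may not have a direct cross in the schema.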
Thank you!
Regards,
Filipe - Portugal

Similar Messages

  • Relation between sapstartsrv and saphostexec

    I am trying to implement note 1826767 which says:
    Resolution
    Configure https for the sapstartsrv process. The necessary steps are described in the chapter 'Configuring HTTPS for the SAP Host Agent' in the SAP HANA Automated update guide.
    My question is: what is the relation between sapstartsrv and saphostexec here, i.e. why is configuring HTTPS for saphostexec equivalent to configuring HTTPS for sapstartsrv? sapstartsrv and saphostexec are two different executables.
    Thanks!

    Hi Christina,
    In newer versions of SAP, saphostexec is an executable that runs under root (UNIX) or the Local System account (Microsoft Windows). It controls all of the functions for which a special user of this type is required, such as the operating system collector saposcol and sapacosprep. It is connected to sapstartsrv in host mode via a local socket, which ensures quick and secure communication, and it is also started during the startup of the host.
    Refer to the SAP Help topic "Monitoring Hosts with SAPHostControl and saphostexec - Infrastructure of the SAP NetWeaver Management Agents" in the SAP Library.
    When you check the status of saphostexec using the command
    /usr/sap/hostctrl/exe/saphostexec -status
    you will see the following processes:
    saphostexec running (pid = 4729)
    sapstartsrv running (pid = 4732)
    saposcol running (pid = 0)
    I hope the relation is now clear to you.
    Hope this helps.
    Regards,
    Deepak Kori

  • Table relations between vbrk and bkpf for  Accounting Document Number

    Hello,
    I am using four tables to get data into my program: VBRK, VBRP, KONV and BKPF.
    I want to get BELNR from BKPF. I found a relation between VBRK and BELNR, but in the VBRK table the value of BELNR is initial.
    Can anybody tell me how to relate VBRK and BKPF, or how to get the accounting document number (BELNR) from BKPF for a billing document (VBELN)?
    Regards,
    Soniya S.

    Hi,
    Check this; it is working for me.
    * The billing document number (VBELN) is stored in BKPF-AWKEY for AWTYP 'VBRK'
    DATA: wa_awkey LIKE bkpf-awkey,
          wa_belnr LIKE bkpf-belnr,
          length   TYPE i.
    length = strlen( it_vbrk-vbeln ).
    IF length = 10.
    * Document number already has its full 10 characters
      MOVE it_vbrk-vbeln TO wa_awkey.
    ELSE.
    * Pad with a leading zero so the key matches BKPF-AWKEY
      CONCATENATE '0' it_vbrk-vbeln INTO wa_awkey.
    ENDIF.
    SELECT SINGLE belnr FROM bkpf INTO wa_belnr
      WHERE awkey = wa_awkey
        AND awtyp = 'VBRK'
        AND blart = 'RV'.
    it_final-acc_doc = wa_belnr.
    CLEAR: wa_belnr, wa_awkey.
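    For readers more comfortable with SQL, the same relation as a hedged generic sketch (client and company-code fields omitted; BKPF-AWKEY may carry leading zeros depending on how VBELN is stored, as handled in the ABAP above):
    SELECT k.BELNR                    -- accounting document number
    FROM   BKPF k
           JOIN VBRK v
             ON k.AWKEY = v.VBELN     -- billing document number used as the reference key
    WHERE  k.AWTYP = 'VBRK'           -- reference transaction: billing document
      AND  k.BLART = 'RV';            -- document type filter used in the snippet above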

  • How to make relation between  gl_je_lines  and wip_transaction_accounts

    I am trying to write scripts that relate GL to WIP and to material transactions: gl_je_lines to wip_transaction_accounts, and gl_je_lines to mtl_material_transactions.
    1.
    SELECT
    (SELECT meaning
    FROM mfg_lookups
    WHERE lookup_type = 'WIP_TRANSACTION_TYPE'
    AND lookup_code =
    (SELECT TRANSACTION_TYPE
    FROM wip_transactions
    WHERE transaction_id = wta.transaction_id
    ) "Transaction Type" ,
    gjb.NAME "Journal Batch Name" ,
    gjh.NAME "Journal Name" ,
    gjh.je_source "JE Source" ,
    gjh.je_category "JE Category" ,
    glp.period_num "GL Month" ,
    glp.period_year "GL Year" ,
    gjh.default_effective_date "GL Date" ,
    NVL(gje.accounted_dr,0) -NVL( gje.accounted_cr,0) "GL Line Amount" ,
    gje.description "GL Line Description" ,
    (SELECT organization_code
    FROM mtl_parameters
    WHERE organization_id = wta.organization_id
    ) "ORG Name" ,
    NULL "Vendor/Customer Name" ,
    NULL "SO/PO Number" ,
    NULL "Reference Number" ,
    NULL "AP/AR Invoice Number" ,
    NULL "Doc Sequence Value" ,
    NULL "Invoice Type Lookup Code" ,
    NULL "Check Number" ,
    NULL "Line Type" ,
    NULL "Category" ,
    NVL (wta.base_transaction_value, 0) "Transaction Amt" ,
    NULL "AR/PO Receipt Number" ,
    NULL "Applied Invoice Number" ,
    (SELECT a.segment1
    FROM mtl_system_items_b a ,
    wip_discrete_jobs b
    WHERE a.inventory_item_id = b.primary_item_id
    AND a.organization_id = b.organization_id
    AND b.wip_entity_id = wta.wip_entity_id
    AND b.organization_id = wta.organization_id
    ) "Item Name" ,
    gcc.segment1 "Company" ,
    gcc.segment2 "Department" ,
    gcc.segment3 "Account" ,
    (SELECT description
    FROM fnd_flex_values_vl fnd
    WHERE flex_value_set_id = 1009707
    AND TO_CHAR (fnd.flex_value) = gcc.segment3
    ) "Account Description" ,
    gcc.segment4 "Intercompany" ,
    NULL "JE Category Description" ,
    NULL "AP Invoice Line Description"
    TRUNC (wta.transaction_date) "JE Creation/Inv Trans Date" ,
    NULL "JE Created By" ,
    NULL "Reversal Flag" ,
    NULL "Reason Code" ,
    NULL "Subinventory Name" ,
    wta.primary_quantity "Quantity" ,
    NVL (wta.base_transaction_value, 0) "Value" ,
    gje.context ,
    gje.attribute1 ,
    gje.attribute2 ,
    gje.attribute3 ,
    gje.attribute4 ,
    gje.attribute5 ,
    DECODE(gjh.status,'P','Posted','U','Unposted','Error') "Post Status"
    FROM gl_je_lines gje ,
    gl_je_headers gjh ,
    gl_je_batches gjb ,
    wip_transaction_accounts wta ,
    gl_code_combinations gcc ,
    gl_periods glp
    WHERE 1 =1
    AND gje.je_header_id = gjh.je_header_id
    AND gjh.je_batch_id = gjb.je_batch_id
    AND UPPER (gjh.je_source) = 'INVENTORY'
    AND gcc.code_combination_id = gje.code_combination_id
    AND wta.reference_account = gje.code_combination_id
    AND TRUNC(wta.transaction_date) BETWEEN glp.START_DATE AND glp.END_DATE
    AND gje.reference_1 = wta.gl_batch_id
    AND gje.gl_sl_link_table = 'WTA'
    AND glp.period_name = gjh.period_name
    AND glp.period_set_name = 'OVT_US_CAL'
    ==
    2.
    SELECT mtt.transaction_type_name "Transaction Type" ,to_char(mmt.transaction_date,'mm/dd/yyyy hh:mi:ss') as teas, mta.transaction_id,mta.organization_id, mmt.organization_id,
    gjb.NAME "Journal Batch Name" ,
    gjh.NAME "Journal Name" ,
    gjh.je_source "JE Source" ,
    gjh.je_category "JE Category" ,
    glp.period_num "GL Month" ,
    glp.period_year "GL Year" ,
    gjh.default_effective_date "GL Date" ,
    NVL(gje.accounted_dr,0) - NVL(gje.accounted_cr,0) "GL Line Amount" ,
    gje.description "GL Line Description" ,
    (SELECT organization_code
    FROM mtl_parameters
    WHERE organization_id = mta.organization_id
    ) "ORG Name" ,
    NULL "Vendor/Customer Name" ,
    NULL "SO/PO Number" ,
    mmt.TRANSACTION_REFERENCE "Reference Number" ,
    NULL "AP/AR Invoice Number" ,
    NULL "Doc Sequence Value" ,
    NULL "Invoice Type Lookup Code" ,
    NULL "Check Number" ,
    NULL "Line Type" ,
    NULL "Category" ,
    NVL (mta.base_transaction_value, 0) "Transaction Amt" ,
    NULL "AR/PO Receipt Number" ,
    NULL "Applied Invoice Number" ,
    (SELECT segment1
    FROM mtl_system_items_b
    WHERE inventory_item_id = mmt.inventory_item_id
    AND organization_id = mmt.organization_id
    ) "Item Name" ,
    gcc.segment1 "Company" ,
    gcc.segment2 "Department" ,
    gcc.segment3 "Account" ,
    (SELECT description
    FROM fnd_flex_values_vl fnd
    WHERE flex_value_set_id = 1009707
    AND TO_CHAR (fnd.flex_value) = gcc.segment3
    ) "Account Description" ,
    gcc.segment4 "Intercompany" ,
    NULL "JE Category Description" ,
    NULL "AP Invoice Line Description"
    TRUNC (mta.transaction_date) "JE Creation/Inv Trans Date" ,
    NULL "JE Created By" ,
    NULL "Reversal Flag" ,
    (SELECT MGD.SEGMENT1
    FROM MTL_GENERIC_DISPOSITIONS MGD,
    MTL_MATERIAL_TRANSACTIONS MMTT
    WHERE MGD.DISPOSITION_ID = MMTT.TRANSACTION_SOURCE_ID
    AND MMTT.TRANSACTION_SOURCE_TYPE_ID = 6
    AND MGD.ORGANIZATION_ID = MMTT.ORGANIZATION_ID
    AND MMTT.TRANSACTION_ID = MMT.TRANSACTION_ID
    ) "Reason Code" ,
    mmt.SUBINVENTORY_CODE "Subinventory Name" ,
    mta.primary_quantity "Quantity" ,
    NVL (mta.base_transaction_value, 0) "Value" ,
    gje.context ,
    gje.attribute1 ,
    gje.attribute2 ,
    gje.attribute3 ,
    gje.attribute4 ,
    gje.attribute5 ,
    DECODE(gjh.status,'P','Posted','U','Unposted','Error') "Post Status"
    FROM gl_je_lines gje ,
    gl_je_headers gjh ,
    gl_je_batches gjb ,
    mtl_transaction_accounts mta ,
    mtl_material_transactions mmt ,
    mtl_transaction_types mtt ,
    gl_code_combinations gcc ,
    gl_periods glp
    WHERE 1 =1
    AND mta.transaction_id = mmt.transaction_id
    AND gje.je_header_id = gjh.je_header_id
    AND gjh.je_batch_id = gjb.je_batch_id
    AND UPPER (gjh.je_source) = 'INVENTORY'
    AND gcc.code_combination_id = gje.code_combination_id
    AND mta.reference_account = gje.code_combination_id
    AND TRUNC(mta.transaction_date) BETWEEN glp.START_DATE AND glp.END_DATE
    and mta.request_id=mmt.request_id
    and mta.inventory_item_id=mmt.inventory_item_id
    AND gje.reference_1 = mta.gl_batch_id
    AND gje.gl_sl_link_table = 'MTA'
    AND mtt.transaction_type_id = mmt.transaction_type_id
    AND glp.period_name = gjh.period_name
    AND glp.period_set_name = 'OVT_US_CAL'
    When gl_je_lines has multiple lines for the same transaction_id, this script returns the wrong information. In our setup, all wip_transaction_accounts.gl_sl_link_id and mtl_transaction_accounts.gl_sl_link_id values are null.
    Can anyone help?

    Hello.
    How are you detecting the inconsistencies? Are you comparing the reports' results?
    Octavio
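    For reference, the usual drill-down from a journal line to its subledger row goes through gl_import_references. A hedged sketch (column names as in typical 11i schemas; note that it relies on gl_sl_link_id being populated, which the poster reports is not the case in this setup):
    SELECT gje.je_header_id,
           gje.je_line_num,
           mta.transaction_id,
           mta.base_transaction_value
    FROM   gl_je_lines gje
           JOIN gl_import_references gir
             ON  gir.je_header_id = gje.je_header_id
             AND gir.je_line_num  = gje.je_line_num
           JOIN mtl_transaction_accounts mta
             ON  mta.gl_sl_link_id = gir.gl_sl_link_id
    WHERE  gir.gl_sl_link_table = 'MTA';   -- use 'WTA' and wip_transaction_accounts for the WIP case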

  • Database table/FM to find relation between plant and company code in SRM

    Hi,
    I have a requirement where I need to determine the relation between the plant and company code in SRM.
    The plant and company code both belong to R/3. We are replicating the plant by using the report BBP_LOCATIONS_GET_ALL.
    This plant is now available as a BP in SRM. But from which SRM table or using which SRM FM can I get the relation between this and the company code?
    Thanks,
    Srivatsan

    Hi
    Check table BBP_LOCMAP (plants) - the mapping table Business Partner > System > Location.
    BBP_COMPCODE_FAV - user-specific favorites for permitted company codes.
    Hope this will help.
    Please reward suitable points, in case it suits your requirements.
    Regards
    - Atul

  • Relation between Rollup and compression.

    Hi All,
    Is there a relation between rollup and compression? I.e., if I compress the cube, will the rollup job be faster?

    Hi,
    Thanks for all your replies. Now the picture is crystal clear.
    1. A compression job moves data from the F table to the E table. This also applies to aggregates. If you schedule a compression job for a cube, the aggregates are automatically compressed (you can also compress the aggregates of a cube without compressing the cube itself; there are programs available for this).
    2. Assume you have not done compression. If you load data into the base cube, it takes more time to create the indexes after loading.
    3. Assume you have done compression. If you load data into the base cube, it takes less time than in point 2 to create the indexes after loading.
    Along the same lines, if the aggregates of a cube are compressed before rollup, the rollup will be faster. This is exactly what I experienced: I had a cube with 1,000 requests that had never been compressed, and its rollup job used to take around 60,000 seconds. I compressed 900 requests, and now the rollup job finishes in 2,400 seconds. So the conclusion is the following:
    When the aggregates of a cube are compressed, the rollup job runs faster.
    Message was edited by: Tej Trivedi

  • Table that gives the relation between plant and company code

    Hi gurus,
    I have a plant number, and using this plant value I need to get the company code.
    Is there any table that gives the relation between plant and company code, so that I can get the company code details?
    Thanks in advance.

    Hi Bhanuphani,
    Use table T001K, where BWKEY is the valuation area (typically the plant) and BUKRS is the company code.
    Reward if useful
    Thanks Arjun

  • Is any relation between customerNo and Plant in R/3

    hi,
    In BAPI_SALESORDER_CREATEFROMDAT2 I can get the net price by supplying import parameters such as customer number, sales area, material, etc. That means the customer should be assigned to a plant (one sales area can have many plants, and the material price can vary from plant to plant). Please give me a clear picture of how the net price is calculated in BAPI_SALESORDER_CREATEFROMDAT2.
    regards
    Guru

    There is a relation between customer and plant in table KNA1 (General Data in Customer Master):
    KNA1-KUNNR (customer number)
    KNA1-WERKS (plant)
    Kindly reward the points

  • Is There any Relation Between PCD and SSO

    Hi,
    Is there any relation between PCD and SSO? When I try to connect from the portal to R/3, I get a PCD error in the user mapping.
    Please tell me how to rectify that.
    Regards,
    Jagadish Babu Kanikanti.

    Hi Jagadish
    The PCD is only a central persistence layer for storing portal objects. There is no relation between PCD and SSO that could create a problem. You can, however, check for the PCD error in this manner:
    1. In the system administration role, choose System Administration -> Support. The Support Desk appears.
    2. Select the area Portal Content Directory.
    3. Click on PCD Configuration in the test and configuration tools. The next screen shows all the parameter values currently maintained for the PCD.
    4. To reload the configuration, choose Reload.
    Hope that helped.
    Best Regards
    Priya

  • Can any one explain me the relation between BDC and reports events?

    Hi experts,
    Can anyone explain the relation between BDC and report events? Why do we use report events in BDC programs?
    Do report events occur in each and every area of ABAP, i.e. creating custom IDocs, Smart Forms, SAPscript, dialog programs, module pool techniques?
    Thanks in advance

    The forums are expert forums. So the first thing I would do is change your name.
    It's like entering a grand prix in a car with a "Student Driver" sign.
    Rob

  • Urgent :Relation between vendor and business area

    < MODERATOR:  Message locked.  Please read the [Rules of Engagement|https://www.sdn.sap.com/irj/sdn/wiki?path=/display/home/rulesofEngagement] before posting next time. >
    Hi ,
    Is there any table or transaction that will give a one-to-one relation between vendor and business area?
    Thnks and Regards,
    Anand

    Hi,
    In this situation you have to create a new Z table and a Z transaction that links the two; that will solve your problem. It is better to take an ABAPer's help; otherwise you can use transaction SE11 to create the table and SE93 for the transaction code.

  • How to get data from three tables (A, B, C) with a one-to-many relation between A and B and a one-to-many relation between B and C

    I have three tables A, B, and C. There is a one-to-many relation between A and B, and a one-to-many relation between B and C. How will I get data from these three tables?

    Check if this helps:
    SELECT *                          -- you can always frame your column set
    FROM tableA a
    LEFT JOIN tableB b ON b.aid = a.aid
    LEFT JOIN tableC c ON c.bid = b.bid;
    This is just a general query. However, we can help you a lot more if you can post the DDL, sample data, and required output.
    Thanks,
    Jay
    <If the post was helpful mark as 'Helpful' and if the post answered your query, mark as 'Answered'>
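    To illustrate the fan-out of a one-to-many join, here is a minimal hypothetical example (table and column names are invented for illustration only):
    CREATE TABLE tableA (aid INT PRIMARY KEY, aname VARCHAR(20));
    CREATE TABLE tableB (bid INT PRIMARY KEY, aid INT, bname VARCHAR(20));
    CREATE TABLE tableC (cid INT PRIMARY KEY, bid INT, cname VARCHAR(20));
    INSERT INTO tableA VALUES (1, 'A1');
    INSERT INTO tableB VALUES (10, 1, 'B1'), (11, 1, 'B2');     -- two B rows for the one A row
    INSERT INTO tableC VALUES (100, 10, 'C1'), (101, 10, 'C2'); -- two C rows for B1, none for B2
    SELECT a.aname, b.bname, c.cname
    FROM tableA a
    LEFT JOIN tableB b ON b.aid = a.aid
    LEFT JOIN tableC c ON c.bid = b.bid;
    -- returns three rows: (A1, B1, C1), (A1, B1, C2), (A1, B2, NULL)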

  • What are the Relations between Journalizing and IKM?

    What is the best method to use in the following scenario:
    I have about 20 source tables with large amounts of data.
    I need to create interfaces that join the source tables into target tables.
    The source tables are inserted into every few seconds, with hundreds to thousands of rows.
    There can be a gap of a few seconds between the inserts into the different tables that should be joined.
    The source and target tables are on the same Oracle instance and schema.
    I want to understand the roles of 'Journalizing CDC' and 'IKM - Incremental Update', and how I can use them in my scenario.
    In general, what are the relations between 'Journalizing' and 'IKM'?
    Should I use both of them? Or maybe it is better to delete and insert into the target tables?
    I want to understand the role of 'Journalizing CDC'.
    Can 'IKM - Incremental Update' work without 'Journalizing'?
    Does 'Journalizing' need to have a PK on the tables?
    What should I do if I can't put a PK (there can be multiple identical rows)?
    Thanks in advance Yael

    Hi Yael,
    I will try and answer as many of your points as I can in one post :-)
    Journalizing is a way of tracking only the changed data in your source system. If your source tables had a date_modified column, you could always use that as a filter when scanning for changes rather than CDC. Log-based CDC (Asynchronous in ODI, Logminer/Streams or GoldenGate for example) removes the overhead of placing a trigger on the source table to track changes, but be aware that it does not fully remove the need to scan the source tables.
    In answer to your question about primary keys: Oracle CDC with ODI will create an unconditional log group on the columns that you have defined in ODI as your PK. The PK columns are tracked by the database and presented in a journal table (J$<source_table_name>); this journal table is joined back to the source table via a journalizing view (JV$<source_table_name>) to get the rest of the row (i.e. the non-PK columns). So be aware that when ODI comes around to get all the data in the journalizing view (i.e. inserts, updates and deletes), the source database performs a join back to the source table. You can negate this by specifying ALL source table columns as your PK in ODI - this forces all columns into the unconditional log group, the journal table, etc. You will then need to tweak the JKM to change the syntax sent to the database when starting the journal - I have done this in the past, using a flexfield on the datastore to toggle 'Full Column' / 'Primary Key Cols' in the JKM setup (there are a few E-Business Suite tables with no primary key, so we had to do this). The only problem with this approach is that with no PK you need to make sure you only get the 'last' update, and in the right order, to apply to your target tables; otherwise you might process the update before the insert, for example, and be out of sync.
    So JKMs provide a mechanism for only changed data to be provided to ODI. If you want to handle deletes in your source table, CDC is useful (otherwise you do not capture the delete with a normal LKM / IKM setup).
    IKM Incremental Update can be used with or without JKMs; it is for integrating data into your target table. Typically it will do a NOT EXISTS or a MINUS when loading the integration table (I$<target_table_name>) to ensure you only get changed rows on the load into the target.
    user604062 wrote:
    I want to understand the role of 'Journalizing CDC' and 'IKM - Incremental Update' and how can I use it in my scenario?
    Hopefully I have explained it above. It is the type of thing you really need to play around with, thoroughly reviewing the operator logs to see what is actually going on (I think this is a very good guide to setting it up: http://soainfrastructure.blogspot.ie/2009/02/setting-up-oracle-data-integrator-odi.html).
    In general what are the relations between 'Journalizing' and 'IKM'?
    The JKM simply presents (only) changed data to ODI. It removes the need for you to decide 'how' to get the updates and removes the need for costly scans on the source table (full source-to-target comparisons, scanning for updates based on a last-update date, etc.).
    Should I use both of them? Or maybe it is better to delete and insert into the target tables?
    Delete and insert into the target is fine, but ask yourself how you identify which rows to process. Inserts and updates are generally OK; to spot a delete you need to compare the tables in full (target table minus source table = deleted rows). Do you want to copy the whole source table every time to perform this? Are they in the same database?
    I want to understand what is the role of 'Journalizing CDC'?
    It is the ODI mechanism for configuring, starting and stopping the change data capture process in the source systems. There are different KMs for separate technologies, and a few to choose from for Oracle (Triggers (Synchronous), Streams / Logminer (Asynchronous), GoldenGate, etc.).
    Can 'IKM - Incremental Update' work without 'Journalizing'?
    Yes, of course. Without CDC your process would look something like:
    Source table ----< LKM >---- Collection table (C$) ----< IKM >---- Integration table (I$) ----< IKM >---- Target table
    With CDC your process looks like:
    Source journal (J$ table with JV$ view) ----< LKM >---- Collection table (C$) ----< IKM >---- Integration table (I$) ----< IKM >---- Target table
    As you can see, it is the same process after the source table (there is an option in the interface to enable the J$ source; the IKM step changes with CDC, as you can use 'Synchronise Journal Deletes').
    Does 'Journalizing' need to have PK on the tables?
    Yes - at least a logical PK in the datastore; see my reply at the top for the reasons why (log groups, joining the J$ table back to the source table, etc.).
    What should I do if I can't put PK (there can be multiple identical rows)?
    Either talk to the source system people about adding one, or be prepared to change the JKM (and maybe the LKM and IKMs); you can try putting all columns in the PK in ODI. Ask yourself this: if you have 10 identical rows in your source and target tables and one row gets updated, how can you identify which row in the target table to update?
    Thanks in advance, Yael
    A lot to take in. As I advised, I would recommend you get a little test area set up, and also read the Oracle database documentation on CDC, as it covers a lot of the theory that ODI is simply implementing.
    Hope this helps!
    Alastair

  • What is the relation between KTOPL and 0CHRT_ACCTS_ATTR? both Chart of Acct

    Hi
    If an R/3 field such as KTOPL (in datasource 0fi_gl_4), which is a "Chart of Accounts" field, is extracted to BI as part of 0fi_gl_4,
    is there a need to separately extract 0CHRT_ACCTS_ATTR & 0CHRT_ACCTS_TEXT, which are also "Chart of Accounts" objects?
    What is the relation between KTOPL and 0CHRT_ACCTS_ATTR & 0CHRT_ACCTS_TEXT in R/3?
    What is the relation between KTOPL and 0CHRT_ACCTS_ATTR & 0CHRT_ACCTS_TEXT in BI?
    Thx

    Hi,
    I understand these things best with examples, so let me see if I get your point.
    So in the transaction data from 0fi_gl_4 there will be a value for KTOPL, e.g. 450009 as a G/L account number? (Values for chart of accounts should be G/L accounts, right?)
    But before this transaction data is loaded, preferably, 0CHRT_ACCTS_ATTR and 0CHRT_ACCTS_TEXT should have been extracted to BI and their data loaded; this will be the master data, and it should contain, for example:
    .._ATTR: 450009
    .._TEXT: Petty cash Expenses
    Is my understanding correct?
    Thanks

  • What is the relation between bw and netweaver

    Hi all,
    What is the relation between BW and NetWeaver? Can anyone give an answer for this?

    Hi
    SAP NetWeaver
    The SAP NetWeaver technology platform is the open integration and
    application platform that reduces total cost of ownership (TCO) across the
    entire IT landscape.
    SAP NetWeaver integrates and aligns people, information, and business processes across technologies and organizations.
    BW is a part of this framework, under Information Integration.
    The framework also contains other components, such as:
    o     Enterprise Portals (EP)
    o     Exchange Infrastructure (XI)
    o     Master data management (MDM)
    o     Solution manager
    o     Web Application server (WAS)
    o     xApps
    Hope this solves your question. Ask if you have any further doubts.
    Sonal...
