Transformation deleted rollback

Dear all,
Unfortunately, we deleted one big transformation in the development system, and now we realize that we need it. How can we recover the deleted transformation?
The deletion is saved under a transport request number that has not yet been released.
I would appreciate any input on this.
Thanks,
Rob

Hi,
If this transformation was transported to other systems earlier, one way of doing this is to ask your Basis team to import back the transport request that moved it to the other system. I have done this in the past; it can solve your problem, provided the transformation was transported earlier.
Thanks,
Arminder Singh

Similar Messages

  • Transformation delete

    Hi All,
    In the DEV system I wanted to delete a transformation. When I tried deleting it, it was under someone else's transport request, so a new task would have been created. Since we did not want to save it under that transport request, we clicked Cancel. After that, the transformation was deleted from the system, but we are unable to capture the deleted transformation in any request.
    Please suggest how we can capture the deleted transformation, or where it will be captured.
    Thanks,
    Dinesh

    Hi Prashanth,
    Thanks for reply.
    In the old transport request, the TRFN changes made and collected earlier are shown. Please confirm whether my deletion will appear as an additional task for the deletion, or whether the old request will become a transformation-delete request, so that if I transport this old request number it will delete the same transformation in the test system.
    Please suggest.
    Thanks,
    Dinesh

  • 'D' Value in UPDMOD field getting lost in Transformation, Delete not working

    Hello all,
    We are using the datasource 0PU_IS_PS_32 for Funds Management. The datasource has the UPDMOD field, which is basically 0RECORDMODE, so when a document is deleted on the R/3 side we can pick up the deletion.
    Here is what is happening in the delta InfoPackage: I get the delta records, and I see a "D" in the UPDMOD field, but that "D" value is lost when I load the data from the PSA to the DSO through the transformation, using a DTP.
    After the request is loaded into the DSO, but before activating it, I check the New Data table, and the field no longer has the value "D". So when I activate the request, the system does not know that those records are "D" (delete) records, and they are not deleted from the DSO.
    Has anyone faced this before, and how can I fix it? The transformation is just a one-to-one mapping, with no routines or formulas of any kind.
    Thanks for your help in advance.

    Hi,
    It is mandatory that you map UPDMOD to 0RECORDMODE. In the transformation there will be two rule groups: one is your normal mapping, and the other should be used to map 0RECORDMODE.
    Regards,
    Raghavendra.

  • Transformation in combination with a search query

    Using DPS 6.3, I have merged an Active Directory and a Domino directory. This all seems to work fine. The only issue I still have is that, for some reason, the transformations are not applied when performing a search query. An example: in AD, groups can be found by searching for the objectclass "group"; in Domino the objectclass is called "dominoGroup". To make the objectclasses match, I made a transformation that removes "domino" from the Domino objectclass. I expected that searching for the objectclass "group" would now return the groups from both AD and Domino, but this is not the case.
    Is there a solution for this?

    Sorry for the late response, but I have not been able to work on this issue for the last few weeks. I would appreciate it if you could have a look; it's the first time I have ever worked with DPS. The space was insufficient to post the complete conf.ldif file, so I have extracted just my configuration below. If you need anything else, just let me know.
    dn: cn=DSP_DOMINO,cn=datasource pools
    cn: DSP_DOMINO
    dn: cn=DS_DOMINO,cn=DSP_DOMINO,cn=datasource pools
    ldapServer: cn=DS_DOMINO,cn=data sources
    cn: DS_DOMINO
    dn: cn=DSP_AD,cn=datasource pools
    cn: DSP_AD
    dn: cn=DS_AD,cn=DSP_AD,cn=datasource pools
    ldapServer: cn=DS_AD,cn=data sources
    cn: DS_AD
    dn: cn=DS_AD,cn=data sources
    clientCredentialsForwarding: noForwarding
    cn: DS_AD
    useTCPNoDelay: true
    enabled: true
    useV1ProxiedAuthControl: false
    readOnly: true
    dn: cn=DS_DOMINO,cn=data sources
    clientCredentialsForwarding: noForwarding
    cn: DS_DOMINO
    useTCPNoDelay: true
    enabled: true
    useV1ProxiedAuthControl: false
    readOnly: false
    dn: cn=root data view, cn=Data Views
    cn: root data view
    viewBase: ""
    dataSourcePool: defaultDataSourcePool
    viewExclusionBase: cn=proxy manager
    viewExclusionBase: dc=interaccess,dc=nl
    viewExclusionBase: ""
    enabled: true
    dn: cn=DV_INTERACCESS,cn=data views
    readOnly: true
    dataSourcePool: DSP_AD
    viewBase: dc=interaccess,dc=nl
    description: Inter Access - AD
    cn: DV_INTERACCESS
    viewExclusionBase: ou=domino,ou=inter access,dc=interaccess,dc=nl
    viewAlternateSearchBase: ""
    viewAlternateSearchBase: dc=nl
    dn: cn=DV_DOMINO,cn=data views
    readOnly: false
    dataSourcePool: DSP_DOMINO
    viewBase: ou=domino,ou=inter access,dc=interaccess,dc=nl
    cn: DV_DOMINO
    attributeMapping: streetaddress#officestreetaddress
    attributeMapping: assistant#secretary
    attributeMapping: company#companyname
    DNSyntaxAttribute: distinguishedname
    DNSyntaxAttribute: dominoaccessgroups
    DNSyntaxAttribute: creatorsname
    DNSyntaxAttribute: member
    DNSyntaxAttribute: modifiersname
    DNSyntaxAttribute: secretary
    dataSourceBase: o=hpwibm
    attributeRule: DV_DOMINO_mapping_add-attr_otherMobile
    attributeRule: DV_DOMINO_mapping_add-attr_distinguishedName
    attributeRule: DV_DOMINO_read_add-attr_displayName
    attributeRule: DV_DOMINO_read_attr-value-mapping_displayname
    attributeRule: DV_DOMINO_read_remove-attr-value_objectclass
    attributeRule: DV_DOMINO_mapping_attr-value-mapping_objectclass
    attributeRule: DV_DOMINO_read_attr-value-mapping_objectclass
    attributeRule: DV_DOMINO_mapping_remove-attr-value_objectclass
    viewAlternateSearchBase: dc=interaccess,dc=nl
    viewAlternateSearchBase: ou=inter access,dc=interaccess,dc=nl
    viewAlternateSearchBase: ""
    viewAlternateSearchBase: dc=nl
    dn: cn=DV_DOMINO_mapping_add-attr_otherMobile,cn=attribute rules
    model: mapping
    viewAttributeValue: ${mobile}
    attributeName: otherMobile
    transformation: add
    cn: DV_DOMINO_mapping_add-attr_otherMobile
    dn: cn=DV_DOMINO_mapping_add-attr_distinguishedName,cn=attribute rules
    model: mapping
    viewAttributeValue: ${dn}
    attributeName: distinguishedName
    transformation: add
    cn: DV_DOMINO_mapping_add-attr_distinguishedName
    dn: cn=DV_DOMINO_read_add-attr_displayName,cn=attribute rules
    model: virtual
    viewAttributeValue: ${cn}
    attributeName: displayName
    transformation: add
    cn: DV_DOMINO_read_add-attr_displayName
    dn: cn=DV_DOMINO_read_attr-value-mapping_displayname,cn=attribute rules
    model: virtual
    viewAttributeValue: ${cn}
    attributeName: displayname
    transformation: replace value
    internalAttributeValue: ${displayName}
    cn: DV_DOMINO_read_attr-value-mapping_displayname
    dn: cn=DV_DOMINO_read_remove-attr-value_objectclass,cn=attribute rules
    model: virtual
    attributeName: objectclass
    transformation: delete value
    internalAttributeValue: inetOrgPerson
    cn: DV_DOMINO_read_remove-attr-value_objectclass
    dn: cn=DV_DOMINO_mapping_attr-value-mapping_objectclass,cn=attribute rules
    model: mapping
    viewAttributeValue: user
    attributeName: objectclass
    transformation: replace value
    internalAttributeValue: dominoPerson
    cn: DV_DOMINO_mapping_attr-value-mapping_objectclass
    dn: cn=DV_DOMINO_read_attr-value-mapping_objectclass,cn=attribute rules
    model: mapping
    viewAttributeValue: group
    attributeName: objectclass
    transformation: replace value
    internalAttributeValue: dominoGroup
    cn: DV_DOMINO_read_attr-value-mapping_objectclass
    dn: cn=DV_DOMINO_mapping_remove-attr-value_objectclass,cn=attribute rules
    model: virtual
    attributeName: objectclass
    transformation: delete value
    internalAttributeValue: groupofnames
    cn: DV_DOMINO_mapping_remove-attr-value_objectclass
    dn: cn=CHND_INTERACCESS,cn=connection handlers
    dataViewPolicy: DATA_VIEW_LIST
    useDataViewAffinity: false
    enabled: true
    cn: CHND_INTERACCESS
    sslCriteria: false
    dataViewList: DV_INTERACCESS
    dataViewList: DV_DOMINO
    dn: cn=PL_REFERRALS,cn=policies
    searchMaxSizeLimit: -1
    searchFilterMinSubstring: -1
    cn: PL_REFERRALS
    referralPolicy: forward

  • OIM Transaction Mgmt/Rollback

    We have a requirement as follows:
    Service A calls OIM to create a user and in turn provision to target
    1. Service A triggers OIM user creation; at that point, instead of committing the transaction, OIM triggers provisioning to the target. It should provision first and then commit the OIM user creation.
    2. Or: create the user in OIM but don't respond to the calling service yet; trigger provisioning to the target, and on success respond to the service with success. If provisioning fails, delete/roll back the user in OIM.
    How can either case be implemented?
    Please advise.

    To initiate provisioning, OIM needs the User Key, and that is generated only after the user has been created in OIM.
    It is difficult to write queries that roll everything back, and it is not recommended, as it works against audit and compliance; but if you want to, it can be done with a lot of hard work. How are you provisioning that resource: from an Access Policy, or through an event handler?
    Try putting an event handler on Pre-Insert that triggers the provisioning. I haven't tested it, but you can give it a try and let us know the results.

  • How to purge PERFSTAT.STATS$SQLTEXT table, after deleting snapshots

    I had an alarm on the free space of the PERFSTAT tablespace. To free up some space I tried to delete some old StatsPack snapshots with the following query:
    DELETE from perfstat.stats$snapshot where snap_time < sysdate - 10 ;
    COMMIT;
    After running it, the space usage of the tablespace was not significantly reduced. I checked again and saw that the table PERFSTAT.STATS$SQLTEXT was very big, almost 8 GB, while all the other tables were a lot smaller. I read somewhere that the sppurge.sql script that ships with Oracle can be used to purge the StatsPack data, but that the lines related to PERFSTAT.STATS$SQLTEXT are commented out, because it is a big query and can take a lot of undo space. I tried to run the following query, which I found in the forum (posted by Don Burleson), but it failed after running out of undo:
    DELETE /*+ index_ffs(st) */
    FROM perfstat.stats$sqltext st
    WHERE (hash_value, text_subset) NOT IN (
      SELECT /*+ hash_aj full(ss) no_expand */ hash_value, text_subset
      FROM perfstat.stats$sql_summary ss
      WHERE snap_id NOT IN (SELECT DISTINCT snap_id FROM perfstat.stats$snapshot)
    );
    COMMIT;
    Is there any way to know whether the PERFSTAT.STATS$SQLTEXT table has records that could be purged, and an easier way, one that doesn't use as much undo, to purge it?
    My oracle version is:
    Oracle9i Enterprise Edition Release 9.2.0.7.0 - 64bit Production
    PL/SQL Release 9.2.0.7.0 - Production
    CORE 9.2.0.7.0 Production
    TNS for Solaris: Version 9.2.0.7.0 - Production
    NLSRTL Version 9.2.0.7.0 - Production
    I. Neva
    Oracle DBA

    Is there any way to know if the PERFSTAT.STATS$SQLTEXT table has records that could be purged? Yes: just transform the DELETE into a SELECT.
    SELECT /*+ index_ffs(st) */ COUNT(*)
    FROM perfstat.stats$sqltext st
    WHERE (hash_value, text_subset) NOT IN (
      SELECT /*+ hash_aj full(ss) no_expand */ hash_value, text_subset
      FROM perfstat.stats$sql_summary ss
      WHERE snap_id NOT IN (SELECT DISTINCT snap_id FROM perfstat.stats$snapshot)
    );
    And an easier way, that doesn't use as much undo, to purge it? Yes: do it in smaller chunks.
    BEGIN
    LOOP
    DELETE /*+ index_ffs(st)*/
    FROM perfstat.stats$sqltext st
    WHERE (hash_value, text_subset) NOT IN (
    SELECT /*+ hash_aj full(ss) no_expand*/ hash_value, text_subset
    FROM perfstat.stats$sql_summary ss
    WHERE snap_id NOT IN (SELECT DISTINCT snap_id FROM perfstat.stats$snapshot)
    ) AND ROWNUM<=10000;
    EXIT WHEN SQL%ROWCOUNT = 0;
    COMMIT;
    END LOOP;
    COMMIT;
    END;
    /

  • How to display warning message before deleting a record?

    Hi all
    I want to display a warning popup message ("Do you really want to delete the record? Yes / No") before the user deletes a record. My page fragment contains a simple <af:table> that displays the employees data, plus the operation buttons Commit, Delete and Rollback.
    I use Jdeveloper release 11.1.4
    Database : oracle 10g
    Thanks in advance

    Thank you so much for replying.
    I have another question related to creating popups.
    I have a page template and only one JSPX page based on that template, named "UIShell.jspx".
    I have made a lot of page fragments, about 15 so far, and all of them are shown as dynamic regions within UIShell.jspx.
    My question is: should I create a popup dialog window in each page fragment in order to display the messages?
    If the answer is "yes", that would not be a good approach, I think.
    Is there a way to create just one popup dialog and use it in any page fragment?
    Regards
    Edited by: ta**** on Apr 17, 2011 8:44 AM
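    One common pattern is to define a single popup in the UIShell template and raise it from any fragment. The sketch below is only an illustration of that idea; the ids, bean name, and listener method are assumptions, not tested against JDeveloper 11.1.x:

    ```xml
    <!-- In the UIShell template, defined once: -->
    <af:popup id="confirmDelete">
      <af:dialog title="Confirm" type="yesNo"
                 dialogListener="#{viewScope.deleteBean.onConfirmDelete}">
        <af:outputText value="Do you really want to delete the record?"/>
      </af:dialog>
    </af:popup>

    <!-- In any fragment, the Delete button just raises the shared popup: -->
    <af:commandButton text="Delete">
      <af:showPopupBehavior popupId="::confirmDelete" triggerType="action"/>
    </af:commandButton>
    ```

    The dialog listener receives the Yes/No outcome, so the actual Delete operation binding is executed only on Yes. Whether "::confirmDelete" resolves from inside a region depends on the component hierarchy; an absolute component id path may be needed.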

  • Amount without currency

    Talking about a transformation (BI 7):
    Amount on the source side (default currency, say EUR)
    Amount (with currency) on the target side
    I get an error while activating the transformation, demanding that a currency field from the source be assigned to the target. As mentioned, I don't have any currency field in the source. How can I assign a constant value (EUR) to the target of the transformation?

    Hi,
    In the transformation, delete the mapping between the source field and the currency on the target side. Then manually assign the currency field by right-clicking on the target side and choosing Rule Details.
    Then try activating it. I hope this solves the problem; does it?
    Regards,
    Brinto.
    Edited by: Brinto Roy on Feb 5, 2008 6:28 PM

  • EMET 5.0 Installer Fails

    The EMET 5.0 installer kept failing on the StartServices action. I created a transform, deleted the action from the InstallExecute sequence, and it worked like a charm. I was able to start the service manually afterwards.

    This was more of an FYI. It works fine after manually starting the service, but something about the StartServices action seems to be broken and causes a fatal error. I just installed EMET 4.0 without a hitch, but that would be expected, since 5.0 is the first version to install a true service.

  • Signal 11 Error for EBS Alert Request :-

    Hello Everyone,
    One of the alerts created in the R12 test environment has errored out with a SIGNAL 11 error. Checking the concurrent request log, the error occurred while executing the action set that calls an action containing simple DELETE and ROLLBACK statements. The error is shown below.
    /u01/app/ebiz/gsiav/apps/apps_st/appl/alr/12.0.0/bin/ALECDC
    Program was terminated by signal 11
    Using MetaLink Note 456008.1, I tried relinking the ALECDC executable and also applied patch R12.ALR.A.DELTA.6 (7237106), but neither worked. Could anyone please help me with this issue?
    Thanks
    Venu

    Please check the causes for signal 11 listed in ML Doc 435226.1, and if none of those scenarios applies to you, please open an SR with Support for further troubleshooting (since you have already tried relinking ALECDC and applied the latest ALR patch).
    HTH
    Srini

  • Simsession short dump when calling BAPI

    Hello,
    I have created a macro in Demand Planning that calls BAPI_APO_AVAILABILITY_CHECK.
    When running this in SE37 or SE38, everything works fine. But when calling it from //SDP94 I get a short dump, because the BAPI wants to create a simsession while an existing simsession is already present.
    Do you think this is an error in the BAPI, or do I have to handle it in the calling FM?
    Would this be the right approach in FM "Z_TEST"?
    CALL FUNCTION //RRP_SIMSESSION_GET    " save the SIMSESSION from DP
    CALL FUNCTION //RRP_SIMSESSION_LEAVE  " clear the simsession
    CALL FUNCTION //BAPI_APO_AVAILABILITY_CHECK
    CALL FUNCTION //RRP_SIMSESSION_SET    " set the old SIMSESSION back
    I do not understand the purpose of the SIMSESSION, so I rely on your comments and help.
    Thanks for any feedback,
    BR,
    Dominik

    Hi Dominik,
    Simsessions are simulation sessions, or work areas, for transactional liveCache data. You must create a simsession in order to retrieve and work with data from liveCache. When finished you should close the simsession; however, most of the time the end of an LUW will delete the simsession anyway. If data created or changed in a simsession needs to be saved, an explicit liveCache commit may be required, but in most cases that is taken care of within SAP code. In fact, the leave-simsession FM will determine that data in the simsession has changed and ask whether you want to save.
    I am not sure whether you can get around the simsession-creation problem in the ATP BAPI. If you had control over the code, such as in a Z function module, you could either not create the simsession or force a merge of simsessions. Do not code the leave-simsession call, because that will delete (roll back), or force the save of, the data changed in your planning book before you entered the macro; in addition, when you return to your planning book from the macro, there will be no simsession left for you to save or change data in.
    Why are you running ATP from within a DP planning book? Are you looking for the available stock to update another key figure? Do you need to run through the entire ATP process to retrieve an available quantity, or can you use a simpler function module to retrieve the stock? Another option is to read the planning book in a stand-alone program, do the ATP check and whatever logic is necessary, and then write back to the planning book.
    Hope this helps
    Andy

  • How to merge XML-Data of two variables of different namespace?

    Hi all,
    I am facing a problem manipulating XML data in a BPEL process. As I didn't get any response to my question in the BPEL forum, I decided to post it again here. I am not very familiar with XML yet, so please don't blame me if this turns out to be a stupid question.
    I have to combine information coming from two different sources (the first delivered by the BPEL client input "FileInfo", the second delivered by an FTP adapter, "FileData") and pass it to another service.
    So there are 3 XSDs (2 in, 1 out), each with a different namespace. As every structure is composed of several elements, I don't want to copy them one by one. But when copying a whole node including its child elements, the target structure yields empty nodes, because the target child elements keep the source namespace!
    Using two XSLTs works well for each input separately, but in combination the last transformation deletes the information from the first one.
    So the only way to merge the data I have figured out so far is to use an XSLT for each input, transforming each into a separate variable belonging to the same namespace, and then copying the complete nodes of the two variables into the target (which of course belongs to the common namespace as well).
    Although the indexing of the namespaces is inconsistent (see the example), fortunately all the data can be accessed.
      <CommonData>
        <CommonData xmlns="http://TargetNamespace.com/CommonData">
          <FileInfo xmlns="http://TargetNamespace.com/CommonData">
            <Name>testfile2.txt</Name>
            <Size/>
            <DateReceived>2009-02-10T17:15:46+01:00</DateReceived>
          </FileInfo>
          <FileData xmlns:ns0="http://TargetNamespace.com/CommonData">
            <ns0:KOPF>
              <ns0:Id>1</ns0:Id>
              <ns0:Value>1</ns0:Value>
              <ns0:Text>Hier könnten Ihre Daten stehen -</ns0:Text>
            </ns0:KOPF>
            <ns0:POSI>
              <ns0:Id>1</ns0:Id>
              <ns0:Position>1</ns0:Position>
              <ns0:Value>1</ns0:Value>
              <ns0:Text>eins ----</ns0:Text>
            </ns0:POSI>
            <ns0:POSI>
              <ns0:Id>2</ns0:Id>
              <ns0:Position>2</ns0:Position>
              <ns0:Value>2</ns0:Value>
              <ns0:Text>zwei ----</ns0:Text>
            </ns0:POSI>
          </FileData>
        </CommonData>
      </CommonData>
    Now for the question: is this really the only way to merge variables? As it took 4 operations to achieve this, it seems too complicated, and therefore too inefficient, to me.
    BTW: In real life there would be up to 9999 positions with much more payload (in total up to several MBs) to be processed!
    Having full control of the namespaces in this case, I could also consider using one namespace for all XSDs. But would that be a better solution? Is there a golden rule on using namespaces with regard to design issues?
    Any comments appreciated,
    cheers, Marco

    Well, if you only want to change the namespace (no other changes), I would use a single generic XSL that changes the namespace of any XML document into a given target namespace, keeping the rest. That requires two transformation calls, but only one XSL. I'm afraid I don't have an example available, but if you google for it you can perhaps find one; I have seen it somewhere.
    Normally when you have different namespaces the contents differ as well, though, and what you really want is a merge function. There is a way to pass two documents into a single XSL: parameters. You pass one source document the normal way and assign the other, converted to a string, to a parameter. You must use an assign with processXSLT in BPEL, not a transform activity; processXSLT really takes three arguments, and the third contains the parameters. Now to the difficulty: in the transformation you need to change the string back into a node set (an XML document). In the current XPath version used by the platform there is no node-set function, but it is possible to write one in Java. Once you have it, you can select from both documents while building the third. I don't have an example handy, and yes, it is a bit messy, but much of the work only has to be done once and can be reused in other processes.
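    The simpler half of the problem, changing only the namespace while keeping the content, is a purely mechanical retagging, and can be sketched outside XSLT too. A minimal Python illustration using only the standard library; the target namespace is taken from the example output above, while the helper name and sample input are made up:

    ```python
    import xml.etree.ElementTree as ET

    # Target namespace taken from the example output above (assumed for illustration).
    TARGET_NS = "http://TargetNamespace.com/CommonData"

    def renamespace(elem, ns):
        """Move every element in the tree into namespace `ns`,
        keeping local names and attribute values untouched."""
        for e in elem.iter():
            if isinstance(e.tag, str):              # skip comments / processing instructions
                local = e.tag.split('}', 1)[-1]     # strip any existing {uri} prefix
                e.tag = '{%s}%s' % (ns, local)
        return elem

    # A FileInfo fragment in some other (source) namespace:
    src = ET.fromstring(
        '<FileInfo xmlns="http://example.com/source"><Name>testfile2.txt</Name></FileInfo>')
    renamespace(src, TARGET_NS)
    print(src.tag)  # {http://TargetNamespace.com/CommonData}FileInfo
    ```

    Inside BPEL this step would still be the generic XSL mentioned in the reply; the sketch just shows that nothing content-dependent happens during it.
    
    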

  • Can't create new undo tablespace while undo tablespace is crushed

    We have encountered an urgent error on our production database. The undo tablespace looks corrupted, and insert/update operations cannot be executed because of the undo tablespace error.
    We tried to create a new undo tablespace to replace the broken one, but the database reports ORA-00376.
    We have already deleted the rollback segments whose status was "NEED RECOVERY".
    We are now running the database in manual undo management mode. We added a new datafile to the broken undo tablespace and then created a new public rollback segment. Users can now insert/update data, but we are not sure whether this approach will cause other problems.
    I would really appreciate any good suggestions.

    Hi,
    Actually, ORA-00376 has the following cause and action.
    Cause: an attempt was made to read from a file that is not readable; most likely the file is offline.
    Action: check the state of the file and bring it online.
    Then set your DB back to automatic undo management, check the status of your undo tablespace, and bring it online.
    Then create another undo tablespace and switch the undo tablespace to it.
    Regards..

  • Create table with logging clause....

    hi,
    just reading this http://docs.oracle.com/cd/B19306_01/server.102/b14231/tables.htm#ADMIN01507
    It mentions, under "Consider Using NOLOGGING When Creating Tables":
    "The NOLOGGING clause also specifies that subsequent direct loads using SQL*Loader and direct-load INSERT operations are not logged. Subsequent DML statements (UPDATE, DELETE, and conventional-path insert) are unaffected by the NOLOGGING attribute of the table and generate redo."
    Help me with my understanding: does it mean that when you create a table with the LOGGING clause and then need to do a direct load, you can use NOLOGGING to reduce redo generation? And that the NOLOGGING attribute only applies to that activity, not to DML operations?

    sybrand_b wrote:
    Nologging basically applies to the INSERT statement with the APPEND hint. Direct load means using this hint.
    All other statements are always logged, regardless of any table setting.
    Sybrand Bakker
    Senior Oracle DBA
    I did a few tests:
    create table test
    (id number) nologging;
    Table created.
    insert into test values (1);
    1 row created.
    commit;
    Commit complete.
    delete from test;
    1 row deleted.
    rollback;
    Rollback complete.
    select * from test;
            ID
             1
    There is no logging at table level or tablespace level. So what I am checking is this: "Subsequent DML statements (UPDATE, DELETE, and conventional path insert) are unaffected by the NOLOGGING attribute of the table and generate redo."
    The above confuses me, because isn't rollback related to the UNDO tablespace, not the redo log? So one can still roll back even when the table is NOLOGGING.
    The redo log is for rolling forward, i.e. recovery when there is a system crash.
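    To connect the test back to the quoted statement: the insert above was a conventional-path INSERT, which always generates full redo. Only a direct-path load takes advantage of NOLOGGING. A sketch of the difference (the table name is illustrative, not from the thread):

    ```sql
    CREATE TABLE test_nolog (id NUMBER) NOLOGGING;

    -- Conventional-path insert: full redo, regardless of the NOLOGGING attribute
    INSERT INTO test_nolog VALUES (1);

    -- Direct-path insert via the APPEND hint: minimal redo on a NOLOGGING table
    INSERT /*+ APPEND */ INTO test_nolog
    SELECT level FROM dual CONNECT BY level <= 10;

    -- A direct-path insert must be committed before the table is queried again
    -- in the same session (otherwise ORA-12838)
    COMMIT;
    ```

    Note that blocks loaded NOLOGGING are not recoverable from the redo stream, so a backup is advisable after such a load.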

  • Transports of Business Content

    I am having problems transporting Business Content and have decided to take a new approach: I will transport bottom-up. How can I delete/roll back all the previous transports, which have failed? I have only transported from Development to QA. Thanks

    Hi Niten,
    Sometimes working with the transport connection and Business Content is weird. I generally follow the development cycle:
    1. What are the InfoObjects in the cube? If some are not activated, activate them.
    2. Are we loading master data for them? Then activate the associated objects, like transfer structures, transfer rules, InfoSources, etc.
    3. Is an ODS loaded before the cube? Then activate the ODS and the associated data mart.
    4. Finally, activate the cube.
    Follow the same cycle for transports: as you activate or create objects, add them to your transport request. This way you do not transport any unnecessary objects to the target system.
    Hope this helps.
    Bye
    Dinesh
