Use of Incremental Recon Attribute in SearchReconTask

I am attempting to implement incremental recon for a custom connector. I am using oracle.iam.connectors.icfcommon.recon.SearchReconTask and wish to implement incremental recon using the Incremental Recon Attribute. I have added that attribute and Latest Token to the XML when creating my custom Scheduled Task with my custom attributes.
The JavaDoc for this class states:
The following task parameters are supported:
Filter - filter to be used in the SearchApiOp call.
Incremental Recon Date Attribute, Incremental Recon Attribute - if the connector supports an attribute that is a good candidate for incremental reconciliation, its name can be specified by one of these parameters. If specified, the SearchApiOp will be executed with a Filter of GreaterThan(${IncrementalReconAttribute}, ${LatestToken}). The difference between the two parameters is that if Incremental Recon Date Attribute is specified, the Latest Token will be formatted as a String.
Latest Token - if Incremental Recon Date Attribute or Incremental Recon Attribute is specified, this holds the latest value of the attribute designated as incremental.
When I define the connector I implement the org.identityconnectors.framework.spi.operations.SearchOp interface, with executeQuery as follows:
public void executeQuery(ObjectClass oclass, Object filter,
ResultsHandler resultsHandler,
OperationOptions operationOptions) {
When I execute, the filter is always null. I have attempted this with valid values for Latest Token. I am not seeing either the Incremental Recon Attribute or the Latest Token in the OperationOptions list. Nothing is coming through in the Filter value either.
When I look at the FlatFileConnector.java class in the example, I am seeing an attempt to get a value of LatestToken (no space), which I assume is an error? Was this tested?
Can anyone provide a real concrete example of using the SearchReconTask with Incremental Recon and explain the process flow?

You will have to implement a FilterTranslator as well. If you look at SearchOp at http://docs.oracle.com/cd/E21764_01/apirefs.1111/e24834/toc.htm you will see two methods to be implemented: createFilterTranslator and executeQuery.
ICF provides AbstractFilterTranslator for you to use. Just extend this class and provide implementations for the createGreaterThanExpression and createGreaterThanOrEqualExpression APIs.
These methods will be called for the filter object. See GreaterThanFilter for more information.
Your implementation should take the attribute, construct a query in a string format your target can understand, and return it. This query will be sent to your executeQuery API. Without these, only null will be sent.
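To make the flow concrete, here is a minimal sketch (the class and helper names MyConnectorSearch and fetchFromTarget are made up). Per the JavaDoc quoted above, SearchReconTask builds GreaterThan(${IncrementalReconAttribute}, ${LatestToken}) and hands it to the framework, which first calls createFilterTranslator and then passes the translated value to executeQuery as its second argument - the filter never shows up in OperationOptions. That second parameter is the SearchOp type parameter, so with SearchOp<String> it arrives as the translated String; if the translator cannot express the filter, executeQuery receives null, which matches what you are seeing.

import java.util.Collections;
import java.util.List;
import java.util.Map;

import org.identityconnectors.framework.common.objects.Attribute;
import org.identityconnectors.framework.common.objects.AttributeBuilder;
import org.identityconnectors.framework.common.objects.AttributeUtil;
import org.identityconnectors.framework.common.objects.ConnectorObjectBuilder;
import org.identityconnectors.framework.common.objects.ObjectClass;
import org.identityconnectors.framework.common.objects.OperationOptions;
import org.identityconnectors.framework.common.objects.ResultsHandler;
import org.identityconnectors.framework.common.objects.filter.AbstractFilterTranslator;
import org.identityconnectors.framework.common.objects.filter.FilterTranslator;
import org.identityconnectors.framework.common.objects.filter.GreaterThanFilter;
import org.identityconnectors.framework.spi.operations.SearchOp;

// Sketch of the search side of a connector whose native query is a plain String.
public class MyConnectorSearch implements SearchOp<String> {

    // Step 1: the framework asks for a translator and feeds it the Filter that
    // SearchReconTask built, i.e. GreaterThan(incrementalAttribute, latestToken).
    public static class MyFilterTranslator extends AbstractFilterTranslator<String> {
        @Override
        protected String createGreaterThanExpression(GreaterThanFilter filter, boolean not) {
            if (not) {
                return null; // "NOT greater-than" not supported by this sketch
            }
            Attribute attr = filter.getAttribute();
            Object latestToken = AttributeUtil.getSingleValue(attr);
            // Build whatever your target parses, e.g. "lastModified > 1366329600000"
            return attr.getName() + " > " + latestToken;
        }
    }

    @Override
    public FilterTranslator<String> createFilterTranslator(ObjectClass oclass,
            OperationOptions options) {
        return new MyFilterTranslator();
    }

    // Step 2: the translated String (or null) arrives here as the second argument.
    @Override
    public void executeQuery(ObjectClass oclass, String query,
            ResultsHandler handler, OperationOptions options) {
        // query == null means full recon, or a filter the translator could not express.
        for (Map<String, Object> row : fetchFromTarget(query)) {
            ConnectorObjectBuilder bld = new ConnectorObjectBuilder();
            bld.setObjectClass(oclass);
            bld.setUid(String.valueOf(row.get("id")));
            bld.setName(String.valueOf(row.get("login")));
            bld.addAttribute(AttributeBuilder.build("lastModified", row.get("lastModified")));
            if (!handler.handle(bld.build())) {
                break; // the framework asked us to stop
            }
        }
    }

    // Hypothetical target access; replace with your real query logic.
    private List<Map<String, Object>> fetchFromTarget(String nativeQuery) {
        return Collections.emptyList();
    }
}

If you also want greater-than-or-equal semantics, override createGreaterThanOrEqualExpression the same way.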

Similar Messages

  • Issue with Incremental Recon?

We're facing an issue with incremental recons. If a full recon is run on a resource and an entry exists on the resource but NOT in IdM, the entry is put in the account index with a situation of UNMATCHED. So far so good.
But if an IdM user is then created for that entry and an incremental recon is run, the incremental does not link the resource account to the IdM user. It seems the incremental doesn't handle the situation change from UNMATCHED to CONFIRMED. Not sure if this is a bug (could've sworn this used to work in a previous version of IdM) or if it's the intended functionality and only the full recon is designed to handle this particular situation change. Thanks in advance for any insight!

    No link. Because the account index already has this resource account flagged as UNMATCHED the incremental reconciliation ignores it.
    You can manually assign the resource to the user account or perform a full reconciliation to resolve this UNMATCHED to CONFIRMED. This doesn't help in the case when there are a large number of UNMATCHED accounts or a full reconciliation takes a long time.

  • Change DBUM default recon attribute

    Hi all,
I am trying to change the default attribute for DBUM target resource reconciliation in OIM 11.1.1.3. What I want to accomplish is to use a UDF OIM attribute instead of the User Login.
I have changed the Reconciliation Rule from "username -> User Login" to "username -> udf_username", but when I run the scheduled task I get a recon event showing "username -> username". I cannot figure out what I am missing or doing wrong.
    Thanks in advance for your help.

    Same problem again,
Trying to run target resource recon with DBUM, the user is not matched:
    On the event details "Reconciliation Data" section in Admin Console it shows Attribute Name: Username and OIM Assigned field: Username (it should be my UDF Attribute).
    The log shows:
    strEntityMatchingRule_in = (((UPPER(USR.USR_UDF_USUARIO_SIIC)=UPPER(RA_ORACLEDBUSER45.RECON_USERNAME8825B9C0)))) which I assume is right.
    I appreciate your help!

  • How to create Exchange dynamic distribution list using multivalue extension custom attribute

    I am trying to create a dynamic distribution list using an ExtensionCustomAttribute.  I am in hybrid mode with Exchange 2013.  The syntax I have is this: 
    New-DynamicDistributionGroup -Name "DG_NH" -RecipientFilter {(ExtensionCustomAttribute2 -eq 'NH')} 
This works correctly on-prem.  But hosted always results in an empty list.  I can see in dirsync the attribute is in the hosted environment, but for whatever reason, the distribution group gets created but always comes up empty.
    If I create a group looking at the single valued attributes, such as CustomAttribute6 -eq 'Y', it works correctly on-prem and hosted.  
    If anyone has any suggestions I would appreciate it.

    I don't think I provided enough information about the problem.  Let me add some and see if it makes sense.
    I have an Exchange 2013 on-premise configured in hybrid mode with Office365.  For testing purposes, I have 2 users, Joe and Steve, one with the mailbox on-prem, and the other with the mailbox in the cloud.  Each of them has CustomAttribute6 = 'Y'
    and ExtensionCustomAttribute2 = 'NH'. Dirsync shows these users and these attributes are synced between on-prem and cloud.
    Using on-prem Exchange powershell, I run the following command:
New-DynamicDistributionGroup -Name "DG_NH" -RecipientFilter {((RecipientType -eq 'UserMailbox') -or (RecipientType -eq 'MailUser')) -and (CustomAttribute6 -eq 'Y')}
    This correctly finds the 2 users when I query for them as follows:
    $DDG = Get-DynamicDistributionGroup DG_NH
    Get-Recipient -RecipientPreviewFilter $DDG.RecipientFilter | FT alias
    So I then delete this DG, and recreate it this time looking at the multi-value attribute ExtensionCustomAttribute2, as follows:
New-DynamicDistributionGroup -Name "DG_NH" -RecipientFilter {((RecipientType -eq 'UserMailbox') -or (RecipientType -eq 'MailUser')) -and (ExtensionCustomAttribute2 -eq 'NH')}
    Replaying the query above, I can see this also works fine and finds my two users.
    Next I open a new powershell and connect to Office 365 and repeat the process there.
New-DynamicDistributionGroup -Name "DG_NH" -RecipientFilter {((RecipientType -eq 'UserMailbox') -or (RecipientType -eq 'MailUser')) -and (CustomAttribute6 -eq 'Y')}
    This correctly finds the 2 users when I query for them.
    And then delete the group and recreate it using the multi-value attribute:
New-DynamicDistributionGroup -Name "DG_NH" -RecipientFilter {((RecipientType -eq 'UserMailbox') -or (RecipientType -eq 'MailUser')) -and (ExtensionCustomAttribute2 -eq 'NH')}
When I run the query this time it produces no result.  Every test I try results in an empty group if I am using a multi-valued attribute in the search criteria in the cloud.  If I use a single-valued attribute, it works fine.
I really need to get multi-valued DDGs working in the cloud.  If anyone has done this and has any suggestions, I would appreciate seeing what you did.  And if this is the wrong forum to post this, if you can point me to a more suitable forum I will repost there.
    Thanks,
    Richard

  • Using adapter specific message attributes in SOAP adapter configuration

    Hi,
Can you please let me know how to use adapter-specific message attributes in receiver and sender SOAP adapter configuration? If possible, with an example.

    See here:
    Adapter-Specific Message Attributes in the Message Header
    http://help.sap.com/saphelp_nw04/helpdata/en/43/0a7d1be4e622f3e10000000a1553f7/frameset.htm
    Sender Soap Adapter:
    http://help.sap.com/saphelp_nw04/helpdata/en/fc/5ad93f130f9215e10000000a155106/frameset.htm
    Receiver Soap Adapter
    http://help.sap.com/saphelp_nw04/helpdata/en/29/5bd93f130f9215e10000000a155106/frameset.htm
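Beyond the links, adapter-specific message attributes can be read or written from a message-mapping UDF through the DynamicConfiguration API. A minimal sketch follows; the method name is made up, THeaderSOAPACTION is the receiver SOAP adapter's SOAPAction attribute (check the adapter documentation for the attribute names your channel actually exposes), and "Set Adapter-Specific Message Attributes" must be enabled on the communication channel:

// Illustrative UDF; in the mapping editor you only supply the body.
// Required imports: com.sap.aii.mapping.api.* (Container comes from the mapping runtime).
public String setSoapAction(String action, Container container)
        throws StreamTransformationException {
    DynamicConfiguration conf = (DynamicConfiguration) container
            .getTransformationParameters()
            .get(StreamTransformationConstants.DYNAMIC_CONFIGURATION);
    // An ASMA is identified by a (namespace, name) pair.
    DynamicConfigurationKey key = DynamicConfigurationKey.create(
            "http://sap.com/xi/XI/System/SOAP", "THeaderSOAPACTION");
    conf.put(key, action);          // write the attribute for the receiver channel
    String current = conf.get(key); // reading works the same way for sender ASMAs
    return action;
}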

How to set values in JMS Adapter using Adapter-Specific Message Attributes

Hi Friends,
In my scenario I have to add extra header information to the MQ message.
Using adapter-specific message attributes I defined 7 parameters (Transaction, Environment, Schema, ...), all of type Integer. But I have to set their values (transaction type, environment, schema, ...).
Where can I set these values?
Help me on this...
Regards,
Raja Sekhar

Hi Vijay,
Thanks for your reply. As per your input I created a dynamic configuration method.
My target structure looks like this:
   <Data>
       <Message>
           <gl_update>
               <header>
                   <ean1/>
                   <ean2/>
               </header>
           </gl_update>
       </Message>
   </Data>
I wrote a UDF and mapped it to the header element gl_update, but I am getting the below error message in MONI:
<SAP:Error xmlns:SAP="http://sap.com/xi/XI/Message/30" xmlns:SOAP="http://schemas.xmlsoap.org/soap/envelope/" SOAP:mustUnderstand="">
  <SAP:Category>Application</SAP:Category>
  <SAP:Code area="MAPPING">EXCEPTION_DURING_EXECUTE</SAP:Code>
  <SAP:P1>com/sap/xi/tf/_MM_ ffjms_</SAP:P1>
  <SAP:P2>com.sap.aii.mappingtool.tf7.IllegalInstanceExcepti</SAP:P2>
  <SAP:P3>on: Cannot create target element /ns0:MT_jms_ _a</SAP:P3>
  <SAP:P4>sync_out/Data/Message/ GLUpdate. Values missi~</SAP:P4>
  <SAP:AdditionalText />
  <SAP:Stack>Runtime exception occurred during application mapping com/sap/xi/tf/_MM_ ffjms_; com.sap.aii.mappingtool.tf7.IllegalInstanceException: Cannot create target element /ns0:MT_jms_ asyncout/Data/Message/ GLUpdate. Values missi~</SAP:Stack>
  <SAP:Retry>M</SAP:Retry>
</SAP:Error>
I want to know whether mapping the UDF to GL_Update is correct, or which element I should map it to.
I think there is no problem with the UDF itself; it executes, though it gives warning messages.
Help me on this.
Regards,
Raj
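For what it's worth, the usual pattern for the JMS case is the same DynamicConfiguration API: set the adapter's additional property from a UDF and map that UDF so it passes its input value through to a target field that always has a value (a UDF that produces no output for gl_update would be consistent with the "Cannot create target element ... Values missing" error above). A minimal sketch, assuming the first additional JMS property is configured in the channel's ASMA table as DCJMSMessageProperty0; all names here are illustrative:

// Illustrative UDF (imports: com.sap.aii.mapping.api.*).
public String setJmsProperty(String value, Container container)
        throws StreamTransformationException {
    DynamicConfiguration conf = (DynamicConfiguration) container
            .getTransformationParameters()
            .get(StreamTransformationConstants.DYNAMIC_CONFIGURATION);
    // DCJMSMessageProperty0..9 correspond to the additional JMS properties
    // configured in the JMS channel when ASMA is enabled.
    DynamicConfigurationKey key = DynamicConfigurationKey.create(
            "http://sap.com/xi/XI/System/JMS", "DCJMSMessageProperty0");
    conf.put(key, value);
    return value; // pass the input through so the target element still gets a value
}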

  • Using content from Strings attributes in Prefix text

    Hello all you EDD experts out there,
    I did not find this specific info in the struct app dev guide, so I am hoping someone else has figured this out at some point in time. Maybe this is trivial to programmers, but I am not one of them and also I don't know what the prefix definition string will allow. So all help is greatly appreciated.
    In my structured framemaker EDD, I am using attributes to collect usage info. It would be nice to be able to use the Strings type attribute for this, instead of collating all the usage info into one single String type. But I need to use the content in a prefix string.
    Prefix: used in: \t<$attribute[validity]>
    If the validity attribute is of the Strings type and I enter multiple string values, only the first one shows up. How can I make all strings show in the prefix ?
    Thanks in advance for the golden tip :-)
    Jang

    Hello Russ and Van,
I was kind of worried that I was crossing the limits of what FM EDDs can do. It is not a big deal - I can use the one string to store all values, as I have been doing so far. It just looks a little ugly when the string gets longer and the substrings do not wrap nicely. Of course, going fully XML and XSLT etc. would be another option, but that is overkill for this particular client and they won't pay for it. Making the output a little nicer to look at is not a high priority, as this particular output is only shown in a catalogue of repository items used internally by my client, i.e. it will never show in publications for users.
    Thanks for confirming my intuition that I should stop searching for the holy grail...
    Jang

Use of Process type Attribute Change Run and Rollup in Process Chains

    Hi SAP Gurus,
I have a doubt about the use of the process types Attribute Change Run and Roll Up in process chains.
If anyone can clear up my doubts it would be helpful.
If possible, could anyone also tell me about the list of process types available in Process Chains?
Thanks in advance

The change run process is used to re-activate attributes after changes.
The roll-up process is used to refresh a target (for example, aggregates) after new data is uploaded into a cube.

  • How to create a unique key using two or more attributes?

How to create a unique key using two or more attributes?

The following example illustrates how to create a unique key using two or more attributes/fields.
Scenario: implementing a unique key on ManagerID & Location ID in DepartmentEO (Department table).
    Step#1: Open the Desired Entity Object “DepartmentEO”. Go to Overview tab & Click “General” Finger tab.
    Step#2: Expand “Alternate Keys” section & click “+”.
    Step#3: In the Pop-up wizard, Enter a name for alternate Key “Unique_Manager_X_Location”.
    Step#4: Select the desired attributes/fields from available list & Shuffle to right side.
    Step#5: Now go to “Business Rules” finger tab.
    Step#6: Select “Entity Validators” in the list & click “+” to add a new entity level validation rule.
    Step#7: In the Pop-up, Select “Rule Type” as Unique Key
    Step#8: In the “Rule Definition” tab select the key “Unique_Manager_X_Location”created.
    Step#9: Now go to “Failure Handling” tab, and click the Magnifier Icon .
Step#10: If the text resource is not already created, then in the “Select Text Resource” popup, using the functional design document, enter the display value, key, and description, and click the “Save & Select” button.
    Step#11: Now Click “OK”.

  • Using specification for GDSN attributes?

    We are planning to participate in Global Data Synchronization Network (GDSN).  For that additional master data must be defined and managed.
    SAP does not provide standard solutions for this requirement.
    We use EHSM for GLM and other functions.  We therefore have substances defined for materials.
    We envision expanding the property tree to include GDSN related attributes which define the related material assigned.  Custom programs will be used to report the attributes as required.  The GDSN attributes are not required or related to GLM or other typical EHSM functions.
    My question - would it be inappropriate or incorrect to use the specification database in this manner?  What would be the pro/con of this usage we need to be aware of?

    Dear Richard
e.g. SAP EHS MANAGEMENT is used to push data from MSDS/SDS etc. to others.
E.g. IMDS Customer and Supplier Collaboration - SAP Product and REACH Compliance - SAP Library shows one example. In the area of REACH a lot more examples can be listed. As well, there is an "IUCLID" interface etc.
A lot of regulations and industry bodies require exchange of data; e.g. check: Examples of Data sharing demands in SAP EHS Management using XML
So as long as the data is related to "EHS" (and does have some material significance), EHS is a good option to store the data and to use SAP EHS Management as the "source" for exchanging the data with 3rd parties. A lot of IT solutions are possible; most common is the scenario SAP ERP (with EHS) => SAP XI/PI => external software; other options are possible as well.
    What are the pros of using EHS?
    a.) you can easily enhance EHS so that the data can be stored
b.) you can perform data loads in EHS (using the Import, OCC or Data Editor option)
    c.) you can use EHS easily for inquiries on the data stored
    d.) you can print the data in WWI reports
    e.) you can share the data using options as mentioned above
    The use of material classes is "limited" here
    From my point of view:
    You can store data in material classes
    You can do uploads (using SAP standards but on the same level as with EHS)
    You can exchange the data
You can print the data using Smartforms and other classic ABAP features, and may be able to use the Adobe integration.
The "inquiry" into the data is possible as well, but limited.
In most cases, if you analyze data relevance for EHS, more than 90% of the data to be shared can be classified as "EHS" data.
    At the end it is a company decision which solution you would like to use.
    C.B.

  • Can I use an incremental backup disk for a restore function?

I have loaded PSE7 on my new PC. I have a full backup set and also an incremental set created after the full backup. After doing a File/Restore with the full backup set, can I use the incremental set to add the remaining pictures, or do I need to do a new full backup and use only that? If I can use the incremental set, are the steps to load it the same as for the full backup?

    jdrefr wrote:
I have loaded PSE7 on my new PC. I have a full backup set and also an incremental set created after the full backup. After doing a File/Restore with the full backup set, can I use the incremental set to add the remaining pictures, or do I need to do a new full backup and use only that? If I can use the incremental set, are the steps to load it the same as for the full backup?
    I have given up incremental backups because they don't save any time. I have found that the messages in the restore process are misleading. I don't know for sure why, but to restore, I got the best results by restoring the last incremental backup first, then the older ones and the original. If your incremental backups are correct, each in a separate folder or disk, you have a full version of your catalog at the time of backup stored within the folder or disk. I suppose this is to start with the latest catalog version and completing the process by comparing with previous versions, which ensures not adding deleted items.
    I'd be interested to get other views on that matter !

  • Question about using an incremental backup to update a standby

We have a 2TB Oracle 10g database with a standby, and the application that uses it is about to be updated.  We're using cumulative incremental backups with block change tracking, with weekly level 0 backups.  The application upgrade will make a lot of changes (several hundred GB), and we have to stop the standby during the upgrade as a quick way to get back (we don't have the space for a flashback recovery) in the event that there are issues with the upgrade.  We could just let the archivelogs back up, then get them over to the standby and applied once they decide they like the upgrade, but it looks like it would be simpler and perhaps quicker to use an incremental backup to update the standby.  I probably won't, but if we do that, it looks like we can turn off archivelogs in the primary and not deal with them at all.  I've never tried this, but I do have some questions.
    Does it make sense to take this approach?
    If we were to turn off archivelogs in the primary, would that have any effect on block change tracking?  I don't see why it would but, had to ask.  Of course, we would turn archivelogs back on and take a level 1 backup after the upgrade.
    Thanks in advance for the education.

    Funny you should mention that rolling upgrade,  I'm actually planning to do that in another environment.  My testing says it works pretty well and I'll be able to move a 400GB database to new hardware and upgrade it to 11g with very little downtime.  However, in this case, the database is not being upgraded.  The application is.  It's just that, as part of that upgrade, there are a lot of updates and I'd just as soon not deal with the archivelogs for that. Here's what I was thinking about:
    Set logs shipping to defer in the primary.
    Stop log apply in the standby.
    Turn off archivelogs in the primary.
    Do the application update.
    If they like the results, turn archivelogs back on in the primary.
    Take an rman incremental backup from scn in the primary.
    Apply the incremental backup to the standby.
Start log apply in the standby.
    Start log shipping in the primary.
    The process for getting and using the incremental is described here:
    http://docs.oracle.com/cd/B19306_01/backup.102/b14191/rcmdupdb.htm#BGBCEBJG
    This isn't your regular incremental backup.
    So, am I the only crazy one?  Has anyone else tried this?

  • Incremental Recon is not working

I have customized the Oracle 11i RA. When I do a full recon it links the accounts successfully, but when I do an incremental recon it fails.
Does anyone have any idea what the possible reason could be?

    Hi,
you only have to start the incremental update - no reindex or other configuration steps are needed.
    Perhaps some questions with respect to your problem:
    - When you attached the web repository as a data source, did indexing work?
    - Could you check in the crawler monitor, if the last crawler run for this data source was an incremental run or not and if errors were reported?
    Regards,
    Achim

  • Difference between using Binding and Value Attribute

What is the difference between using the binding and value attributes? When I use the binding attribute, the value change listener behaves like an action listener.
    Ex:
If I use the value attribute, at the time of the value change listener the component is not showing the result in the component, but when I use the binding attribute it happens automatically. So I want to know how the binding attribute works.
I know that with the binding attribute the component creates an instance on the bean side. How does that fit into the life cycle of the JSF framework? Also, please suggest whether it is better to use binding or value.

JNaveen wrote:
If I use the value attribute, at the time of the value change listener the component is not showing the result in the component, but when I use the binding attribute it happens automatically. So I want to know how the binding attribute works.
You need to learn about the JSF lifecycle. The ValueChangeEvent is invoked after conversion and validation in the 3rd phase, while the model values are updated in the 4th phase. In the valueChangeListener method you normally use ValueChangeEvent#getNewValue() to get the new value after the change.
JNaveen wrote:
Please suggest whether it is better to use binding or value.
Use the 'value' attribute to bind the value to the bean. Use the 'binding' attribute to bind the component to the bean. If you don't need to precreate the component or do other things than getting/setting its value, then there is no need for the 'binding' attribute.
    Read on those links if you want to know something more about the JSF lifecycle:
    [http://balusc.blogspot.com/2006/09/debug-jsf-lifecycle.html].
    [http://jcp.org/aboutJava/communityprocess/final/jsr252/index.html] (pick 1st download).
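To make the distinction concrete, here is a small illustrative bean (JSF 2 annotations; all names are made up). The 'value' attribute binds a bean property that is written during Update Model Values (phase 4), 'binding' exposes the component instance itself, and the value change listener runs in Process Validations (phase 3), so at that point the new value is only available through the event:

import javax.faces.bean.ManagedBean;
import javax.faces.bean.RequestScoped;
import javax.faces.component.html.HtmlInputText;
import javax.faces.event.ValueChangeEvent;

@ManagedBean
@RequestScoped
public class UserBean {

    private String name;             // value="#{userBean.name}" binds the value
    private HtmlInputText nameInput; // binding="#{userBean.nameInput}" binds the component

    // valueChangeListener="#{userBean.nameChanged}" fires in phase 3,
    // before the model value (this.name) is updated in phase 4.
    public void nameChanged(ValueChangeEvent event) {
        String newName = (String) event.getNewValue();
        // this.name still holds the old value here; read newName instead
        System.out.println("submitted value: " + newName);
    }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public HtmlInputText getNameInput() { return nameInput; }
    public void setNameInput(HtmlInputText nameInput) { this.nameInput = nameInput; }
}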

  • NPAS: How do I use Cisco ASA RADIUS attribute 146?

    We have a Cisco ASA 5520 running firmware 8.4.5 and are using it for AnyConnect SSL VPN.  We are using Microsoft Network Policy and Access Services (NPAS) as a RADIUS server to handle authentication requests coming from the ASA.
    We have three tunnel groups configured on the ASA, and have three Active Directory security groups that correspond with each one.  At this time, we are using Cisco's vendor-specific RADIUS attribute 85 (tunnel-group-lock) to send back to the ASA a string
    that corresponds to a policy rule in NPAS based on the matched group membership.  This works in the sense that each user can only be a member of one of the three AD security groups used for VPN, and if they pick a tunnel group in the AnyConnect client
    that doesn't correspond to them, the ASA doesn't set up the session for them.
    Well, Cisco added vendor-specific RADIUS attribute 146 (tunnel-group-name) in firmware 8.4.3.  This is an *upstream* attribute, and is one that is sent by the ASA to the RADIUS server.  We would like to use this attribute in our policies in NPAS
    to help with policy matching.  By doing this, we could allow people to be in more than one VPN group and select more than one of the tunnel groups in the AnyConnect client, each of which may provide different network access.
    The question becomes, how can I use this upstream RADIUS attribute in my policy conditions?  I tried putting it in the policy in the Vendor-Specific section under Policies (the same place where we had attribute 85 defined), but this doesn't work. 
    These are just downstream attributes that the NPAS server sends back to the RADIUS client (the ASA).  The ASA seems to ignore attribute 146 if it is sent back in this manner and the result is that the first rule that contains a group the user is a member
    of is matched and authentication is successful.  This is undesirable, because it means the person could potentially select a tunnel group and successfully authenticate even though that isn't what we desire.
    Here is Cisco's documentation that describes these attributes: http://www.cisco.com/c/en/us/td/docs/security/asa/asa84/configuration/guide/asa_84_cli_config/ref_extserver.html

    Philippe:
    Thank you for the response, but I am already aware how to use Cisco's group-lock or tunnel-group-lock with RADIUS and, in fact, we are already using tunnel-group-lock (attribute 85).
    Using tunnel-group-lock works in the sense that you have three RADIUS policies and three AD security groups (one per tunnel group configured on the ASA).  Each AD group basically is designed to map to a specific tunnel group.  Each RADIUS policy
    contains vendor-specific attribute 85 with the name of the tunnel group.  So when you connect and attempt authentication through NPAS, it goes down the RADIUS policies until the conditions match (in this case the conditions are the source RADIUS client
    - the ASA - and membership in a particular AD security group), it determines if your authentication attempt is successful, and if so it sends the tunnel group name back to the ASA.  If the tunnel group name matches the one associated to the user group
    you selected from the list in the AnyConnect client, a VPN tunnel is established.  Otherwise, the ASA rejects the connection attempt.
    Frankly, tunnel-group-lock works fine so long as it is only necessary for a given individual to need to connect to only a single tunnel group.  If there is a need for an individual to be able to use two out of the three or all three tunnel groups in
    order to gain different access, using tunnel-group-lock or group-lock won't work.  This is because the behavior will be when the RADIUS server processes the policies, the first one in the list that has the AD security group that the user is a member of
    will be matched and the tunnel group name associated with that policy will be sent back to the ASA every time.  If that name doesn't match the one they picked, the tunnel will not be established.  This will happen every time if the tunnel group is
    associated with the second or third AD group they are a member of in terms of order in the NPAS policy list.
    Group-lock (attribute 25) works similarly.  In such a case, the result won't be a failure to connect if the user group chosen is associated with the second or third AD group in the policy list; rather, it will just always send the ASA the first group
    name and the ASA will establish the session but always apply the same policy to the client rather than the desired one.
    We upgraded to firmware 8.4.5 on our ASA 5520 specifically so that we could make use of attribute 146 (tunnel-group-name).   Since this is an upstream attribute sent by the ASA to the RADIUS server (rather than something send by the RADIUS server
    to the ASA as part of the authentication response), we were hoping to be able to use it as an additional condition in the NPAS policies.  In this way, people could be members of more than one of the AD security groups related to VPN at a time.  The
    problem is, I just do not know how to leverage it in the NPAS policy conditions or if it is even possible.
