Counting a mismatch between source and target using the DECODE function

Source query:
select distinct
frm_track_id,
--ATTRIBUTE_NAME,
SUBSTR(ATTRIBUTE_NAME,INSTR(ATTRIBUTE_NAME,'_',1,1)+1,INSTR(ATTRIBUTE_NAME,'_',1,2) - (INSTR(ATTRIBUTE_NAME,'_',1,1)+1)) LOCUS,
frm_id,
frm_maj_version,
--'null' as ETL_LOAD_STATUS,
--'null' as ETL_LOAD_DTE,
--'null'as ETL_LOAD_STATUS2,
DECODE(attribute_name,( select attribute_name from dual where attribute_name like '%_allele1'), OUTPUT_ANSWER_VAL, null) ORIGINAL_LOC_ALLELE1
--DECODE(attribute_name, (select attribute_name from dual where attribute_name like '%_allele2'), OUTPUT_ANSWER_VAL, null) ORIGINAL_LOC_ALLELE2,
--DECODE(attribute_name, (select attribute_name from dual where attribute_name like '%_ser1'), OUTPUT_ANSWER_VAL, null) ORIGINAL_LOC_SER1,
--DECODE(attribute_name, (select attribute_name from dual where attribute_name like '%_ser2'), OUTPUT_ANSWER_VAL, null) ORIGINAL_LOC_SER
from iidb_stg.stage_fn_base_translate
where attribute_name in ( 
'loc_a_allele1','loc_a_allele2','loc_a_ser1','loc_a_ser2', 
'loc_b_allele1','loc_b_allele2','loc_b_ser1','loc_b_ser2', 
'loc_c_allele1','loc_c_allele2','loc_c_ser1','loc_c_ser2', 
'loc_dp_ser1','loc_dp_ser2', 
'loc_dpa1_allele1','loc_dpa1_allele2', 
'loc_dpb1_allele1','loc_dpb1_allele2', 
'loc_dq_ser1','loc_dq_ser2', 
'loc_dqa1_allele1','loc_dqa1_allele2', 
'loc_dqb1_allele1','loc_dqb1_allele2', 
'loc_dr_ser1','loc_dr_ser2', 
'loc_drb1_allele1','loc_drb1_allele2', 
'loc_drb3_allele1','loc_drb3_allele2', 
'loc_drb4_allele1','loc_drb4_allele2', 
'loc_drb5_allele1','loc_drb5_allele2' )
and frm_track_id=308678
order by 1,2
result set:

frm_track_id  LOCUS  frm_id  frm_maj_version  ORIGINAL_LOC_ALLELE1
308678        a      2005    1                205
308678        b      2005    1                3502
308678        c      2005    1                401
308678        dqb1   2005    1                301
308678        dqb1   2005    1                null
308678        drb1   2005    1                403
308678        drb1   2005    1                null
Target Query:
select distinct
FRM_TRACK_ID,
LOCUS,
FRM_ID,
FRM_MAJ_VERSION,
--ETL_LOAD_STATUS,
--ETL_LOAD_DTE,
--ETL_LOAD_ID,
--ETL_LOAD_STATUS2,
ORIGINAL_LOC_ALLELE1
--ORIGINAL_LOC_ALLELE2,
--ORIGINAL_LOC_SER1,
--ORIGINAL_LOC_SER2
from
IIDB_STG.OPS_FN_SBJCT_TYPING
where FRM_TRACK_ID=308678
order by 1,2
result set:

frm_track_id  LOCUS  frm_id  frm_maj_version  ORIGINAL_LOC_ALLELE1
308678        a      2005    1                205
308678        b      2005    1                3502
308678        c      2005    1                401
308678        dqb1   2005    1                null
308678        drb1   2005    1                403

In the source you are selecting from iidb_stg.stage_fn_base_translate, while in the target query you are selecting from IIDB_STG.OPS_FN_SBJCT_TYPING. The only condition in both WHERE clauses is FRM_TRACK_ID=308678, so I assume the two missing records are simply not present in the
IIDB_STG.OPS_FN_SBJCT_TYPING table. Modifying the target query will not resolve the issue; you have to check why these two records are missing from that table in the first place.
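
To see exactly which rows the source query produces that are absent from the target table, you can MINUS the two result sets. The outline below is only a sketch; it reuses the source query from above and assumes the five columns line up between the two queries:

select *
from (
      /* the source query from above, without the ORDER BY */
     )
minus
select frm_track_id, locus, frm_id, frm_maj_version, original_loc_allele1
from iidb_stg.ops_fn_sbjct_typing
where frm_track_id = 308678;

Any rows returned here are the ones to trace through whatever load populates OPS_FN_SBJCT_TYPING.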

Similar Messages

  • One problem with constraints missing in source and target database....

    DB Gurus,
    In my database I exported and imported data using Data Pump but did not specify the CONTENT clause.
    When I started working on my application, I found that some constraints were missing. How can I check which constraints are missing compared to the source database, and how do I create those missing constraints on the target database?
    Please suggest if you have any idea.

    Create a database link between the source and target schemas.
    Use the all_/dba_/user_constraints views to generate the difference with the MINUS operator,
    something like this:
    select owner,constraint_name,constraint_type from dba_constraints@source
    MINUS
    select owner,constraint_name,constraint_type from dba_constraints;
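
    Once you know which constraints are missing, one way to recreate them is to pull their DDL with DBMS_METADATA and run it on the target. This is only a sketch to be run on the source database; the constraint and schema names below are placeholders, and foreign keys use the 'REF_CONSTRAINT' object type instead of 'CONSTRAINT':
    select dbms_metadata.get_ddl('CONSTRAINT', 'MY_MISSING_CONSTRAINT', 'MY_SCHEMA') ddl_text
    from dual;
    The returned CLOB can then be executed on the target database.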

  • Count mismatch in source

    Hi,
    Recently I performed an import on a newly created machine.
    After the import, when I counted the records in different tables, the counts did not match: in one table more than 2,030 records are missing compared to the source database, and in another table there are about 100,000 records more than in the source database.
    For the missing records I can check the import log file and find which rows were not imported because of ORA-12899.
    But I am not able to understand why there are more records in the other table.
    I am not creating the table structures manually before importing the records into the new database.
    I am using the IMPORT utility.
    Can anybody suggest how to resolve my problem?
    The descriptions of the machines are as follows:
    Source database:
    OS: Windows XP
    Database version:
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
    PL/SQL Release 10.2.0.4.0 - Production
    CORE 10.2.0.4.0 Production
    TNS for 64-bit Windows: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production
    Target database:
    OS: Windows 2008 Server 64 bit on VMWare
    Database version:
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    PL/SQL Release 11.2.0.1.0 - Production
    CORE 11.2.0.1.0 Production
    TNS for 32-bit Windows: Version 11.2.0.1.0 - Production
    NLSRTL Version 11.2.0.1.0 - Production
    Please help to resolve this issue.
    Thanks,

    No, the NLS_CHARACTERSET is not different in the source and target database ... it is the same.
    But I set the NLS character set manually before running the export command, so that I can match the target database (because I don't have control over the target database).
    Source database NLS settings:
    NLS_LANGUAGE AMERICAN
    NLS_NCHAR_CHARACTERSET AL16UTF16
    NLS_TERRITORY AMERICA
    NLS_CURRENCY $
    NLS_ISO_CURRENCY AMERICA
    NLS_NUMERIC_CHARACTERS .,
    NLS_CHARACTERSET               WE8MSWIN1252
    NLS_CALENDAR GREGORIAN
    NLS_DATE_FORMAT DD-MON-RR
    NLS_DATE_LANGUAGE AMERICAN
    NLS_SORT BINARY
    NLS_TIME_FORMAT HH.MI.SSXFF AM
    NLS_TIMESTAMP_FORMAT DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT HH.MI.SSXFF AM TZR
    NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZR
    NLS_DUAL_CURRENCY $
    NLS_COMP BINARY
    NLS_LENGTH_SEMANTICS BYTE
    NLS_NCHAR_CONV_EXCP FALSE
    NLS_RDBMS_VERSION 10.2.0.4.0
    Target database NLS settings:
    NLS_LANGUAGE AMERICAN
    NLS_TERRITORY AMERICA
    NLS_CURRENCY $
    NLS_ISO_CURRENCY AMERICA
    NLS_NUMERIC_CHARACTERS .,
    NLS_CHARACTERSET               AL32UTF8
    NLS_CALENDAR GREGORIAN
    NLS_DATE_FORMAT DD-MON-RR
    NLS_DATE_LANGUAGE AMERICAN
    NLS_SORT BINARY
    NLS_TIME_FORMAT HH.MI.SSXFF AM
    NLS_TIMESTAMP_FORMAT DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT HH.MI.SSXFF AM TZR
    NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZR
    NLS_DUAL_CURRENCY $
    NLS_COMP BINARY
    NLS_LENGTH_SEMANTICS BYTE
    NLS_NCHAR_CONV_EXCP FALSE
    NLS_NCHAR_CHARACTERSET AL16UTF16
    NLS_RDBMS_VERSION 11.2.0.1.0
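
    To pin down exactly which tables disagree after the import, a quick per-table row count comparison over a database link helps. This is only a sketch; the database link name SRC_LINK and the schema/table names are placeholders:
    select 'MY_TABLE' as table_name,
           (select count(*) from my_schema.my_table@src_link) as source_rows,
           (select count(*) from my_schema.my_table) as target_rows
    from dual;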

  • How to use the DECODE function in an Expression?

    Hi,
    Can we use DECODE in an Expression?
    I'm trying to use the DECODE function but there is an error during validation. When I validate the mapping, though, it compiles successfully, but then it fails during deployment.
    If I use CASE instead of DECODE, it works fine.
    Can we use DECODE in OWB???
    Thanks
    Raj

    Hi,
    In OWB 10gR2, if you are using only one DECODE in an expression, it works. The package will compile when you deploy the mapping; OWB replaces the DECODE with a CASE.
    But when you use a nested DECODE in an expression (for example: decode(col1, 1, 'M', decode(col2, 'Madame', 'Mme', null))), only the first (outer) one is replaced by a CASE at deployment.
    In ROW_BASED mode the text of the expression is used outside of a SQL statement, and deployment fails with "PLS-00204: function or pseudo-column 'DECODE' may be used inside a SQL statement only."
    If the operating mode for the mapping is set to SET_BASED, it works because the expression is used inside a SQL statement.
    I have logged an SR in Metalink for this issue and a bug has been opened (bug 5414112).
    But I agree with you, it's better to use a CASE statement.
    Bernard
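
    To illustrate that advice, the nested DECODE above could be rewritten as a single CASE expression, which is valid in PL/SQL as well as SQL and so should work in both operating modes (the inline view just supplies sample values for demonstration):
    select case
             when col1 = 1        then 'M'
             when col2 = 'Madame' then 'Mme'
           end as title
    from (select 2 as col1, 'Madame' as col2 from dual);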

  • How can we use the DECODE function in a where clause?

    Hi Guys,
    I have to use the DECODE function in a where clause,
    like below:
    select * from tab1,tab2
    where a.tab1 = b.tab2
    and decode(code, 'a','approved')
    Written this way it is not accepted.
    Can anyone help me with this or suggest any other approach?
    Thanks
    -LKR

    >
    I am looking to decode the actual db value into something different for my report,
    like: if A then Accepted
    else if R then Rejected
    else if D then Denied
    and these conditions I have to check in the where clause.
    >
    What are you trying to do?
    Maybe you are looking for:
    select * from tab1,tab2
    where a.tab1 = b.tab2
    and
       (decode(:code, 'A','Accepted') = <table_column>
        or
        decode(:code, 'R','Rejected') = <table_column>
       or
        decode(:code, 'D','Denied') = <table_column>
       )
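
    Another way to express the same filter, assuming the table stores the single-letter code and the report passes in the description (the aliases, column names and bind name below are hypothetical), is a single CASE expression in the where clause:
    select *
    from tab1 a, tab2 b
    where a.col1 = b.col2
    and :code_desc = case b.status_code
                       when 'A' then 'Accepted'
                       when 'R' then 'Rejected'
                       when 'D' then 'Denied'
                     end;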

  • Using decode function without negative values

    Hi friends
    I am using oracle 11g
    I have a doubt regarding the following.
    create table Device(Did char(20),Dname char(20),Datetime char(40),Val char(20));
    insert into Device values('1','ABC','06/13/2012 18:00','400');
    insert into Device values('1','abc','06/13/2012 18:05','600');
    insert into Device values('1','abc','06/13/2012 18:55','600');
    insert into Device values('1','abc','06/13/2012 19:00','-32768');
    insert into Device values('1','abc','06/13/2012 19:05','800');
    insert into Device values('1','abc','06/13/2012 19:10','600');
    insert into Device values('1','abc','06/13/2012 19:15','900');
    insert into Device values('1','abc','06/13/2012 19:55','1100');
    insert into Device values('1','abc','06/13/2012 20:00','-32768');
    insert into Device values('1','abc','06/13/2012 20:05','-32768');
    Like this I am inserting data into the table every 5 minutes. Here I need the result like:
    output:
    Dname 18:00 19:00 20:00
    abc 400 -32768 -32768
    To retrieve this result I am using the DECODE function:
    SELECT Dname,
           MAX(DECODE(rn, 1, val)) h1,
           MAX(DECODE(rn, 2, val)) h2,
           MAX(DECODE(rn, 3, val)) h3
    FROM (SELECT Dname, Datetime, Val,
                 row_number() OVER (partition by Dname order by datetime asc) rn
          FROM Device
          WHERE substr(datetime,15,2)='00')
    GROUP BY Dname;
    According to the above data the expected result is
    Dname 18:00 19:00 20:00
    abc 400 600(or)800 1100
    This means I don't want to display negative values; instead of those values I want to show the previous or next value.
    Edited by: 913672 on Jul 2, 2012 3:44 AM

    Are you looking for something like this?
    select *
    from (select dname,
                 datetime,
                 val,
                 lag(val) over (partition by upper(dname) order by datetime) prev_val,
                 lead(val) over (partition by upper(dname) order by datetime) next_val,
                 case when nvl(val,0)<0 and lag(val) over (partition by upper(dname) order by datetime) >0 then
                   lag(val) over (partition by upper(dname) order by datetime)
                 else
                   lead(val) over (partition by upper(dname) order by datetime)
                 end gt0_val
          from device
          order by datetime)
    where substr(datetime,15,2)='00';
    Please take a look at the result column gt0_val.
    Edited by: hm on 02.07.2012 04:06
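
    As a follow-up sketch (not the poster's final solution), the cleaned-up value could be fed into the hourly pivot the original poster was building. Here the replacement rule is simplified to "take the previous 5-minute reading, or the next one if there is no previous", which is close to, but not identical to, the condition used above:
    select upper(dname) dname,
           max(decode(rn, 1, gt0_val)) h1,
           max(decode(rn, 2, gt0_val)) h2,
           max(decode(rn, 3, gt0_val)) h3
    from  (select dname,
                  gt0_val,
                  row_number() over (partition by upper(dname) order by datetime) rn
           from  (select dname,
                         datetime,
                         case when nvl(val,0) < 0
                              then nvl(lag(val)  over (partition by upper(dname) order by datetime),
                                       lead(val) over (partition by upper(dname) order by datetime))
                              else val
                         end gt0_val
                  from device)
           where substr(datetime,15,2) = '00')
    group by upper(dname);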

  • How to use decode function

    Hello Friends,
    I have a query that has different columns and I am not sure what the data type of each column is.
    I want to use the DECODE function to display the following if the value of the column is like this:
    if the value of the column is -7E14, i.e. -700000000000000, then display it as .A
    if the value of the column is -8E14, i.e. -800000000000000, then display it as .B
    NOTE: I don't know which column holds the negative value, and I don't know what the data type of the column is.
    I have to use the DECODE function for all the selected columns.
    Please let me know how to write the DECODE for this kind of requirement.
    I appreciate your help in this regard.
    thanks/kumar

    Dear Sir / Madam,
    Thanks for your understanding.
    Right now I am facing a unique problem.
    I am using the DECODE function in a select statement. The columns are dynamic; some columns are of type NUMBER and some are of character datatypes. I want to apply the DECODE function to every column irrespective of datatype.
    What I am finding difficult is that if the column is a NUMBER datatype, the DECODE function works fine, but if the column datatype is VARCHAR or CHAR then I get the error shown below.
    Here's the DECODE function:
    decode (CHAI.EVNDRIVE,
    -800000000000000,'.A',
    -700000000000000, '.B',
    -600000000000000 ,'.C',
    -500000000000000 , '.D',
    -400000000000000 , '.E',
    -300000000000000 , '.F',
    -200000000000000, '.G',
    -100000000000000 , '.H',
    -1000000000, '.R' ,CHAI.EVNDRIVE ) EVNDRIVE
    report error:
    ORA-01722: invalid number
    Please let me know how to overcome this kind of scenario.
    Is there a way to find out whether the column datatype is NUMBER or CHAR, so that I can check the column datatype before the DECODE function is applied?
    thanks ..
    kumar
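
    One possible workaround, offered only as a sketch: in DECODE the datatype of the first search value determines the comparison datatype, so quoting the search values and wrapping the expression in TO_CHAR keeps the whole comparison in character terms and avoids the implicit conversion to NUMBER that raises ORA-01722 on non-numeric text. This assumes the numeric values are stored as plain integers with no formatting; the inline view just supplies a sample value:
    select decode(to_char(evndrive),
                  '-800000000000000', '.A',
                  '-700000000000000', '.B',
                  '-600000000000000', '.C',
                  '-500000000000000', '.D',
                  '-400000000000000', '.E',
                  '-300000000000000', '.F',
                  '-200000000000000', '.G',
                  '-100000000000000', '.H',
                  '-1000000000',      '.R',
                  to_char(evndrive)) evndrive
    from (select '-700000000000000' as evndrive from dual);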

  • Using aggregate function count

    hi experts,
    I want to select 2 fields from a table and put them into a structure (gw_konzs), and along with that I want to use the aggregate function count(*) to know how many rows have been selected. I don't want to use SELECT ... ENDSELECT.
    Please help regarding this.
    for example:
    select konts ltext from ztab into corresponding fields of gw_konzs.
    select count(*) from kna1 into gw_konzs
    where konzs = gw_konzs-konzs.
    append gw_konzs to gt_konzs
    endselect.

    hi experts,
    I want to select 2 fields from a table and put into structure    and i want to select based on the where condition i dont want to use select & endselect
    please help regarding this ...........
    rewarded if useful
    for example:
    gw_detail & gw_konzs is structure
    gt_detail & gt_konzsis internal table
    select kunnr name1 from kna1 into corresponding fields of gw_detail where konzs = gw_konzs-konzs.
    gw_detail-konzs = gw_konzs-konzs.
    append gw_detail-konzs to gt_detail-konzs
    endselect.

  • Cloning using a different gcc version between source system and target system

    Hi All,
    Our system is: Application tier and Database tier is split to two servers.
    We need to run a clone, but I found different gcc versions on the application tier of the source system and the target system.
    The source application tier server is RedHat Linux ES3, gcc version is 3.2.3
    The target application tier server is RedHat Linux ES3, gcc version is 2.9.6
    There is the same gcc version on database tier on source system and target system, they are gcc 2.9.6.
    My question: can I use different gcc versions between the source system and the target system when I run an ERP cloning?
    Thanks & Regards
    Owen

    Not necessarily, you might get some errors if the version is higher and it is not supported by the OS. An example is Note: 392311.1 - usr/lib/gcc/i386-redhat-linux/3.4.5/libgcc_s.so: undefined reference to 'dl_iterate_phdr@GLIBC_2.2.4'
    To be on the safe side, make sure you have the same version (unless you want to give it a try).

  • Data mismatch between 10g and 11g.

    Hi
    We recently upgraded OBIEE from 10.1.3.4.0 to 11.1.1.6.0. While testing we found a data mismatch between 10g and 11g in a few reports that include a front-end calculated column with a division in it, for example ("- Paycheck"."Earnings" / COUNT(DISTINCT "- Pay"."Check Date")) / 25.
    The data is matching for the below scenarios.
    1) When the column is removed from both 10g and 11g.
    2) When the aggregation rule is set to either "Sum or Count" in both 10g and 11g.
    It would be very helpful and greatly appreciated if any workarounds or pointers to solve this issue could be provided.
    Thanks

    jfedynic wrote:
    The 10g and 11.1.0.7 Databases are currently set to AL32UTF8.
    In each database there is a VARCHAR2 field used to store data, but not specifically AL32UTF8 data but encrypted data.
    Using the 10g Client to connect to either the 10g database or 11g database it works fine.
    Using the 11.1.0.7 Client to go against either the 10g or 11g database and it produces the error: ORA-29275: partial multibyte character
    What has changed?
    Was it considered a Bug in 10g because it allowed this behavior and now 11g is operating correctly?
    29275, 00000, "partial multibyte character"
    // *Cause:  The requested read operation could not complete because a partial
    //          multibyte character was found at the end of the input.
    // *Action: Ensure that the complete multibyte character is sent from the
    //          remote server and retry the operation. Or read the partial
    //          multibyte character as RAW.
    It appears to me a bug got fixed.

  • Can We use FDM as ETL tool between SQL and Oracle

    I want to use FDM as an ETL tool between SQL Server and Oracle. Is this possible? I didn't find any target adapter for an Oracle database. My source system is SQL Server and my target system is an Oracle database.
    Rahul
    Edited by: user12190125 on Nov 9, 2009 4:23 AM

    Rahul,
    I believe this is possible to do, but not an easy one and there are a few considerations:
    How much data are you processing? FDM has a lot of features which support the business process. While this is great for users and audit trail etc. it slows down performance if you want to process a lot of data. It also depends on the type of mappings you use (Like mappings are slower than explicit mappings).
    How familiar are you with VBScript? There is no explicit target adapter for Oracle, but there is a data mart adapter which can be used for anything. You have to implement everything yourself though, mainly the Export and Load actions. In there you will also have to handle the connections to the MSSQL and Oracle databases.
    Check the data mart adapter and see if you feel comfortable with defining the vb code in there. There are reasons for and against this approach. ODI would probably be the better choice unless you really need to have FDM's process support.
    Regards,
    Matt

  • Color mismatch between CS4 and Canon PIXMA PRO6500

    Hi there! I am trying to research whether anyone else is having a problem with a color mismatch between CS4 and their printer. Last night I printed a .tiff image out of CS4 to my Canon PRO6500 printer and the sky came out purple / pink! It was supposed to be a stormy grey color. I have color management turned off in the printer settings. I have "CS4 manages color" turned on and the correct ICC profile selected for the paper I am using. When I do a soft proof in CS4 the image looks fine, but when I print I get something completely different, mostly in the sky (purple / pink) color! Just for the heck of it I did a Print Preview under my Page Setup and guess what, the sky actually shows purple / pink! So I have no idea what's going on. I checked all my settings and ink cartridges, and tried different paper with the appropriate ICC profile, and no dice. I still have a goofy looking sky! I thought PROOFING the image is supposed to show you how the printed image is going to look? Apparently it doesn't, because it's not. The Print Preview has an accurate representation of the print, but I have no idea where the pink / purple sky is coming from. They are supposed to be stormy clouds, as I said. I appreciate any help anyone can provide.
    Sabciu

    I have the color management turned off on the printer settings.
    But do you have a measured color profile based on a sample print? Most likely not, so this is not doing anything in your favor. The current profile may represent the defaults for a given printer, but not the actual density based on your other printer settings. So by all means, unless you can be sure to obey a full calibration throughout, turn on the printer CM. Also, since it is a Pro model, you may actually have to use the dedicated RIP software that was bundled with it to get correct results, not the default Windows print engine.
    Mylenium

  • Contrast mismatch between Organizer and Editor

    I can edit an image in Editor to where it looks great, including the contrast, brightness, shadows, etc.  When I send it back to the Organizer, it looks much too contrasty and dark.  When I use the same images in a slideshow originating in Organizer, they still look too contrasty.  Is there a way to correct this?  I'm using PSE 10 on a Windows 7 64-bit machine. Thanks for any ideas.

    Thanks for your quick response.  I just checked out the settings in both the Editor and Organizer and they're both set on "let me decide".  I'm not sure what to do on that one.  I could go with optimizing the computer screen with sRGB, or the Adobe RGB for the printer, or no management, or decide on the fly.  If I go with the computer, I guess I'd have to change it when I print a photo out, and vice versa.  Have any suggestions for me? Thanks again. 
    New update.  I have a Dell color laser printer, and I just checked the manual.  It offers both sRGB and AdobeRGB.  My priority is really to have a good printout.   If I check sRGB I should be safe for either print or computer, but I'm wondering if I'd get as good a printout with sRGB as with AdobeRGB.  The easiest might be to see what the printer is currently set on, and then set PSE Organizer and Editor to match it.  I'll do that, and then see if that helps the mismatch between Organizer and Editor.

  • Mismatch between availability and marker

    I have run the following command:
    /repvfy -detail -fix -pwd password
    and am seeing a section called "Mismatch between availability and marker". What does this mean? I know there is a bug out from Note: 735957 against 10.2.0.4 and that patch 6951116 is available. I also know this bug is fixed in 10.2.0.5. But I don't know what exactly this section is telling me. Here is the output I am seeing:
    6. Mismatches between availability and marker
    TARGET_GUID TARGET_NAME TARGET_TYPE MARKER_AVAIL_STATUS MARKER_TIME CURRENT_STATUS START_TIME
    9CB8EE19415C3C6B55954BF3346C1106 ssan-Internal Concurrent Manager oracle_apps_icm 6 09-JUN-2009 10:41:50 1 28-MAY-2009 14:25:22
    1B21AAEF5AC63DD44FE339B7852EA1EE ssan-Report Server_ssan_devstar2 oracle_apps_reports 6 09-JUN-2009 10:41:50 1 28-MAY-2009 14:25:13
    F5EAD749526EC1116A70157771E3D011 ssan-Oracle Workflow Agent Listener oracle_apps_wfalsnr 6 09-JUN-2009 10:41:50 1 28-MAY-2009 14:25:29
    CD32714673BC508CDBF85BF191A5A1B7 ssan-Workflow Background Engine oracle_apps_wfbg 6 09-JUN-2009 10:41:50 1 28-MAY-2009 14:25:26
    FADA1D6BF4DFD0E8AA1739A6666BA210 SSAN oracle_database 6 09-JUN-2009 10:41:50 1 28-MAY-2009 14:25:40
    0BE2ACE2EDC7271744DA1A249E30C9A5 APPS_ssan oracle_listener 6 09-JUN-2009 10:41:50 1 28-MAY-2009 14:25:12
    89D241EEEF24821E30AD66741BC3D433 devstar2.rtd-denver.com oracle_listener 6 09-JUN-2009 10:41:50 1 28-MAY-2009 14:25:20
    F40BAF501645A5957A498E7A270AEA47 ssan-Apps Listener_ssan_devstar2 oracle_listener 6 09-JUN-2009 10:41:50 1 28-MAY-2009 14:25:28
    8 rows selected.

    Various provisions like freight, transport etc. should go to separate GL accounts. If you give a wrong account assignment, the value will be posted to GR/IR and then the difference will happen.
    sunoj

  • Mismatch between sound and video

    When I watch a movie in iTunes on Apple TV (2nd gen) and I want to skip back a few seconds or minutes, it's like it "loses its mind":
    the video jumps forward (a lot forward) while the sound stays where it was, so there is a mismatch between the sound and the video.
    What can I do to solve this problem?
    Thank you!

    Am I the only one suffering this issue with the video transcoding?

Maybe you are looking for

  • How can I import my keyboard shortcuts from my iPhone to my new MACBOOK pro?

    Now I'm guessing that 99% of you don't use this feature on iPhone or Macbook so I'm curous to know who knows the answer I recruit a lot and have canned texts and emails (which means I usually say the same thing) in my iPhone I set it up (by going to

  • AppleTV as second display for Macbook

    Hello! I just received a new Apple TV (2nd Gen) as a gift. I like to use the Macbook to stream online content (i.e. football games, Hulu). I connect my computer to our 27" television and use the TV as a second monitor. The issue: My TV has one HDMI p

  • Swatch Library Panel - Accessing Pantone colours

    Hi I'm using CS3 Illustrtaor and want to access the Pantone library but it doesn't appear in the list of Swatch Libraries.  I have access to the Pantone library in CS3 Photoshop, In Design etc. so how do I access it from Illustrator? Thanks

  • Error  when using Runtime Workbench

    HI all i am getting following error when iam trying to use the Component monitoring and others in work bench.. Regards Buddhike *An error occurred:com.sap.aii.rwb.exceptions.BuildLandscapeException: Error during communication with System Landscape Di

  • Cannot open/launch Mail and iPhoto following Mac OSx 10.6.8 upgrade

    Please, help me. i installed 10.6.8 update and now i cannot launch mail and iPhoto apps i dont know whats happend. I install iOs5 beta 3 on my iPad this day. please help me if you know.