BI Admin Physical source data
Hi All,
The source data on our database server has changed; for example, a column was renamed and another was added.
However, when I connect to the source through the connection pool in the BI Administration Tool, the changes are not showing for some reason. I cleared all cache from Manage > Cache, but the changes are still not showing.
Any idea? We use OBIEE 11g
Thanks
Don
Restart the services.
Thanks
Diney
Similar Messages
-
Avoiding Rebuild of RPD for Change in Physical Layer Data Source.
Hi,
I would like to know whether a change in the physical layer data source will require a rebuild of the RPD.
Previously our client was using an Excel spreadsheet as the source for the OBIEE 11g RPD in Phase 1 of the project. The subsequent work for that phase (design of the BMM layer, Presentation layer, subject areas, reports, dashboards, etc.) is complete on top of it.
Now, in Phase 2, they would like to use an Oracle database as the source, with the same table and column structure as the MS Excel spreadsheet.
We have created a connection of type OCI 10g/11g and successfully imported the Oracle database metadata into the physical layer.
I was wondering whether it is possible to simply redirect the physical layer source from MS Excel to the Oracle database, keeping the rest of the components (including the BMM layer, etc.) intact.
It would be a pretty tedious job to rebuild all layers of the RPD every time the underlying database changes.
Please advise how this is possible. Many thanks!
I think we can do some testing, assuming you have created aliases for all tables.
Try checking View Data after changing the source name of an alias table to the Oracle source name in the physical layer.
Also check the data types of the columns.
If these two checks pass, we can avoid rebuilding.
Please post your comments. -
Changing the LTS and physical source of the presentation fields
I am using the 11.1.1.6.2 OBIEE Administration Tool. In the presentation layer I have 3 different subject areas with multiple folders under different names. I exposed the same field, under the same name, in multiple folders across the subject areas; in the example below, the same field appears in Subject1 Folder-A, in Subject2 Folder-C, and so on. If I change the physical source of these fields to a new LTS and a new logical column, how do I make this change propagate to all the fields? Or should I go field by field, deleting the old field and adding the new one?
Please reply with a solution.
Example:
Subject1
Folder-A
*field1
*field2
*field3
Folder-B
*field4
*field5
*field6
Subject2
Folder-C
*field1
*field2
*field3
*field4
Folder -D
Folder - E
Subject3
Folder –F
*field1
*field2
*field3
Folder2
Folder3
If the same logical column is exposed under different names in the Presentation layer, then you can make the change once and it will be reflected across the Presentation layer, since all those presentation columns source from a single logical column.
If you change the physical source of these fields with a new LTS and a new logical column, then you need to update the Presentation layer manually.
If the changes are confined to the same logical column within the existing logical table, then no further changes are needed. -
Inventory management and physical inventory data transfer
hi all,
Can anyone please provide me with an inventory management and physical inventory data transfer tutorial or link?
Points are guaranteed.
Regards
The information behind the blue button for MI34 and MI38 (as you mentioned) does not have enough detail; it's basically one page. Is there another instructional source available? How is the logical file MMIM_PHYSICAL_INVENTORY_DOCUMENTS tied to the physical file? Can you clarify? I am not sure how to determine where the sequential file being processed needs to be located. Thanks!
-
Source Data from OLE DB staging area to Import for FDM
Hi! Please help me out with the integration of Oracle into FDM.
I want to load data into FDM directly from an Oracle database through an integration script.
How do I take source data from an OLE DB staging area into an Import for FDM?
My main intention is to import data into FDM from the Oracle DB with no manual intervention, so that it is safe. We take actual data through Oracle Apps and keep that data in the Oracle DB for further processing.
FDM can also import data from an Oracle database source, so I need a procedure/script to import data from the Oracle database staging area where we keep some data for HFM, to enter the data on a monthly basis.
OLEDB -----> Import into FDM -------> Export to HFM (procedure/script without manual intervention)
Here in OLEDB the data arrives cumulatively, from a day or week up to a month, for input into the HFM application (Actual).
Regards and Thanks.
Edited by: user3557428 on 10-Dec-2008 00:05
You will want to look at the FDM Admin Guide.
Look for Integration Import Scripts.
There are a few pages that cover what you are trying to do. If you do not know VBScript, it might be beneficial to tap someone who does, or brush up on ADO.
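FDM's actual integration import scripts are VBScript using ADO, as the Admin Guide describes. Purely as an illustration of the staging-pull pattern involved, here is a Python sketch with sqlite3 standing in for the OLE DB staging area; the table and column names are hypothetical:

```python
import sqlite3

def pull_staged_rows(conn, period):
    """Pull the rows staged for one period from a hypothetical 'staging' table."""
    cur = conn.execute(
        "SELECT entity, account, amount FROM staging "
        "WHERE period = ? ORDER BY account",
        (period,),
    )
    return cur.fetchall()

# In-memory database standing in for the OLE DB staging area
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staging (period TEXT, entity TEXT, account TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO staging VALUES (?, ?, ?, ?)",
    [("2008-12", "E100", "A001", 10.0),
     ("2008-12", "E100", "A002", 20.0),
     ("2008-11", "E100", "A001", 5.0)],
)
rows = pull_staged_rows(conn, "2008-12")
print(rows)  # [('E100', 'A001', 10.0), ('E100', 'A002', 20.0)]
```

The real script would return these rows to FDM's import engine instead of printing them; the point is simply that the import reads only the rows for the selected period, with no manual file handling.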
Hope this helps
Wayne Van Sluys
TopDown Consulting -
Source Data file format for FDM
Hi,
I am new to FDM and have a couple of questions on data loading. We are using FDM to load data into Essbase. Please help me with these:
Question 1:
In the source data file, suppose we have around 300 account members and around 100 PO numbers. There are multiple 'PO number' records for each account member.
As per our requirement, we need to sum up all the values in the 'PO number' column for each account.
For example, suppose we get a data file as below. The important point is that I don't have a dimension for 'PO number' in Essbase. I get the accounts data corresponding to each 'PO number', but there is no dimension in Essbase to store data at the 'PO number' level.
Account PO number Actual Estimated
A001 P0123_10 10.00 15.00
A001 P0123_20 20.00 25.00
A002 P0123_30 30.00 35.00
A002 P0123_40 40.00 45.00
A002 P0123_50 50.00 55.00
A003 P0123_60 60.00 65.00
A003 P0123_70 70.00 75.00
A003 P0123_80 80.00 85.00
and so on
Now we want to sum up the data for each Account member i.e
Account Actual Estimated
A001 30.00 40.00
A002 120.00 135.00
and so on ---
As per my understanding, while defining the 'Import format' for this source data file, we will not include the column for 'PO number' and will define the mapping with the merge option. Hence, multiple records for each account will be accumulated.
Please advise: is my understanding correct, or how should this type of scenario be handled?
Question 2:
Suppose we have data for multiple scenarios (i.e. Actual, Estimated, etc.) in one Excel sheet, which will be used in FDM as a source file. As per the admin guide, we have a POV for Category, and data will be loaded for the selected POV only. In this case, will FDM consider the data for each selected POV scenario, or how will FDM function here? Do we need separate files for each scenario?
Please help me solve these problems.
Thanks & Regards,
Mohit Jain
QUESTION 1: If you ignore the PO field in the import, you will get the accumulating behaviour you describe. I am not sure what you mean by using merge in the mapping: the default behaviour of FDM is to accumulate all non-unique lines following the mapping process, so you don't have to set anything.
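The accumulation can be illustrated outside FDM with a short Python sketch over the sample rows from the question (this is not FDM code; it just mimics ignoring the PO field and summing non-unique lines):

```python
from collections import defaultdict

# Hypothetical source rows: (account, po_number, actual, estimated)
rows = [
    ("A001", "P0123_10", 10.0, 15.0),
    ("A001", "P0123_20", 20.0, 25.0),
    ("A002", "P0123_30", 30.0, 35.0),
    ("A002", "P0123_40", 40.0, 45.0),
    ("A002", "P0123_50", 50.0, 55.0),
]

totals = defaultdict(lambda: [0.0, 0.0])
for account, _po, actual, estimated in rows:  # PO number ignored, as in the import format
    totals[account][0] += actual
    totals[account][1] += estimated

print(dict(totals))  # {'A001': [30.0, 40.0], 'A002': [120.0, 135.0]}
```

The result matches the summed figures the question expects for A001 and A002.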
QUESTION 2: Data will only be loaded to the selected Category in the POV. If you have data in the file that needs to be targeted at a different scenario, you will need a separate load file containing that data, and you choose the alternate category in a separate manual workflow process. -
Mix source data with different granularity into the same fact table?
I have two transaction tables, "Incident" (157 columns) and "Unit" (70 columns). For every Incident that happens there can be one or more records in the Unit table.
As part of my data mart design, I have merged both tables into a single fact, "Incident Fact" (227 columns), and inserted the records from both tables with a join condition between them [incident.IN_NUM = Unit.IN_NUM].
My question is: is this correct, or am I mixing source data with different granularity in the same fact table? I appreciate your help.
Best Regards
Bees
Bees,
Are the measures from Incident repeated for a given incident where it has more than one record in the Unit table? If so, then sum(incident.measure) will give an incorrect result, no?
What requirement is there to physically merge the tables together outside of OBIEE? Within OBIEE you could have one logical fact table to present to report users, sourced from separate Incidents and Units tables, which would stop the incorrect aggregations from occurring. A common modelling pattern of the same kind is Order Headers and Order Lines: it is quite common in OBIEE to have a logical 'Orders' fact containing both order header measures and order line measures, and this translates directly to your Incidents -> Units relationship.
Doing what I've described is relatively straightforward: create a 'Dim - Incident' with two levels, Incident and Unit, map the unique identifiers in as the level keys, and then use these levels to set the content levels correctly in your two logical table sources for the logical fact, i.e. the Incidents LTS at the Incident level and the Units LTS at the Unit level.
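The double counting described above is easy to reproduce; here is a minimal sketch (hypothetical table and column names, with sqlite3 standing in for the warehouse):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE incident (in_num INTEGER, damage REAL);
CREATE TABLE unit (in_num INTEGER, unit_id TEXT);
INSERT INTO incident VALUES (1, 100.0), (2, 50.0);
INSERT INTO unit VALUES (1, 'U1'), (1, 'U2'), (2, 'U3');
""")

# Summing the incident-level measure over the merged (incident x unit) grain
merged = conn.execute(
    "SELECT SUM(i.damage) FROM incident i JOIN unit u ON i.in_num = u.in_num"
).fetchone()[0]
# The true total at the incident grain
correct = conn.execute("SELECT SUM(damage) FROM incident").fetchone()[0]
print(merged, correct)  # 250.0 150.0 -- incident 1's measure is counted once per unit
```

Incident 1 has two units, so its measure is duplicated in the merged fact, which is exactly why the content levels on the two logical table sources matter.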
Hope this helps, let us know if you get stuck.
Cheers
Alastair -
Query regarding using multiple physical sources
Hi All,
I am facing an issue fetching data from multiple physical sources in OBIEE.
We have 2 fact tables on different databases, with the same columns.
How can we get results from both tables in the presentation layer?
I created one logical fact table and added both sources to it, but results are displayed only from the logical table source I added first.
OBIEE version:
10.1.3
OS Windows
Thanks,
Nik
Hi Nikhil,
If you have identical columns in both sources, OBIEE will always choose only one LTS. To understand how to force multiple LTSs into a query, check this thread:
2 table sources in LTS but only 1 in query?
Regards,
Dpka -
Creation of DME medium FZ205 There is no source data found
We are executing payment runs using F110 and then creating data medium - a file to send to the bank.
In the variant for the program I am entering C:\; however, when several users execute payment runs at the same time, the data medium is not created and I get the error message that the source data cannot be found.
Can anyone help me with this issue? Should I leave the file name blank?
Thanks
Liz
Hello,
In order to avoid FZ205, please review your selection parameters and the F1 help for the print program when creating the file:
1. If you are taking the Output to file system:
If required, the file can be written to the file system. The created file can be copied to a PC using data medium exchange management. You should look for downloaded files here, since the data carrier is not managed within the SAP system but is already stored in the file system by the payment medium program. The file name should be defined by the user. You should make sure that existing files with the same name have already been processed, because they will be overwritten.
Note: If a file cannot be found using data medium exchange management, the reason could be that the directory that was written to at the start of the payment medium program (in background processing, for example) cannot be read online.
You should then select a directory which can be written and read by several different computers. Due to the problems described above and the resulting lack of data security, we advise against writing to the file system. This method is only beneficial if the data carrier file is taken from the file system by an external program to be transferred to the bank.
2. If you are taking Output into TemSe:
If required, the created file can be stored within the SAP system (in the TemSe rather than the file system), thus protecting it from unauthorized external access. You can download the file into the user's file system via the DME manager. The name of the file to be created during the download can be determined when running the payment medium program: the contents of the file name parameter are stored in the management data and defaulted when running the download.
Please check the corresponding files in the DME administration and verify whether the output medium 'File system' has been chosen, i.e. output medium '0'. In order to use the TemSe you have to use output medium '1'. Furthermore, check whether PC file paths such as c:\filename.DAT were used instead of application server file names; FDTA has difficulty finding such files, especially when two application servers are in use.
To avoid problems with the files, SAP recommends using the TemSe with output medium '1', or the file system with output medium '0'. TemSe is always the better option.
I hope this helps.
Best regards,
Suresh Jayanthi. -
[APEX 3] Requested source data of the report has been modified
Hello APEX-Friends,
I have a common problem, but the situation here is a bit different. Many of you might know the "invalid set of rows requested, the source data of the report has been modified" problem. It often occurs on submit: you have a report, you select rows, you do things, you submit the page, and everything blows up.
This happens because you enter values into fields the report depends on, thereby modifying your report parameters, and so the source data changes.
But:
In my case I have a dynamically created report that blows up before any submit occurs or any value changes.
My query is a union of two selects. Both query different views. Those views use a date field as a parameter, along with some compare functions.
I read the field with a V function I wrapped around the APEX V function, declared as deterministic. My date compare function is also declared deterministic (I doubt this makes any difference, as it is probably only relevant to the optimizer, but as long as I don't know exactly what APEX evaluates, I play it safe).
I ensured that the date field is set by default to the current date (and that works, because my interactive report initially displays correct data for the current date).
So everything is deterministic and the query must return the same results on subsequent calls, but APEX still throws this "source data has changed" error, and I am 99.99% sure that this cannot be true.
And now the awesome thing about this:
If I change the value of the date field, a JavaScript handler performs a submit. The page is reloaded (without resetting pagination!) and everything works fine. I can leave the page, re-enter, do things: everything works well.
But if I log into the application, move directly to the corrupted report, and try to use the pagination without editing fields or submitting the page, the error occurs.
Do you have any idea what's happening there? I could work around this by submitting the page the first time it is entered, to trigger this "mystery submit" that gets everything working. But I would like to understand the issue and have a clean solution.
Thanks in advance,
Mike aka UniversE
Okay, I found a solution, but I do not understand it; it might be a design flaw in APEX.
I mentioned the date field that is used in the query. I also mentioned that it is set with the current date by default. I did not mention how.
There are some possibilities in APEX to do so.
1. Default-Setting in the element properties
2. Static assignment if no value is in session cache
3. Computation before header
I did the first and second.
BUT:
An interactive report seems to work as follows: a query is executed to get all rows of the report, then a second query is executed to get the rows that shall be displayed. And the order is screwed up, I think:
1. The first report query, to get all rows
2. The elements are loaded and set to default values
3. The second report query, to get the display rows
And that's the reason why nothing worked. Since I added a computation before header, the date field is now set before the report queries are executed, and everything works fine.
But I think it's a design flaw. Either both queries should be executed before regions are rendered, or both afterwards, but not split, as field values might change when elements are loaded.
Greetings,
UniversE -
User View is not reflecting the source data - Transparent Partition
We have transparent partition cubes. We recently added the new fiscal year details to the cube (the user view as well as the source data cube). We loaded the data into the source data cube. When we tried to retrieve data from the user view, it shows 0s, but the data is available in the source data cube. Could anyone please provide information on what might be the issue?
Thanks!
Hi,
If you haven't added the new member in the partition area, then Madhvaneni's advice is the one you should follow, because if you haven't added the member, the target can't read the source.
If you have already added the new member in the partition area and the data still won't show up, it is sometimes worth re-saving the partition and seeing the outcome.
-Will -
How to deal with such Unicode source data in BI 7.0?
I encountered an error when activating DSO data. It turned out that the source data is Unicode in the HTML representation style. For example, the source character string is:
ABCDEFG& #65288;XYZ (I added a space between & and # so that it won't be interpreted as Unicode on SDN by the web browser)
After some analysis, I see it's actually the Unicode string
ABCDEFG（XYZ
Please notice the wide left parenthesis. It's the actual character for the HTML &#xxx style above. Compare the Unicode parenthesis '（' with the ASCII one '(' : you can see they are different.
My question is: as I have trouble loading the &#... string, I think I should translate the string to the actual Unicode character (like '（' in this case). But how can I achieve this?
Thanks!
Message was edited by:
Tom Jerry
I found this is called a "numeric character reference", or NCR, in HTML terms. So the question is how to convert a string in NCR form back to Unicode. Thanks.
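Outside SAP, resolving NCRs is a one-liner with Python's standard library; this is shown purely as an illustration of the conversion being asked about (in BW you would need an equivalent replacement in a transformation routine):

```python
import html

# "&#65288;" is the NCR for U+FF08, the fullwidth left parenthesis
s = "ABCDEFG&#65288;XYZ"
decoded = html.unescape(s)  # resolves numeric character references to real characters
print(decoded)  # ABCDEFG（XYZ
```

`html.unescape` handles decimal (`&#65288;`) and hexadecimal (`&#xFF08;`) references as well as named entities.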
-
There is no source data for this data record, Message FZ205
Hi Experts,
I am facing a problem with the DME file download. The problem appeared all of a sudden in our production system last month and never occurred before. Our system landscape has not changed either, although as per our Basis consultant, he has added two or three new application servers to the production client. We do not have this problem in our testing clients.
Please note that we have been using output medium '1' from day one, and thus the system has been generating the DME in the 'File System', which we download to the desktop and upload to the bank online. After running the payment run, when we try to download the DME file, the system gives the error "There is no source data for this data record, Message FZ205".
I tried to fix this issue in many ways but was not able to. Can you please let me know the reason for this error and a solution to fix it?
With best regards,
BABA
Hi Shailesh,
Please share how you solved this problem.
Many Thanks,
Lakshmi -
Error during data load due to special characters in source data
Hi Experts,
We are trying to load billing data into BW using the billing item DataSource. It seems that there are some special characters in the source data. When a record with these characters is encountered, the request turns red and the package is not loaded even into the PSA. The error we get in the monitor is something like
'RECORD 5028: Contents from field **** cannot be converted into type CURR',
where the field **** is a key figure of type currency. We managed to identify the offending record in RSA3 on the source system and found that one of the fields contains some invalid (special) characters that show up as squares in RSA3. The data in the rest of the fields, including the fields mentioned in the error, looks correct.
Our source system is a non-Unicode system whereas the BW system is Unicode-enabled. I figure that the data in the rest of the fields is getting misaligned due to the presence of the invalid characters in the field above. This was confirmed when we unassigned the field with the special characters from the transfer rules and removed the source field from the transfer structure: after doing this, the data was loaded successfully and the request turned green.
Can anyone suggest a way to either filter out such invalid characters from the source data or make some settings in the BW system so that the special characters are no longer invalid there? We cannot write code in the transfer rules because the data package does not even reach the PSA. Is there any other method to solve this problem?
Regards,
Ted
Hi Thad,
I was wondering: whether the system is Unicode or non-Unicode should not matter for the amount and currency fields, since currencies are defined by SAP and the currency code part (3 characters) is pure English.
Could this be because of some inconsistency in the data?
I would like to know which currency had the special characters in that particular record.
Hope that helps.
Regards
Mr Kapadia -
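As an aside to the thread above, the kind of character cleansing Ted asks about can be sketched outside SAP. This is a generic Python illustration, not ABAP transfer-rule code, and the exact definition of "invalid" would depend on the target system:

```python
def strip_invalid(text: str) -> str:
    """Drop characters that are not printable, keeping tabs and newlines."""
    return "".join(ch for ch in text if ch.isprintable() or ch in "\t\n")

# Control characters such as NUL or BEL are removed; normal text is kept
print(strip_invalid("100\x00 EUR\x07"))  # 100 EUR
```

In a real BW scenario this kind of filter would have to run upstream of the load (e.g. in the extractor or a pre-processing step), since, as the thread notes, the bad package never even reaches the PSA.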
Using sqlldr when source data column is 4000 chars
I'm trying to load some data using sqlldr.
The table looks like this:
col1 number(10) primary key
col2 varchar2(100)
col3 varchar2(4000)
col4 varchar2(10)
col5 varchar2(1)
... and some more columns ...
For current purposes, I only need to load columns col1 through col3. The other columns will be NULL.
The source text data looks like this (tab-delimited) ...
col1-text<<<TAB>>>col2-text<<<TAB>>>col3-text
more-col3-text
more-col3-text
more-col3-text
more-col3-text
more-col3-text
END-OF-RECORD
There's nothing special about the source data for col1 and col2.
But the data for col3 is (usually) much longer than 4000 chars, so I just need to truncate it to fit varchar2(4000), right?
The control file looks like this ...
LOAD DATA
INFILE 'load.dat' "str 'END-OF-RECORD'"
TRUNCATE
INTO TABLE my_table
FIELDS TERMINATED BY "\t"
OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(col1 "trim(:col1)",
col2 "trim(:col2)",
col3 char(10000) "substr(:col3,1,4000)")
I made the column 3 specification char(10000) to allow sqlldr to read text longer than 4000 chars.
And the subsequent directive is meant to truncate it to 4000 chars (to fit in the table column).
But I get this error ...
Record 1: Rejected - Error on table COL3.
ORA-01461: can bind a LONG value only for insert into a LONG column
The only solution I found was ugly.
I changed the control file to this ...
col3 char(4000) "substr(:col3,1,4000)"
And then I hand-edited (truncated) the source data for column 3 to be shorter than 4000 chars.
Painful and tedious!
Is there a way around this difficulty?
Note: I cannot use a CLOB for col3. There's no option to change the app, so col3 must remain varchar2(4000).
You can load the data into a staging table with a CLOB column, then insert into your target table using substr, as demonstrated below. I have truncated the data display to save space.
-- load.dat:
1 col2-text col3-text
more-col3-text
more-col3-text
more-col3-text
more-col3-text
more-col3-text
XYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
END-OF-RECORD
-- test.ctl:
LOAD DATA
INFILE 'load.dat' "str 'END-OF-RECORD'"
TRUNCATE
INTO TABLE staging
FIELDS TERMINATED BY X'09'
OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(col1 "trim(:col1)",
col2 "trim(:col2)",
col3 char(10000))
SCOTT@orcl_11gR2> create table staging
2 (col1 varchar2(10),
3 col2 varchar2(100),
4 col3 clob)
5 /
Table created.
SCOTT@orcl_11gR2> host sqlldr scott/tiger control=test.ctl log=test.log
SCOTT@orcl_11gR2> select * from staging
2 /
COL1
COL2
COL3
1
col2-text
col3-text
more-col3-text
more-col3-text
more-col3-text
more-col3-text
more-col3-text
XYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
1 row selected.
SCOTT@orcl_11gR2> create table my_table
2 (col1 varchar2(10) primary key,
3 col2 varchar2(100),
4 col3 varchar2(4000),
5 col4 varchar2(10),
6 col5 varchar2(1))
7 /
Table created.
SCOTT@orcl_11gR2> insert into my_table (col1, col2, col3)
2 select col1, col2, substr (col3, 1, 4000) from staging
3 /
1 row created.
SCOTT@orcl_11gR2> select * from my_table
2 /
COL1
COL2
COL3
COL4 C
1
col2-text
col3-text
more-col3-text
more-col3-text
more-col3-text
more-col3-text
more-col3-text
XYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
1 row selected.