Line Graph Issue - In BIP, values are summed for the same date

Hi
I have the data below:
Date-----------------------Sell
27-Mar-13----------------10
28-Mar-13----------------15
28-Mar-13----------------20
30-Mar-13----------------25
There are two entries on 28-Mar-2013. On the BIP line graph, the Sell values are added together and the graph shows 35 for 28-Mar-2013.
I want the line graph to show the two values separately for 28-Mar-2013.
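A possible workaround, sketched below on the assumption that the report's data set comes from a SQL query (the SALES table and column names are hypothetical), is to give each row within a date its own series key so the chart no longer aggregates them:

-- Number the rows within each date so the two 28-Mar-13 entries stay distinct.
SELECT sale_date,
       sell,
       ROW_NUMBER() OVER (PARTITION BY sale_date ORDER BY sell) AS entry_no
FROM   sales
ORDER  BY sale_date, entry_no;

Using entry_no as the chart series (or appending it to the date label) keeps the 15 and 20 as separate points instead of a single 35.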

Hi 913804,
I am amazed you found the browser certification for OBIEE 11.1.1.6; I could not find it anywhere in the documentation nor from Metalink personnel.
Regarding the graphs not showing, we found this to be an issue with IE and not with Firefox. First make sure you have followed the upgrade step listed in
3.9.11.2 Updating Oracle Business Intelligence Catalogs at http://docs.oracle.com/cd/E23943_01/doc.1111/e16793/patch_set_installer.htm#PATCH250. The last resort was to recreate the graph component in each report from scratch. As you say, pie charts are unaffected.
Regards,
Nick.

Similar Messages

  • EHP6 to EHP7 Upgrade Issue: Junk Values are showing for BW data source

    Hi,
      We are upgrading ECC from EHP6 to EHP7. We have a custom DataSource built on a custom InfoSet, and the InfoSet is built on the pooled table ZXXX. The DataSource shows junk values in RSA3 after the upgrade. I have regenerated the InfoSet and tested again, but to no avail. Below is the RSA3 data screenshot. Can anyone suggest how to overcome this?
    Thanks,
    Prasad

    Hi Prasad,
    A few things to check:
    1. The pooled table data for which the DataSource is created.
    2. Reactivate the extract structure, and also check the data element for the date.
    3. Is there any custom code in the InfoSet? If yes, the dates may be calculated in that custom code using the wrong format.
    Thanks
    Amit

  • By default in BPC, values are summed at node level; it does not make sense to sum averages on parent nodes.

    Hi Guys,
    We have a business requirement to store data as average figures.
    The values are entered at the base member level, but in reports we need to show average values at the parent level instead of the sum
    (by default in BPC, values are summed at node level), and it does not make sense to sum averages on parent nodes.
    We managed it with Excel formulas, but we are facing a problem with dimensions that are in the context.
    (I tried using dimension member formulas, but the system does not accept the AVG keyword in member formulas.)
    In the end we managed the report by using formulas, but I am worried about how we will manage when there are no formulas.
    Please suggest a solution for showing average values at the node level in specific reports.

    Hi Krao,
    Please define what you mean by average: arithmetic average or weighted average (which makes more business sense, to my mind).
    Also, please provide some samples explaining the dimensions used in the average calculation.
    In our case we use dimension member formulas to show average prices, discounts etc. in the reports.
    B.R. Vadim
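    Vadim's distinction is easiest to see with numbers. Purely as an illustration outside BPC, here is a SQL sketch over a hypothetical FACT_SALES table (parent_node, avg_price and quantity are made-up columns) showing why a weighted average at the parent level differs from simply summing the base-level averages:

    -- sum_of_averages is what a plain roll-up does; weighted_average is usually what the report should show.
    SELECT parent_node,
           SUM(avg_price)                            AS sum_of_averages,
           SUM(avg_price * quantity) / SUM(quantity) AS weighted_average
    FROM   fact_sales
    GROUP  BY parent_node;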

  • Using MISSING FIELD VALUES ARE NULL for external table

    I want to place a null for values missing in the sub_account field. Here is my external table:
    CREATE OR REPLACE DIRECTORY INCOMING_ORDERS_log_dir
    AS 'c:\starpubs\starpubs\dataformats\logs\INCOMING_ORDERS\log';
    CREATE OR REPLACE DIRECTORY INCOMING_ORDERS_bad_dir
    AS 'c:\starpubs\starpubs\dataformats\logs\INCOMING_ORDERS\bad';
    create table ext_INCOMING_ORDERS_table (
    Account varchar(5),
    Sub_Account varchar(1),
    Override_Code varchar(1),
    Nomenclature varchar(28),
    chg_nbr varchar(3),
    quantity integer,
    U_I varchar(5),
    zipcode varchar(5),
    type_reject varchar(2)
    organization external
    type oracle_loader
    default directory user_dir
    access parameters
    records delimited by newline
    missing field values are null
    badfile INCOMING_ORDERS_bad_dir:'INCOMING_ORDERS%a_%p.bad'
    logfile INCOMING_ORDERS_log_dir:'INCOMING_ORDERS%a_%p.log'
    fields
    Account(1:5) char(5),
    Sub_Account(7:7) char(1),
    Override_Code(10:10) char(1),
    Nomenclature(11:38) char(28),
    chg_nbr(40:42) char(3),
    quantity(44:48) integer external,
    U_I(50:54) char(5),
    zipcode(56:60) char(5),
    type_reject(61:62) char(2)
    location('PTCLICK.MANUAL.NOMEN.TXT','PTCLICK.ORDERS.TXT', 'EUR_RES.TXT', 'MQ.TXT', 'BPRO.TXT')
    reject limit unlimited;
    How can I use MISSING FIELD VALUES ARE NULL to handle the missing values for the sub_account field?

    I made the change and received this error:
    SQL> select * from ext_INCOMING_ORDERS_table;
    select * from ext_INCOMING_ORDERS_table
    ERROR at line 1:
    ORA-29913: error in executing ODCIEXTTABLEOPEN callout
    ORA-29400: data cartridge error
    KUP-00554: error encountered while parsing access parameters
    KUP-01005: syntax error: found "no": expecting one of: "comma, date_format,
    defaultif, enclosed, ltrim, lrtrim, ldrtrim, notrim, nullif, optionally, ),
    rtrim, terminated"
    KUP-01007: at line 7 column 26
    CREATE OR REPLACE DIRECTORY INCOMING_ORDERS_log_dir
    AS 'c:\starpubs\starpubs\dataformats\logs\INCOMING_ORDERS\log';
    CREATE OR REPLACE DIRECTORY INCOMING_ORDERS_bad_dir
    AS 'c:\starpubs\starpubs\dataformats\logs\INCOMING_ORDERS\bad';
    create table ext_INCOMING_ORDERS_table (
    Account varchar(5),
    Sub_Account varchar(1),
    Override_Code varchar(1),
    Nomenclature varchar(28),
    chg_nbr varchar(3),
    quantity integer,
    U_I varchar(5),
    zipcode varchar(5),
    type_reject varchar(2)
    organization external
    type oracle_loader
    default directory user_dir
    access parameters
    records delimited by newline
    badfile INCOMING_ORDERS_bad_dir:'INCOMING_ORDERS%a_%p.bad'
    logfile INCOMING_ORDERS_log_dir:'INCOMING_ORDERS%a_%p.log'
    fields
    Account(1:5) char(5),
    Sub_Account(7:7) char(1) NO PRESERVE BLANKS,
    Override_Code(10:10) char(1),
    Nomenclature(11:38) char(28),
    chg_nbr(40:42) char(3),
    quantity(44:48) integer external,
    U_I(50:54) char(5),
    zipcode(56:60) char(5),
    type_reject(61:62) char(2)
    location('PTCLICK.MANUAL.NOMEN.TXT','PTCLICK.ORDERS.TXT', 'EUR_RES.TXT', 'MQ.TXT', 'BPRO.TXT')
    reject limit unlimited;
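    For reference, a sketch of where MISSING FIELD VALUES ARE NULL normally sits: it is part of the FIELDS clause of the access parameters, placed before the parenthesised field list, and the ORGANIZATION EXTERNAL and ACCESS PARAMETERS sections need their own parentheses. This is untested against the poster's files (the table is renamed here so it does not clash with the original), and NO PRESERVE BLANKS is not accepted at field level, which appears to be what triggers the KUP-01005 above.

    -- Sketch only: corrected structure with MISSING FIELD VALUES ARE NULL in the FIELDS clause.
    create table ext_incoming_orders_fixed (
    Account varchar(5),
    Sub_Account varchar(1),
    Override_Code varchar(1),
    Nomenclature varchar(28),
    chg_nbr varchar(3),
    quantity integer,
    U_I varchar(5),
    zipcode varchar(5),
    type_reject varchar(2)
    )
    organization external (
    type oracle_loader
    default directory user_dir
    access parameters (
    records delimited by newline
    badfile INCOMING_ORDERS_bad_dir:'INCOMING_ORDERS%a_%p.bad'
    logfile INCOMING_ORDERS_log_dir:'INCOMING_ORDERS%a_%p.log'
    fields missing field values are null
    (
    Account (1:5) char(5),
    Sub_Account (7:7) char(1),
    Override_Code (10:10) char(1),
    Nomenclature (11:38) char(28),
    chg_nbr (40:42) char(3),
    quantity (44:48) integer external,
    U_I (50:54) char(5),
    zipcode (56:60) char(5),
    type_reject (61:62) char(2)
    )
    )
    location ('PTCLICK.MANUAL.NOMEN.TXT', 'PTCLICK.ORDERS.TXT', 'EUR_RES.TXT', 'MQ.TXT', 'BPRO.TXT')
    )
    reject limit unlimited;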

  • Value Mapping Replication for Mass Data - Performance Issues

    Hi All,
    We are looking into Value Mapping Replication for Mass Data. We have done this for a small number of fields.
    Now we might have to keep 15,000 records in the cache for the value mapping. I am not sure how this would affect the Java cache and the Java engine as a whole.
    There might be a situation where we will have to leave the 15K records in the cache table on the Java engine...
    Are there any parameters we can look into to see how this affects performance?
    Any links/guidance in the right direction would help.
    reg

    Naveen,
    Check Jin's reply in this thread (they have done it with the API and without the API using graphical mapping, but there are still some issues):
    Value mapping performance using LookUp API
    ---Satish

  • Delete records whose Data values are NULL before loading data to the BPC model

    Hi Everyone,
    I am loading data from a flat file into a BPC model (version 10.0).
    The source data (flat file) looks like this:
    RP_Employee RPT_Currency Data
    Test                USD                   8
    Test1              USD
    Test2              USD                    6
    My requirement is to delete the records whose Data value is NULL before loading the data into the BPC model.
    So, please let me know how I can meet this requirement.
    I think this is possible using a start routine BADi. If I am correct, please let me know the process, such as creating the class and implementing the BADi.
    Thanks in advance!!

    Hi Nilanjan,
    Please see my source data below:
    Account      Client       Employee     Time                Data
    123              XYZ            Vishu            2014.01               300
    456                                                       2014.01
    789              ABC         Alexander      2014.02               200
    If you look at the second record: if the Data value is NULL, then the Employee or another dimension is also NULL.
    So I want to delete the second record.
    If a start routine is the way to do it, please share the code and the steps.
    Thanks in advance!!
    Regards,
    Viswanath
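    In BPC itself this filtering would normally be written in a start routine BADi in ABAP, which is not reproduced here; purely as an illustration of the filter logic, a SQL sketch against a hypothetical BPC_STAGING table (all names made up):

    -- Drop rows whose key figure is empty so they never reach the model.
    DELETE FROM bpc_staging
    WHERE  data_value IS NULL;

    -- Or, equivalently, load only the rows that carry a value.
    SELECT account, client, employee, time_id, data_value
    FROM   bpc_staging
    WHERE  data_value IS NOT NULL;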

  • ALTER DATABASE MOUNT failing: Intel SVR4 UNIX Error: 79: Value too large for defined data type

    Hi there,
    I am having a weird issue with my Oracle Enterprise database, which had been working perfectly since 2009. After some trouble with my network switch (the switch was replaced), the whole network came back and all subnet devices are functioning perfectly.
    This is an NFS setup for Oracle DB backups, and Oracle is not starting in mount/alter mode.
    Here are the details of my server:
    - SunOS 5.10 Generic_141445-09 i86pc i386 i86pc
    - Oracle Database 10g Enterprise Edition Release 10.2.0.2.0
    - 38TB disk space (plenty free)
    - 4GB RAM
    And when I attempt to start the DB, here are the logs:
    Starting up ORACLE RDBMS Version: 10.2.0.2.0.
    System parameters with non-default values:
      processes                = 150
      shared_pool_size         = 209715200
      control_files            = /opt/oracle/oradata/CATL/control01.ctl, /opt/oracle/oradata/CATL/control02.ctl, /opt/oracle/oradata/CATL/control03.ctl
      db_cache_size            = 104857600
      compatible               = 10.2.0
      log_archive_dest         = /opt/oracle/oradata/CATL/archive
      log_buffer               = 2867200
      db_files                 = 80
      db_file_multiblock_read_count= 32
      undo_management          = AUTO
      global_names             = TRUE
      instance_name            = CATL
      parallel_max_servers     = 5
      background_dump_dest     = /opt/oracle/admin/CATL/bdump
      user_dump_dest           = /opt/oracle/admin/CATL/udump
      max_dump_file_size       = 10240
      core_dump_dest           = /opt/oracle/admin/CATL/cdump
      db_name                  = CATL
      open_cursors             = 300
    PMON started with pid=2, OS id=10751
    PSP0 started with pid=3, OS id=10753
    MMAN started with pid=4, OS id=10755
    DBW0 started with pid=5, OS id=10757
    LGWR started with pid=6, OS id=10759
    CKPT started with pid=7, OS id=10761
    SMON started with pid=8, OS id=10763
    RECO started with pid=9, OS id=10765
    MMON started with pid=10, OS id=10767
    MMNL started with pid=11, OS id=10769
    Thu Nov 28 05:49:02 2013
    ALTER DATABASE   MOUNT
    Thu Nov 28 05:49:02 2013
    ORA-00202: control file: '/opt/oracle/oradata/CATL/control01.ctl'
    ORA-27037: unable to obtain file status
    Intel SVR4 UNIX Error: 79: Value too large for defined data type
    Additional information: 45
    Trying to start the DB without mounting, it starts without issues:
    SQL> startup nomount
    ORACLE instance started.
    Total System Global Area  343932928 bytes
    Fixed Size                  1280132 bytes
    Variable Size             234882940 bytes
    Database Buffers          104857600 bytes
    Redo Buffers                2912256 bytes
    SQL>
    But when I try to mount or alter the DB:
    SQL> alter database mount;
    alter database mount
    ERROR at line 1:
    ORA-00205: error in identifying control file, check alert log for more info
    SQL>
    From the logs again:
    alter database mount
    Thu Nov 28 06:00:20 2013
    ORA-00202: control file: '/opt/oracle/oradata/CATL/control01.ctl'
    ORA-27037: unable to obtain file status
    Intel SVR4 UNIX Error: 79: Value too large for defined data type
    Additional information: 45
    Thu Nov 28 06:00:20 2013
    ORA-205 signalled during: alter database mount
    We have already checked everywhere in the system and engaged Oracle Support as well, without success. The control files are in place and, checked with strings, they look correct.
    Can somebody give me a clue, please?
    Maybe somebody has had a similar issue here....
    Thanks in advance.

    I did the touch to update the date, but no joy either....
    Here are further logs, which may give a clue:
    Wed Nov 20 05:58:27 2013
    Errors in file /opt/oracle/admin/CATL/bdump/catl_j000_7304.trc:
    ORA-12012: error on auto execute of job 5324
    ORA-27468: "SYS.PURGE_LOG" is locked by another process
    Sun Nov 24 20:13:40 2013
    Starting ORACLE instance (normal)
    control_files = /opt/oracle/oradata/CATL/control01.ctl, /opt/oracle/oradata/CATL/control02.ctl, /opt/oracle/oradata/CATL/control03.ctl
    Sun Nov 24 20:15:42 2013
    alter database mount
    Sun Nov 24 20:15:42 2013
    ORA-00202: control file: '/opt/oracle/oradata/CATL/control01.ctl'
    ORA-27037: unable to obtain file status
    Intel SVR4 UNIX Error: 79: Value too large for defined data type
    Additional information: 45
    Sun Nov 24 20:15:42 2013
    ORA-205 signalled during: alter database mount

  • Stock report with value and quantity for a given date, not month-wise

    Hi gems,
    Can anybody point me to the standard report for stock value and quantity for a given date (not month-wise) at storage location level?

    Hi
    Check the report S_P00_07000139 with the option "inventory and raw material report - detail" and the same from/to selection date. The list will give opening and closing balances with goods movements and their values.
    Thanks

  • [SOLVED] Value too large for defined data type in Geany over Samba

    Some months ago Geany started to output an error with every attempt to open a file mounted via smbfs/cifs.
    The error was:
    Value too large for defined data type
    The error is now solved thanks to a French user, Pierre, on Ubuntu's Launchpad:
    https://bugs.launchpad.net/ubuntu/+bug/ … comments/5
    The solution is to add these options to your smbfs/cifs mount options (in /etc/fstab, for example):
    ,nounix,noserverino
    It works on an up-to-date Arch Linux (2009-12-02).
    I've written it up on the ArchWiki too: http://wiki.archlinux.org/index.php/Sam … leshooting

    An update on the original bug. This is the direct link to launchpad bug 455122:
    https://bugs.launchpad.net/ubuntu/+sour … bug/455122

  • 'Value too large for defined data type' error while running flexanlg

    While trying to run flexanlg to analyze my access log file, I received the following error:
    Could not open specified log file 'access': Value too large for defined data type
    The command I was running is:
    ${iPLANET_HOME}/extras/flexanlg/flexanlg -F -x -n "Web Server" -i ${TMP_WEB_FILE} -o ${OUT_WEB_FILE} -c hnrfeuok -t s5m5h5 -l h30c+5 -p ctl
    which should generate an HTML report of the web statistics.
    The file has approximately 7 million entries and is 2.3 GB in size.
    Ideas?

    I've concatenated several files together from my web servers, as I wanted a single report; several reports based on individual web servers are of no use.
    I'm running iWS 6.1 SP6 on Solaris 10, on a zoned T2000.
    SunOS 10 Generic_118833-23 sun4v sparc SUNW,Sun-Fire-T200
    Cheers
    Chris

  • Anyone worked on Value Mapping Replication for Mass Data

    Hi all,
    Has anyone worked on the Value Mapping Replication for Mass Data functionality?
    What is this?
    Page 139 of the document below talks about registering:
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/627d1cbc-0601-0010-aea2-c275521673f2
    Can anyone explain it or guide me in that direction?
    reg,

    Naveen,
    Have a look at the weblog /people/sreekanth.babu2/blog/2005/02/23/value-mapping-replication
    Also refer to the discussion below:
    Re: How to edit roles for specific tasks in IR & ID?
    Best regards,
    raj.

  • Mkisofs: Value too large for defined data type

    Hi:
    Has anyone encountered this problem when using the mkisofs command?
    <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
    Warning: creating filesystem that does not conform to ISO-9660.
    mkisofs 2.01 (sparc-sun-solaris2.10)
    Scanning iso
    Scanning iso/rac_stage1
    mkisofs: Value too large for defined data type. File iso/rac_stage3/Server.tar.gz is too large - ignoring
    Using RAC_S000 for /rac_stage3 (rac_stage2)
    >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
    Thanks!


  • OPMN Failed to start: Value too large for defined data type

    Hello,
    I just restarted OPMN and it failed to start, with the following errors in opmn.log:
    OPMN worker process exited with status 4. Restarting
    /opt/oracle/product/IAS10g/opmn/logs/OC4J~home~default_island~1: Value too large for defined data type
    Does anyone have ideas about the cause of this error? The server worked normally for more than 6 months with periodic restarts...

    Hi,
    You could get error messages like that if you try to access a file larger than 2GB on a 32-bit OS. Do you have HUGE log files?
    Regards,
    Mathias

  • 2d Flash Line Graph Issues

    Hi,
    I am editing the XML for the 2D Flash line graph and am having difficulty with some customisations.
    Could anyone assist with the following:
    1) Changing the default line colours. I have 4 series. Changing the following does not change the line colours:
    <line_chart left_space="5" right_space="5">
    <block_names enabled="no" />
              <dots radius="1"/>
              <lines size="1" auto_color="no" color="blue"></lines>
              <lines size="1" auto_color="no" color="red"></lines>
              <lines size="1" auto_color="no" color="green"></lines>
              <lines size="1" auto_color="no" color="pink"></lines>
    </line_chart>
    2) Is there any way to add commas (thousands separators) to the Y axis labels?
    Many thanks for any assistance with these
    Steve

    Hi Ben,
    Yes, I was looking at that, but I just discovered that I can do the following:
    <values show="no" thousand_separator="," />
    and it seems to do it.
    I am now having real issues with the X and Y axis formatting. I am including font nodes inside the y_axis / x_axis nodes (as per the AnyChart 3 docs), but the formatting properties are not changing the axis labelling:
    <!--
    <y_axis name="Water Injected (&P320_UNITS.)" smart="yes" direction="horizontal" rotation="-90" position="left_center" >
    <font size="20" type="Verdana" color="" bold="" italic="" underline="" align="left"/>
    </y_axis>
    <x_axis name="" smart="no" direction="horizontal" rotation="-90" position="top">
    <font size="20" type="Verdana" color="" bold="" italic="" underline="" align="left"/>
    </x_axis>
    -->
    Do you have any experience with doing this?
    Thanks again.

  • Line Graph Question: Compare different values through time ...

    Hi, I hope someone can help me. I've been looking at examples, but I can't get this to work.
    Here is a link to what the data looks like: http://primolandia.com/line_problem.jpg
    As you can see, I have a table with the columns: Team, Date, CriteriaNum1, CriteriaNum2, CriteriaNum3.
    I want one graph per criterion, comparing how each team did on that criterion through time.
    So I want the X axis to show the Date, with one line per team showing the value at different times.
    Any ideas?
    Thank you!!!
    Claudio

    I wish to withdraw this question ... I finally figured out my issue...
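    For anyone with a similar requirement, a sketch of one common approach (the TEAM_RESULTS table and column names below are hypothetical stand-ins for the structure described above): reshape the wide table into one row per team, date and criterion, so a charting tool can draw one line per team on each criterion's graph.

    -- One output row per team/date/criterion; chart each criterion separately with Team as the series.
    SELECT team, result_date, 'CriteriaNum1' AS criteria, criterianum1 AS crit_value FROM team_results
    UNION ALL
    SELECT team, result_date, 'CriteriaNum2', criterianum2 FROM team_results
    UNION ALL
    SELECT team, result_date, 'CriteriaNum3', criterianum3 FROM team_results
    ORDER  BY criteria, team, result_date;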
