FLTP_CHAR_CONVERSION_FROM_SI "Parameters" issue

Hi,
I am trying to use the function module FLTP_CHAR_CONVERSION_FROM_SI to convert a float to a character field.
The parameters I am using are:
CHAR_UNIT = KM
DECIMALS = 15
EXPONENT = 0
FLTP_VALUE_SI = 1.222222222222222E+06
INDICATOR_VALUE = X
When I execute it, it gives me the error message "Please use a number field for the input value". Can anyone tell me what I am doing wrong? I really appreciate your time, and full marks will be rewarded.
Thanks,
Mili

Hi,
Your input variable has to have a numeric type; the error means a character-like field was passed in. Check this small example, which works for me:
  DATA: l_string(20) TYPE c,
        l_float      TYPE f VALUE '1.222222222222222E+06'.

  CALL FUNCTION 'FLTP_CHAR_CONVERSION_FROM_SI'
    EXPORTING
      char_unit       = 'KM'
      fltp_value_si   = l_float
      indicator_value = 'X'
    IMPORTING
      char_value      = l_string
    EXCEPTIONS
      no_unit_given   = 1
      unit_not_found  = 2
      OTHERS          = 3.
  " Check sy-subrc before using the result.
  IF sy-subrc = 0.
    WRITE: / l_string.
  ENDIF.

Similar Messages

  • Dynamic Regions and parameters - issue

    JDeveloper 11.1.1.6
    I have a main page that contains a dynamic region, and I want to pass parameters to the region.
    Note that I have successfully created parameters for a NORMAL region inside a "parent" form. With a normal "child" region in a "parent":
    1. Open the "child" taskflow, click on the whitespace, go to Overview, then Parameters, and define a parameter name and value, e.g. InputParam and #{pageFlowScope.InputParam}.
    2. Open the "parent" form, click on the region, go to Bindings, and edit the region. You see the parameter you specified, InputParam, and you can enter a value. This value will come from the parent.
    3. Set the region's refresh property to "ifNeeded".
    However, with a dynamic region containing more than one "child" taskflow, I am getting an error. I do everything the same as above.
    When I have the taskflow selected on the Bindings tab of the parent form and click on the parameter node in the Structure window, the ID text field is outlined in orange, and a message displays that the reference to "InputParam" is not found.
    How are the name or id values specified for a dynamic region? How does one map a parameter from the parent to the "child"? Do you need to put this into a bean?
    Thanks,
    Stuart

    1) Parent taskflow A contains child taskflow B.
    2) Child B expects a parameter called 'inputParameterChild'.
    3) This is defined inside the child B taskflow with type 'java.lang.String' and value '#{pageFlowScope.inputParam}'.
    4) When you click the child B taskflow inside parent A, you will see the parameter expected by child B, 'inputParameterChild'.
    5) Now you can pass the value as '#{pageFlowScope.inputParam}'; here inputParam must be set in a bean or in the parent taskflow, as explained in steps 2 and 3. A minimal bean sketch is shown below.
    Are you still facing the issue?
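
    For illustration, a minimal sketch of a parent-side managed bean that puts inputParam into pageFlowScope before the region is refreshed. The bean, method, and value are assumptions, not from this thread; AdfFacesContext.getPageFlowScope() is the standard ADF Faces accessor:

      import java.util.Map;
      import oracle.adf.view.rich.context.AdfFacesContext;

      public class ParentRegionBean {

          // Hypothetical handler in the parent taskflow: stores the value that
          // the child taskflow's input parameter reads via #{pageFlowScope.inputParam}.
          public void prepareChildParameter() {
              Map<String, Object> scope =
                      AdfFacesContext.getCurrentInstance().getPageFlowScope();
              scope.put("inputParam", "someValue");
          }
      }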

  • Hidden parameters issue.

    Hi
    Our reporting developers/management started to complain about their reports running slow after an upgrade to 11.1.0.7. Not only is the upgrade the only change, but they are also moving their reports from a single standalone instance to a RAC instance, to save costs and get rid of the standalone database.
    The users started complaining about RAC and 11g. When we investigated further, we found that the user who runs the reports has a logon trigger defined like this:

      if user = 'REP_USER' then
        execute immediate 'alter session set "_b_tree_bitmap_plans" = true';
        execute immediate 'alter session set "_fast_full_scan_enabled" = true';
        execute immediate 'alter session set "_optimizer_cost_based_transformation" = linear';
        execute immediate 'alter session set "_sort_elimination_cost_ratio" = 0';
        execute immediate 'alter session set optimizer_features_enable = "10.2.0.4"';
      end if;

    The default value for "_b_tree_bitmap_plans" is false, and "_fast_full_scan_enabled" is false too.
    But what made the most difference is "_fast_full_scan_enabled". I set it back to false after logging in as 'REP_USER' and the reports started running better. The other parameters that they set, in my opinion, don't matter (leaving them at their default values is probably best). For some reason, they got this from a DBA who insisted that these should be set in 10g and that it would improve performance.
    I saw a few Metalink notes which explicitly mention "_fast_full_scan_enabled", and they also say that it should be set to false, especially in a RAC instance.
    My question is: why is it important to set this to false especially in a RAC instance (even though I agree that these underscore parameters should not be set at all unless we know exactly what we are doing)?
    When I was playing around with this parameter at the session level, I noticed that it does not make much difference to the performance of the report in a non-RAC instance, but makes a huge difference in a RAC instance.
    Also, can anyone point me to sources where I can read about these init.ora parameters? (I don't want just a listing of init.ora parameters with an underscore in them; I have seen some web sites that already have those.)
    Thank you
    MSK

    "_b_tree_bitmap_plans" = true
    I think this will tend to choose a bitmap plan over a b-tree plan, e.g. bitmap join operations.
    "_fast_full_scan_enabled" = true
    This enables index fast full scans, but Oracle decides based on cost, and only if it finds every column in your query in that index.
    "_optimizer_cost_based_transformation"
    Costing for some complex transformations like view merging. I think this is on by default from 10g onward.
    alter session set optimizer_features_enable = "10.2.0.4"
    Calculates costs as 10.2.0.4 would.
    But the main question is: what is the impact of removing all of this and running your queries? If a query runs slow, show us the plan and tkprof output. These may have been set to avoid some bugs, but that needs to be identified.
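
    For what it's worth, a minimal JDBC sketch of the session-level experiment described above. The connect string, credentials, and report query are placeholders; "_fast_full_scan_enabled" is a hidden parameter, so change it only for testing:

      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.ResultSet;
      import java.sql.Statement;

      public class SessionParamTest {
          public static void main(String[] args) throws Exception {
              // Placeholder connect string and credentials.
              Connection conn = DriverManager.getConnection(
                      "jdbc:oracle:thin:@dbhost:1521:orcl", "rep_user", "secret");
              try {
                  Statement stmt = conn.createStatement();
                  // Undo the logon trigger's setting for this session only.
                  stmt.execute("alter session set \"_fast_full_scan_enabled\" = false");
                  // Time a placeholder report query with the changed setting.
                  long start = System.nanoTime();
                  ResultSet rs = stmt.executeQuery("select count(*) from report_table");
                  while (rs.next()) { /* consume rows */ }
                  rs.close();
                  stmt.close();
                  System.out.println("Elapsed ms: " + (System.nanoTime() - start) / 1000000L);
              } finally {
                  conn.close();
              }
          }
      }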

  • Parameters issue in Discoverer while writing a view... it's urgent, please

    Hi everybody,
    I have written a view with a WHERE condition that says something like:
    where abc IN (select xyz from table2)
    Now, in Desktop, I want to pass a parameter into the subquery, as in:
    select xyz from table2 where column1 = 'bla bla bla'
    Please tell me how this is possible.
    Thank you
    kumar

    Kumar,
    1. Are you trying to show a LOV at run time? If so, you can create a LOV directly in the EUL via Disco Admin, which will present a list of the items a user can choose from at run time (and you don't have to worry about a sub-query).
    2. If it's an actual sub-query you need, then first of all, Disco v4 (maybe it was v9, but that was an ugly child anyway) is the last version that supported 'pseudo' sub-queries.
    3. It's been written up in the forum many times how to pass run-time parameters to a folder in the EUL. There's one method that's popular for actually passing the parm to the folder.
    4. Another method is to create a worksheet that only calls a function, to kick off a PL/SQL routine that builds a table using your parameter, and then have the main worksheet query that new table.
    So, if it's a LOV you want - that's a whole lot easier.
    Russ

  • SQL Server database - command-based Crystal Reports parameters issue

    Hi,
    We are migrating from Oracle to a SQL Server database and trying to point our Crystal Reports at the new database.
    In command-based reports, the parameters are defined in the WHERE clause like this:
       ('All' in {?Country} or loc.country in {?Country})
    On modifying the report script to the ANSI standard, loc.country in {?Country} is recognized by Crystal, but 'All' in {?Country} does not work.
    I am pretty sure it needs to be defined in a different way, but I am not able to figure out how.
    Any suggestions?
    Thanks,
    Nithya

    Hi Nithya,
    What is the error message you receive?
    -Abhilash

  • DB Adapter Polling Parameters Issues

    Hi All,
    We have a DB Adapter which polls a table every 60 seconds. We set database rows per transaction to 1 in the wizard.
    But when 10 records are inserted into the table, the DB adapter picks up all the rows at once. This is not the expected behaviour.
    I want it to pick up one row from the table per 60-second polling interval.
    Is there any workaround, or a property that needs to be set?
    Regards,
    Sudhakar.M

    Hi All,
    The issue was resolved by changing the <property name="NumberOfThreads" value="1"/> setting to a value of 10.
    Thanks

  • Smart Playlist Creation Question - Rule Parameters Issue

    I want to have newly imported songs go automatically into a playlist (Recently Added), from which I can move them into a playlist of their own. I then want to be able to CLEAR the Recently Added playlist so that I can then import a SECOND group of songs which will automatically go into the Recently Added playlist, where they will be the ONLY songs there because the first group of songs will have been CLEARED.
    I inadvertently deleted my Recently Added playlist, so that is not available to me - I have to, in effect, create my own Recently Added playlist.
    The problem I am having has to do with the Smart Playlist Rule Parameters needed to accomplish what I am trying to do. I have been able to set up a smart playlist that receives newly imported songs automatically. However, I seem to be unable to CLEAR the songs from the smart playlist to make way for a SECOND group of newly imported songs.
    I thought I could get the job done by simply deleting the entire Smart Playlist that contained the first group of songs, and then creating a new Smart Playlist to receive the second group. You can do that - but the problem is that the minimum time you are allowed to assign is one day. This means that when the second group of songs goes into the second smart playlist, it will also "grab up" and include the FIRST group of songs, assuming that the groups of songs are all being imported on the same day. I don't want to limit myself to importing only one group of songs per 24-hour period.
    (When I import groups of songs, I import them in very, very large groups, which is why I want to be able to deal with them in individual groups, rather than having to sort through the larger, general Library of tunes in order to put them into their own playlists.)
    It seems to me the answer lies in being able to set up a Rules Parameter that shortens the time involved from one day to something like 15 minutes or so. I don't know how to do that, even assuming that it can be done.
    Or, perhaps, someone might have another suggestion that would help me accomplish my goal here.
    Any and all thoughts will be much appreciated!

    ed2345 wrote:
    You can use playlist membership as a criterion in a Smart Playlist.
    So if the list you are moving them to is "SecondPlaylist" (or whatever you wish to call it) then add a rule to your custom-tailored Recently Added that states "Playlist" "is not" "SecondPlaylist."
    Not sure I am getting this. You understand that I am IMPORTING new files into iTunes - I am not moving already imported files into a playlist.
    I can create a smart playlist with a Rule of: Playlist/is not/recently added (where "recently added" is the playlist I previously used to set aside the last group of songs imported into the music library). But when I click on OK to set it up, then I have to name it. And when I give it a name ("current load," for example) and confirm the name, suddenly, "current load" is filled with the last group of imported files. Don't ask me why - but that's what happened.
    Any other thoughts? Anyone?

  • RFC to SOAP parameters issue - Odd!

    All,
    We have an RFC to SOAP scenario. There is a table parameter with 4 fields in it - OPERATION, ID, TYPE, VALUE.
    In the RFC we pass the following in the table:
    OPERATION - "CHECK"
    ID - 001
    TYPE - MAT
    VALUE - 234567890
    Somehow, the incoming XML doesn't have TYPE and VALUE, but has them concatenated into ID. When we debug on the ABAP side, we see the internal table populated properly just before the call.
    But in XI we see the incoming XML as:
      <?xml version="1.0" encoding="UTF-8" ?>
      <rfc:Z_TEST xmlns:rfc="urn:sap-com:document:sap:rfc:functions">
        <EX_TAB />
        <IM_TAB>
          <item>
            <OPERATION>CHECK</OPERATION>
            <ID>001MAT2345</ID>
            <TYPE />
            <VALUE />
          </item>
        </IM_TAB>
      </rfc:Z_TEST>

    Hi rk,
    Could it be that someone changed the coding of the RFC after you imported the RFC into XI?
    If so, reimport it.
    Regards, Mario

  • iMac and MacBook Pro reset parameters issue

    Looking for advice please.
    Both my iMac and MacBook Pro have not responded upon startup, and I have had to reset the parameters on several occasions.
    Can anyone advise why this may have happened? Both computers are less than 4 months old.
    Thanks

    Sometimes files get corrupted or lose their permissions; this is a normal part of most operating systems. When you lose permissions to a file, it will not let you read or write to that file. This can cause problems with your system, because to do work on your system you need read and write access to some files.

  • OCCI call PL/SQL Procedure with 2 IN/OUT Parameters..BUS ERROR!!

    Hi~ All,
    I am a new user of OCCI. I have encountered a problem with OCCI when I call a procedure with 2 IN/OUT parameters: I get an error (core dump). Why?

    CREATE OR REPLACE PROCEDURE demo_proc (v1 in integer, v2 in out varchar2, v3 in out varchar2);

    ==== OCCI Code ========
    stmt = conn->createStatement ("BEGIN demo_proc(:v1, :v2, :v3); END;");
    cout << "Executing the block: " << stmt->getSQL() << endl;
    stmt->setInt (1, 10);
    stmt->setString (2, "Test1");
    stmt->setString (3, "First");
    int updateCount = stmt->executeUpdate ();
    cout << "Update Count: " << updateCount << endl;
    cout << "Printing the INOUT & OUT parameters:" << endl;
    string c1 = stmt->getString (2);
    cout << c1 << endl;
    string c2 = stmt->getString (3);
    ==== RUN RESULT ====
    Bus error (core dump)

    But if I retrieve only one of the strings, from v2 or v3, I get the correct result! Why? Does someone know how to avoid this two IN/OUT parameters issue?

    Hi~ Amoghavarsha,
    Thanks for your response! I solved the problem by myself. I found the root cause; it seems very strange, in the initialization of the environment variable:

    === FAILED ===
    Environment *env;        // uninitialized pointer
    === SUCCESS ===
    Environment *env = NULL; // <--- Why??
    cout << "occidml - createEnvironment" << endl;
    env = Environment::createEnvironment (Environment::OBJECT);

    Eventually, I fixed this issue on Win2K and HP-UX 11.
    Best Regards,
  • java.sql.Blob method setBinaryStream?

    Hi,
    I've been trying to use the java.sql.Blob methods instead of the "Oracle Extensions" so that people w/o Oracle (using MySQL etc.) can still use my code.
    Problem: trying to get the OutputStream to write to the Blob. As of JDK1.4, java.sql.Blob has a method to do this, setBinaryStream. Unfortunately, calling this in Oracle JDBC (tried it in both thin and OCI version) throws an SQLException (Unsupported feature).
    Sample code:
    //Assume we already have a connection to the database, over either OCI or thin.
    //Assume a Statement stmt created from the connection.
    //Assume a table BlobTable with 2 fields: id (number) and data (blob).
    public void uploadBlob(byte[] theBytes){
        try{
            stmt.executeUpdate("INSERT INTO BlobTable (id, data) VALUES (1, empty_blob())");
            ResultSet rs = stmt.executeQuery("SELECT data FROM BlobTable WHERE id=1 FOR UPDATE");
            if (rs.next()){
                java.sql.Blob blob = rs.getBlob(1);
                OutputStream out = blob.setBinaryStream(1L); // stream position is 1-based
                //Next line never printed - error thrown.
                System.out.println("Got Stream");
            }
        } catch(Exception e){ e.printStackTrace(); }
    }
    //End code
    Am I doing something wrong? Or is there simply no way to write to a Blob without using the extensions?
    None of the docs (examples, guides, etc) make any mention of this, although the JDBC dev guide does mention that the similar method in PreparedStmt only works over OCI.
    Thanks,
    Dan
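
    For anyone hitting the same wall, a minimal sketch of one portable approach: stream the bytes through PreparedStatement.setBinaryStream instead of Blob.setBinaryStream. Table and column names follow Dan's example; driver support for streaming varies by version, so treat this as an approach to test, not a guarantee:

      import java.io.ByteArrayInputStream;
      import java.sql.Connection;
      import java.sql.PreparedStatement;

      public class BlobUpload {
          // Writes theBytes into BlobTable.data without calling java.sql.Blob methods.
          public static void uploadBlob(Connection conn, byte[] theBytes) throws Exception {
              PreparedStatement ps = conn.prepareStatement(
                      "INSERT INTO BlobTable (id, data) VALUES (1, ?)");
              try {
                  ps.setBinaryStream(1, new ByteArrayInputStream(theBytes), theBytes.length);
                  ps.executeUpdate();
              } finally {
                  ps.close();
              }
          }
      }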

    Hi lancea,
    It's been a while since this thread was active, but I have a related question.
    Is there a comprehensive list of changes between JDBC 3 and JDBC 4 that make applications stop working, such as this setBinaryStream issue or the CallableStatement named-parameters issue that we stumbled upon? We would like to address these issues proactively and not find out about them in production, since we're upgrading from JDK 1.4 to JDK 6. Oracle has provided us with their changes regarding database versions, but has been less forthcoming with JDBC spec version changes.
    Thanks in advance,
    Thomas Auzinger

  • Oracle 11g upgrade on SUSE 10 SP2

    Hi Friends,
    I have an IBM x3650 M3 server with:
    OS: SUSE Linux 10 SP2
    Database: Oracle 10g (10.2.0.4.0)
    Application: SAP BI 7.0
    I have a requirement to upgrade the database to 11g.
    I tried to do the upgrade on 10 SP2 and ran into a lot of kernel parameter issues; the settings did not meet the requirements of 11g.
    Please suggest which OS I can do the upgrade on (SUSE 10 SP3 or 11).
    Thanks,
    Hariharan
    Edited by: 903462 on Dec 20, 2011 4:51 AM

    Hi,
    The following notes will help you:
    Master Note of Linux OS Requirements for Database Server (Doc ID 851598.1)
    Certification Information for Oracle Database on Linux x86-64 (Doc ID 1304727.1)
    Support of Linux and Oracle Products on Linux (Doc ID 266043.1)
    Thanks,
    Krishna

  • Report not hitting pre-seeded cache created using iBot by Admin

    Hi,
    We created an iBot to pre-seed the cache using the Administrator user, and it is creating the cache, but when a normal user runs this particular report for the first time, it does not hit the cache created by the iBot. All subsequent user requests hit the cache created by the normal user. I still don't understand why it is not hitting the iBot's cache.
    Did anyone come across this issue before?
    What are the reasons for not hitting the cache in the above scenario?
    Does a cache created by the administrator using an iBot work for other users? We actually run the whole query with no filters on it as admin, and normal users run the same report with some filters on it. As per the Oracle cache strategy it should work, but it's not.
    Any help appreciated.
    Thanks
    Jay.
    Edited by: Jay on May 16, 2012 7:23 PM
    Edited by: Jay on May 17, 2012 5:42 AM

    Hi Jay,
    Here are some inputs.
    1. The way the OBIEE server creates the SQL (both logical and physical) for a request can be a bit funky sometimes, so cache seeding only really works in fairly simple cases. Does your request have a pivot table in it by any chance? These are notorious for not caching properly. If you look at the log for the request you can see why, as the server adds a strange "aggregate by" to the request (why this can't be done at the presentation level, since the only change you are asking for is in the presentation of the content, is beyond me). Those "aggregate by"s tend to stop a request being a cache hit unless it is identical to the one that seeded the cache; any change in parameters, columns, etc. (even if a logical subset) will not get a cache hit.
    2. Please check the "Oracle BI Server Cache" cache-seeding option in the Destination tab of Delivers.
    3. Caching is one of many approaches to improving performance, but it's not a magic solution. You need to understand that you can't cache everything. In particular, you won't be able to cache reports that are driven by parameters and have facts that are too granular and exceed the number of rows each cache entry can have. If your fact is of a small size, then you can get around the parameters issue by caching a report without any filters. The BI Server should be able to derive subsequent queries as long as they meet the cache hit criteria. Have a look at the administration manual for all the rules a cache hit must meet.
    Hope it helps.
    Thanks,
    Satya

  • Cache Seed using iBOTs

    I have a requirement to use iBots for cache seeding.
    The dashboard is designed so that it has some global filters and then multiple pages with reports.
    Whenever a user logs in, the user selects the global filters and then goes to the desired page and report.
    Based on that, the user gets the details.
    Now, I have the requirement to seed the cache for those reports using iBots. Using iBots, I was able to get the report seeded in the cache. But for that report, I am not able to apply the global filter, and hence I am not able to seed the required cache results. I am clueless at this point on what to do. Can someone help me out with this?
    Thanks.

    Caching is one of many approaches to improving performance, but it's not a magic solution. You need to understand that you can't cache everything. In particular, you won't be able to cache reports that are driven by parameters and have facts that are too granular and exceed the number of rows each cache entry can have. If your fact is of a small size, then you can get around the parameters issue by caching a report without any filters. The BI Server should be able to derive subsequent queries as long as they meet the cache hit criteria. Have a look at the administration manual for all the rules a cache hit must meet.

  • Performance issue possibly due to wrong parameters??

    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64-bit.
    We have a program that runs every two weeks to process 3 million records in our Oracle 10g database. Processing normally takes about 6 hours. With no change to the program (which is a Java client program), the processing time has gotten longer and longer over the last few weeks. The processor on the database server was upgraded to Itanium, and during this upgrade the databases were striped to fix a read/write issue caused by poor configuration when the server hardware was upgraded (it wasn't using multiple channels for I/O, so everything ran super slow). Since the last processor upgrade, we've noticed many errors being generated on the server when our program runs (Java null reference errors); these errors never occurred before the upgrade. The upgrade was in July. In August we noticed the beginning of a degradation in performance - the process went from 6 hours to 10 hours. This month it is taking 20 hours. Next month I fear it will be 40 hours.
    The program launches multiple sessions that work at once doing updates against the same table. The last time it ran, it started at 6 AM and by 1 PM it was only 11% done. I looked at the session stats and saw the top 5 wait events:
    SQL*Net message from client:   totalwaits=11,997  timewaited=2,070,070  avgwait=172
    enq: TX - row lock contention: totalwaits=587     timewaited=65,614     avgwait=111  timeouts=180
    latch: cache buffers chains:   totalwaits=933     timewaited=1,815      avgwait=2
    db file sequential read:       totalwaits=1,426   timewaited=1,519      avgwait=1
    log file sync:                 totalwaits=1,422   timewaited=2,594      avgwait=2
    It looks like all of these values are way too high, and I'm wondering what parameters we could change on the database/server side that might improve performance in these areas.
    I read that increasing the INITRANS value to something greater than one would help with the row lock contention and timeouts.
    I also read that turning DB_CACHE_ADVICE off would help with the cache buffers chains issue.
    Are these viable solutions? Changing the program is not an option right now. Any help is greatly appreciated.

    Rakesh Jayappa wrote:
    Hi,
    Sorry, I am not getting your question, so I am guessing: you can reduce log file sync waits with
    COMMIT WRITE BATCH;
    COMMIT WRITE IMMEDIATE;
    or, the disks or I/O subsystems where the redo logs are placed may be too busy:
    - Reduce other I/O activity on the disks containing the redo logs, or use dedicated disks.
    - Move the redo logs to faster disks or a faster I/O subsystem.
    - Move the redo log files off RAID 5 devices. RAID 5 is not efficient for writes.
    - Alternate redo logs on different disks to minimize the effect of the archiver on the log writer.
    Kind Regards,
    Rakesh Jayappa

    My point is that even if you eliminate log file sync completely, you have only eliminated a tiny fraction of the total wait time -- it may look like low-hanging fruit, but it's a very small piece of fruit indeed.
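
    For illustration, a minimal JDBC sketch of the commit-batching idea mentioned above, assuming the client currently commits once per row (the table, column, and batch size are placeholders); as noted, though, log file sync is only a small fraction of the total wait time here:

      import java.sql.Connection;
      import java.sql.PreparedStatement;

      public class BatchedUpdater {
          // Commits once per batch instead of once per row, which cuts the
          // number of log file sync waits roughly by the batch size.
          public static void updateAll(Connection conn, int[] ids) throws Exception {
              conn.setAutoCommit(false);
              PreparedStatement ps = conn.prepareStatement(
                      "UPDATE big_table SET processed = 'Y' WHERE id = ?");
              try {
                  int inBatch = 0;
                  for (int i = 0; i < ids.length; i++) {
                      ps.setInt(1, ids[i]);
                      ps.addBatch();
                      if (++inBatch == 500) {   // flush and commit every 500 rows
                          ps.executeBatch();
                          conn.commit();
                          inBatch = 0;
                      }
                  }
                  ps.executeBatch();            // flush the remainder
                  conn.commit();
              } finally {
                  ps.close();
              }
          }
      }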

    Hi Guys, I have a target file that is as follows <?xml version="1.0" encoding="UTF-8"?> <ns:MT_Case1 xmlns:ns0="urn:Case:Case1"> <Field1></Field1> <Field2></Field2> <Field3></Field3>    < ns /MT_Case1> How can I remove the pre fix ns from the target