Inverse prediction with order 2 or 3 polynomials

Any idea how to compute, using LabVIEW, the inverse prediction of X from a given Y using a fit of order 2 or order 3 polynomials?
order 2          Y = a + b*X + c*X**2
order 3          Y = a + b*X + c*X**2 + d*X**3
The regression coefficients a, b, c & d are obtained using the Levenberg-Marquardt algorithm as implemented in the LabVIEW Nonlinear Lev-Mar Fit.vi.
The acceptable X range will be given manually by the user, to select between the different solutions.
Any example code would be very welcome.
Thanks in advance

Here's a quick draft of what I had in mind. Seems to work well.
Modify as needed.
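For anyone reading without LabVIEW, here is a rough text-language sketch of the same root-finding idea (assuming NumPy; the helper name and signature are mine, not the attached VI's): fold Y into the constant term, take the polynomial's roots, and keep only the real roots inside the user's acceptable X range.

import numpy as np

def inverse_predict(y, coeffs, x_min, x_max):
    # coeffs are (a, b, c) or (a, b, c, d) from the fit, lowest order
    # first. np.roots wants the highest-order coefficient first, so
    # reverse the ordering and fold -y into the constant term.
    p = list(reversed(coeffs))
    p[-1] -= y
    roots = np.roots(p)
    # Discard complex roots, then apply the user's acceptable X range.
    real = roots[np.isclose(roots.imag, 0.0)].real
    return [x for x in real if x_min <= x <= x_max]

# Y = 1 + 2*X + 3*X**2; for Y = 6 the roots are 1 and -5/3,
# and only X = 1 falls inside [0, 10].
print(inverse_predict(6.0, (1.0, 2.0, 3.0), 0.0, 10.0))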
Attachments:
PolynomialRoots.vi (42 KB)
PolynomialRoots.png (42 KB)

Similar Messages

  • Pool: What is the assignment order?

    Hi,
    I do not understand the assignment order for IP pools, MAC pools, WWN pools....
    Sometimes it seems logical:
    for example: MAC pool 00:25:B5:41:0A:00 to 00:25:B5:41:0A:3F, server 1_nic1 gets 00:25:B5:41:0A:3F, server 1_nic2 gets 00:25:B5:41:0A:2F...
    Sometimes it seems not logical:
    server 2_nic1 gets 00:25:B5:41:0A:3D, server 2_nic2 gets 00:25:B5:41:0A:0E
    I just noticed that on UCS 2.1 we have an Assignment Order option: Default or Sequential.
    What is the difference? What is the best practice?
    many thx in advance.
    Nicolas.

    Come now - everything has structure & order in the world!
    The default (and only behavior pre-UCSM 2.1) order for assigning resources used reverse lexicographic order.  Someone in engineering thought that would be an efficient way to allocate resources; however, in the real world it's much more valuable to use a sequential ordering without having to calculate what the order will be.
    There's no best practice for either; however, if you want to do any pre-zoning, using sequential you can pre-zone "blocks" of WWPNs ahead of time, making your service profile deployment easier.  With the default method, unless you calculated the ordering, people were having to pre-zone everything because they weren't sure which values were going to be allocated next.
    It's simply a choice of visually predictable ordering vs. calculated ordering.
    Regards,
    Robert
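
    Purely as a toy illustration (plain Python, nothing UCSM-specific; the pool values here are made up), the difference between the two orderings looks like this:

    pool = ['00:25:B5:41:0A:%02X' % i for i in range(4)]

    # Sequential: hand out from the front of the sorted block.
    sequential_first = sorted(pool)[0]
    print(sequential_first)                        # 00:25:B5:41:0A:00

    # Pre-2.1 default: reverse lexicographic, so the back of the block.
    default_first = sorted(pool, reverse=True)[0]
    print(default_first)                           # 00:25:B5:41:0A:03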

  • Frames on reports (changing the order of printing)

    Hi OTN,
    I have 2 frames: frame1 and frame2.
    I build a report in a way that frame1 appears first and then frame2.
    What I want is to reverse the order of the frames (frame2 appears and then frame1) without rebuilding my report.
    Thank u in advance

    Hi there,
    I don't think it is possible to change the order of frames as such, but probably by solving it logically it can be worked out. Please send me the problem in detail.

  • About toUpperCase() and toLowerCase()

    String x = (any String)
    Do you think x.toUpperCase().toLowerCase() equal to x??
    Do you think x.toLowerCase().toUpperCase() equal to x??
    Is there any method to ensure that after upper-to-lower conversion, then lower-to-upper conversion, the converted String is equal to the original String??
    Is there any method to ensure that after lower-to-upper conversion, then upper-to-lower conversion, the converted String is equal to the original String??

    Since both toUpperCase() and toLowerCase() are many-to-one mappings,
    neither has an inverse.
    The API documentation for String refers the reader to that of Character
    to explain what toUpperCase() and toLowerCase() do (in a given
    locale). Character refers you on to www.unicode.org
    Having a look at http://www.unicode.org/versions/Unicode4.0.0/ch05.pdf
    we read
    Reversibility
    It is important to note that no casing operations are reversible. For example:
    toUpperCase(toLowerCase("John Brown")) → "JOHN BROWN"
    toLowerCase(toUpperCase("John Brown")) → "john brown"
    There are even single words like vederLa in Italian or the name McGowan in English, which
    are neither upper-, lower-, nor titlecase. This format is sometimes called inner-caps, and it
    is often used in programming and in Web names. Once the string "McGowan" has been
    uppercased, lowercased, or titlecased, the original cannot be recovered by applying another
    uppercase, lowercase, or titlecase operation. There are also single characters that do not
    have reversible mappings, such as the Greek sigmas. (5.18 "Reversibility")
    That document goes on to explain how word processors might deal
    with case conversion in a way that allows the original string to be
    recovered. Perhaps something like they suggest (basically, save the
    original string and apply the various case transformations in a
    predictable order) is what you are looking for.
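
    The same many-to-one effect is easy to demonstrate outside Java; for instance in Python 3 (a quick sketch — the exact mappings depend on the Unicode version the runtime ships):

    # Case mappings are many-to-one, so round trips can lose the original.
    print('McGowan'.upper().lower())  # mcgowan  (inner-caps lost)
    print('ß'.upper())                # SS       (German sharp s)
    print('SS'.lower())               # ss       (not the original 'ß')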

  • Shuffle Doesn't Play All Songs?

    I've set up 2 playlists that I like to jog with.  I don't want a predictable order in which the songs play, so I select shuffle.  Oops, let me further explain: I run with an app called Runkeeper; it's quite popular.  When I go on a jog and record the jog with Runkeeper, it lets me select a playlist which will start when I start recording the jog.  Also, in Runkeeper, I can turn Shuffle on, which I always do.
    When I looked at the playlists recently I realized there are some songs that I've only heard maybe once, twice, or not at all.  I've been using the playlists since January and run three times a week.  The playlists are long enough so that I don't go through the entire list in one run.
    It seems to me that Shuffle is not doing a very good job at randomizing the song selection.  I'm hearing too many of the same songs and very little of others.  Is there anything I can do to improve the Shuffle selection of songs?  Not having a set order of songs really helps to keep the music fresh.

    jangell2 wrote:
    In reference to ckuan: I thought about it, but haven't tried it yet.  It would mean starting the list, then popping into Runkeeper and starting the jog.  I'm not sure how many runs it would take to verify it was working.

    Workaround tip: how about making a few shorter playlists that fit the time of your run and choosing a different one for each run?

  • Performance of a report in Web Analysis??

    Hi Guys,
    •How do you improve the performance of a report?
    •In a general store there are 10 products, of which only 8 are sold. I need a report for the 2 products which are not sold. How can you build that report? (Apart from Product, the normal dimensions are there; we have a data warehouse for it all.)
    Thanks

    Hi, I agree with Gerd - there's not much tuning you can do in Web Analysis (as compared to FR).
    In looking at the query see if you can:
    - move single hierarchy members into the POV
    - have children and parent in the same row
    - arrange dimensions in inverse outline order (faster for essbase retrieves)
    - remove excessive formatting/suppression (or conditional formatting/suppression)
    - push report calculations back to the data source
    We have also found that using lots of dynamically calculated members also slows down reports, so try and limit the number of these.
    Maybe build your report up from the basic report to see which dimensions are causing the issue. Or alternatively start with your report and remove columns ;-)
    With regard to wishing to show only 2 items, you could add advanced conditional suppression to suppress rows with values.

  • Web Analysis Report performance

    Hi All,
    We are performing a drill-to-bottom-level action by double-clicking the account dimension in Web Analysis and SmartView. It returns 9000+ rows. In WA it takes 160 seconds, but SmartView takes only 20 seconds. We tried to increase the Java heap size for the WA application server and the Java plug-in client, but it doesn't help much. Any idea why there is such a big difference in performance? From the application log file in Essbase, we can see WA and SmartView are talking to Essbase in two different syntaxes.
    Thanks,
    Manoj

  • Time out error while running the FR report on Workspace

    Hi,
    We are working on Hyperion system 9.3.1 financial reporting with Essbase as a data source.
    We have built a report which has many computations, conditional formatting, and suppression conditions. The report has around 100 columns.
    Now when we try to open this report in Workspace, the report throws a "Server time out" error message. The error message is:
    "The requested item could not be loaded by the proxy. Timed out".
    Reports open fine with less data or if the Essbase cube doesn't have any data.
    Please advise on this issue. Do we need to change any setting at the Essbase level, the Workspace level, or anywhere else?
    Any help will be appreciated.
    Thanks & Regards,
    Mohit Jain

    I would start by saying that 100 columns seems quite a lot. You mention there are many computation columns; these should be pushed back onto the server to calculate.
    We have also experienced issues where reports were pulling back lots of columns that were dynamically calculated.
    To narrow down the actual problem you can try the following, then see if the report runs after each change; if not, continue with the next step:
    - remove all formatting from the spreadsheet;
    - remove all calculations from the spreadsheet;
    - see if the report can be built more efficiently, by moving items to the Grid POV, using "same as", or reordering dimensions in inverse outline order.
    If it still doesn't run, then try removing columns until it can run.
    Cheers, Iain

  • IPhoto library on hard drive listing by roll

    For some reason my iPhoto library is now organizing all of my photos by roll. I am not talking about how they are viewed from within iPhoto (I know that can be controlled under view/sort photos), but how they are saved on my hard drive. It may be associated with an upgrade to iPhoto 6, but I'm not sure of the timing.
    I used to be able to find a photo on the hard drive (for example to upload) by going to the library, choosing the folder with the year, then with the month, the day, and finally the actual photo. Now (since April) they are in folders labeled with the roll number only - it is almost impossible to figure out where anything is! Everything before then is still organized by date. Can't seem to find anything in preferences. Any suggestions?
    G4   Mac OS X (10.4.8)  

    Even for uploading to photo printing sites (other than Apple), don't you need to upload the files from outside iPhoto? They prompt me to find the files to upload -- don't I have to go to the files on the hard drive?

    In this case, it would be recommended to select all the pics you want in iPhoto and then use File>Export>File Export to put the photos on your desktop. Then in the other app just navigate to the desktop files, and delete them when you have finished.
    But as I said before, if you want your library structure to make sense to you, organizing the Film Rolls in iPhoto will do that. iPhoto 6 keeps those files in folders named after your Film Rolls, so the neater your film rolls, the neater the structure will be. If you liked having your pictures stored by date, then you can rename your film rolls based on dates. Or name them by client, project, etc. In Finder, they will list in alpha-numeric order, rather than the chronological order they list in inside of iPhoto. So maybe name each film roll "YYYY-MM-DD-projectname" so they will be listed in a predictable order.
    If you have a large library, this may be a bit of a project. The calendar button at the bottom left of the iPhoto window can help you find all of your photos by a certain year and month. Once you find the files, you can select all, then create new film roll from selection, rename the film roll (in Film Roll view), and repeat for the next month. The good news is that once you have well organized film rolls, you will not have to change them again. And that you have control over how your pictures are stored, as long as you go about this organization via the iPhoto application.
    Don't hesitate to post back if I can further clarify anything for you.

  • OS X freezes on use of apple+tab

    I have seen this problem now several times, I do something in one app and want to jump to the next hit apple+tab and the Computer freezes with the little application menu in the middle of the screen. The mouse still moves, it has the form of the hand as in the apple-tab menu but nothing else works. Is that a known problem?

    Nobody here would know if Apple is going to change this unless Apple has announced it. We're all just users here, except for a few Apple employees to keep things flowing and a few to supply links to existing Apple help docs.
    Did Apple ever say that the AirPlay Mirroring list was supposed to be in any particular order? If not, they may not think you are seeing a bug, just a difference between two code bases.
    I can see how it would be handy in your particular use case to have them listed in a predictable order. Most of us see only a handful of candidates, so it doesn't matter much.
    You should submit this as a feature request (or a bug report, if you will) through official channels.

  • Hyperion System 9.3.1 services hang on AIX box

    Dear All,
    We have installed & configured Hyperion System 9.3.1 on IBM AIX 5.3 ML-6, 6 dual core CPU & 24GB RAM. Repository database is Oracle 10gR2.
    We have scheduled approx 300 Interactive Reporting jobs in Workspace which have been associated with external trigger events. Once these jobs are executed, their output is imported into various folders in Workspace.
    We are getting the following errors in the <HYPERION_HOME>/log/BIPLUS/server_messages_IRJobService.log file when these jobs are executed:
    <message><![CDATA[Out of Memory. Advice: Close other applications or windows and try again.]]></message>
    And in <HYPERION_HOME>/BIPLUS/logs/0_bijobs_stderr.txt we get the error below:
    *** PROCESS OUT OF MEMORY, EXITING ***
    *** PROCESS OUT OF MEMORY, EXITING ***
    SIGNAL SIGABRT Generated at Wed May 21 16:19:29 2008
    by thread 1287
    +++PARALLEL TOOLS CONSORTIUM LIGHTWEIGHT COREFILE FORMAT version 1.0
    +++LCB 1.0 Wed May 21 16:19:29 2008 Generated by IBM AIX 5.3
    +++ID Node 0 Process 25490 Thread 7
    ***FAULT "SIGABRT - Abort"
    +++STACK
    abort : 0x000000e8
    # At location 0xd015f47c but procedure information unavailable.
    write_string__16IT_CDR_OutStreamFPCc : 0x000001c0
    marshal_request__29IT_GIOP_ClientInterceptorImplFR16IT_CDR_OutStreamPQ2_10IT_Binding13ClientRequestUlT3sPQ2_6IT_IOR16ObjectKeyProfile21IT_GIOP_ResponseFlags : 0
    After this, the Hyperion services hang. In Workspace, View Job Status shows that the jobs are still executing.
    Any help would be highly appreciated.
    Regards,
    Manmohan Sharma

  • Hyperion System 9.3.1 reports taking longer time for the very first time

    We are on Hyperion System 9.3.1. The Financial Reporting reports are taking a long time (like 2 to 3 minutes) the very first time for each login. Subsequent reports work faster.
    The behaviour is the same for the Production and Development environments.
    All the reporting services have been given enough JVM heap size.
    FYI, Reporting and Workspace run on the same server. Workspace/Reporting are clustered across two servers. The HFM app is running on a different server. HFM web is on a different server. Shared Services is also running on a different server.
    Any help would be greatly appreciated.
    Thanks.

    The reason they run quicker the subsequent times, is because the data has already been cached in the system.
    You could try the usual tricks to speed the report up:
    - move items into POV
    - have children and parent in the same row
    - arrange dimensions in inverse outline order
    - remove excessive formatting
    - push report calculations back to the data source
    We have found that using lots of dynamically calculated members also slows down reports, so try and limit the number of these.
    Hope this helps. If not maybe give us an idea of how the report is created to see if other changes could be made.

  • Socket with different ports

    hello everyone!
    i have a java.net.ServerSocket listening on port 5000. "normal" users can connect to it, but those who sit behind a school/office/etc LAN can't use port 5000, only port 80 (and sometimes 443), so they can't connect.
    is there a way to make the client connect to server:5000 using local port 80?
    tnx

    gimbal2 wrote:
    georgemc wrote: Hint: you can run JMS over HTTP.
    Very interesting idea. I wonder how that would work in a multiplayer game. The JMS host would then be a sort of dedicated game host, passing along messages to keep clients synchronized. I'd have two instant questions coming to mind:
    - what would be the overhead of sticking JMS in there instead of direct socket connections?
    - would messages arrive in a predictable order?
    I'm in half a mind to just give this a try in some multiplayer 2D game and see how it works out.

    On the overhead: "not a massive amount" would be my guess. JMS needn't be heavyweight. If you're hand-rolling your own protocol you'll be juggling sockets and threads anyway. But it is just a guess; if someone shows me results that demonstrate otherwise, I'd be happy to concede it was a wrong guess. On ordering: messages can arrive in a predictable order, yes. And I'd be interested to see what results you came up with.
    The main reason I suggested JMS over HTTP here is that the OP needs to get round firewalls, and HTTP has become such a ubiquitous protocol pretty much for that reason; combined with JMS, it saves him a huge amount of hassle dealing with raw sockets.
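
    One detail worth making explicit for the OP: firewalls filter the destination port, not the client's local port, so the fix has to be server-side — tunnel over HTTP as suggested, or simply have the server accept on port 80/443 as well. A minimal sketch of the latter idea (in Python rather than Java, with a placeholder handler):

    import socket
    import threading

    def serve(port):
        # Accept on one port; both listeners share the handling logic.
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(('', port))
        s.listen(5)
        while True:
            conn, _addr = s.accept()
            conn.sendall(b'hello\n')   # placeholder for the real protocol
            conn.close()

    # Offer the service on 80 as well as 5000 so firewalled clients get
    # through (binding to port 80 normally requires root privileges).
    for port in (5000, 80):
        threading.Thread(target=serve, args=(port,)).start()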

  • Device attaches on Solaris 10

    Hello,
    We have a device driver that's been working on Solaris 7 and higher for some time. We're seeing different behavior on Solaris 10 that is causing us problems. We are running on Sparcs.
    Our driver has one pseudo device and one or more physical devices.
    Normally when the driver loads, the attach function is called for the pseudo device and the physical devices (in some not necessarily predictable order).
    This loading would usually take place when the pseudo device was opened, but could be due to the driver initially being installed or sometimes when the system boots.
    With Solaris 10, after rebooting the system, sometimes the pseudo device access only causes the pseudo device to be attached, and not the physical devices.
    Our driver is actually accessed by applications solely via the pseudo device, rather than by individual devices, so this causes a big problem for us.
    Is there anything we can do to cause the behavior to revert back to what it was before? Anything which causes the physical devices to all be attached would be fine, such as some call that the pseudo device's attach could make, or something to put in our .conf file, etc.
    Thanks,
    Nathan

    Thank you for your response.

    Setting the "ddi-forceattach" property in the Pseudo driver .conf file will not help. Solaris does not "attach" Pseudo drivers which do not have ".conf" children (even though the Pseudo driver conf file has the "ddi-forceattach=1" property set). Opening the Pseudo device file will attach the Pseudo driver.

    I'm confused... We have a .conf file, as mentioned, but what makes it a "Pseudo driver .conf" rather than just a "driver .conf"?
    From what I understand of your requirement, the following should be sufficient:
    1. Set the property "ddi-forceattach=1" for all physical devices that are required by the Pseudo driver.
    2. The application opens the Pseudo device node.
    Let me know if you have any queries / issues.

    I do have further questions.
    Included below is a version of our .conf file modified to protect the
    names of the guilty.
    As you can see, there is part of it which defines a pseudo device,
    and then a set of properties that apply to all devices. Or that's the
    intention.
    In #1, you said to set the ddi-forceattach property for all "physical
    devices", but how do I do this, if it's not what I'm already doing? And what
    do you mean "required by Pseudo driver"?
    name="foobar" parent="pseudo" instance=1000 FOOBAR_PSEUDO=1;
    ddi-forceattach=1
    FOOBAR_SYM1=1
    FOOBAR_SYM2=2
    FOOBAR_SYM3=3;
    On a Solaris 9 system of mine, recently I believe I have seen multiple cases
    where I've booted, and a physical device has not gotten attached, but if I
    reboot, it will be attached the next time.
    Thanks,
    Nathan

  • DB_GET_BOTH_RANGE fails when there is only one record for a key

    Using the DB_GET_BOTH_RANGE flag doesn't seem to work when there is a
    single record for a key.
    Here's a sample Python program that illustrates the problem. If you
    don't know Python, then consider this to be pseudo code. :)
    from bsddb3 import db
    import os

    env = db.DBEnv()
    os.mkdir('t')
    env.open('t',
             db.DB_INIT_LOCK | db.DB_INIT_LOG | db.DB_INIT_MPOOL |
             db.DB_INIT_TXN | db.DB_RECOVER | db.DB_THREAD |
             db.DB_CREATE | db.DB_AUTO_COMMIT)
    data = db.DB(env)
    data.set_flags(db.DB_DUPSORT)
    data.open('data', dbtype=db.DB_HASH,
              flags=(db.DB_CREATE | db.DB_THREAD | db.DB_AUTO_COMMIT |
                     db.DB_MULTIVERSION))
    txn = env.txn_begin()
    #data.put('0', '6ob 0 rev 6', txn)
    data.put('0', '7ob 0 rev 7', txn)
    #data.put('0', '8ob 0 rev 8', txn)
    data.put('1', '9ob 1 rev 9', txn)
    txn.commit()
    cursor = data.cursor()
    print cursor.get('0', '7', flags=db.DB_GET_BOTH_RANGE)
    cursor.close()
    data.close()
    env.close()
    This prints None, indicating that the record whose key is '0' and
    whose data begins with '7' couldn't be found. If I uncomment either of
    the commented-out puts, so that there is more than one record for the
    key, then the get with DB_GET_BOTH_RANGE works.
    Is this expected behavior? Or a bug?

    You can use the DB_SET flag which will look for an exact key match. If you use it with the DB_MULTIPLE_KEY flag, it will return multiple keys after the point it gets a match. If you can post here how all your LIKE QUERIES look, I may be able to provide you with the suited combination of the flags you can use for them.

    I'm not doing like queries. I'm using BDB as a back end for an object database. In the object database, I keep multiple versions of object records tagged by timestamp. The most common query is to get the current (most recent) version for an object, but sometimes I want the version for a specific timestamp or the latest version before some timestamp.
    I'm leveraging duplicate keys to implement a nested mapping:
    {oid -> {timestamp -> data}}
    I'm using a hash access method for mapping object ids to a collection
    of time stamps and data. The mapping of timestamps to data is
    implemented as duplicate records for the oid key, where each record is
    an 8-byte inverse timestamp concatenated with the data. The inverse
    timestamps are constructed in such a way that they sort
    lexicographically in inverse chronological order. So there are
    basically 3 kinds of query:
    A. Get the most recent data for an object id.
    B. Get the data for an object id and timestamp.
    C. Get the most recent data for an object id whose timestamp is before
    a given timestamp.
    For query A, I can use DB->get.
    For query B, I want to do a prefix match on the values. This can be
    thought of as a like query: "like <inverse-time-stamp>%", but it can
    also be modelled as a range search: "get the smallest record that is >=
    a given inverse time stamp". Of course, after such a range search,
    I'd have to check whether the inverse time stamp matches.
    For query C, I really want to do a range search on the inverse time
    stamp prefixes. This cannot be modelled as a like query.
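    For the record, here is one way such inverse timestamps can be constructed (a sketch under my own assumptions — integer timestamps packed as 8-byte big-endian values — not necessarily the exact encoding used above):

    import struct

    MAX_U64 = 2**64 - 1

    def inverse_key(ts):
        # Complement the timestamp so later times pack to
        # lexicographically smaller byte strings; sorted duplicates
        # then read newest-first.
        return struct.pack('>Q', MAX_U64 - ts)

    # The later timestamp sorts before the earlier one:
    assert inverse_key(200) < inverse_key(100)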
    I could model this instead as {oid+timestamp -> data}, but then I'd
    have to use the btree access method, and I don't expect that to scale
    as I'll have hundreds of millions of objects.
    We tried using BDB as a back end for our database (ZODB) several years
    ago and it didn't scale well. I think there were 2 reasons for
    this. First, we used the btree access method for all of our
    databases. Second, we used too many tables. This time, I'm hoping that
    the hash access method and a leaner design will provide better
    scalability. We'll see. :)
    If you want to start on a key partial match you should use the DB_SET_RANGE flag instead of the DB_SET flag.

    I don't want to do a key partial match.
    Indeed, with DB_GET_BOTH_RANGE you can do partial matches in the duplicate data set, and this should be used only if you look for duplicate data sets.

    I can't know ahead of time whether there will be duplicates for an object. So that means I have to potentially do the query two ways, which is quite inconvenient.
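    In case it helps anyone hitting the same wall, the two-way query for the exact-prefix case (query B) can be wrapped in a small helper; this is hypothetical code using the same bsddb3 cursor API as the example above, not a built-in BDB feature:

    def prefix_get(cursor, key, prefix):
        # DB_GET_BOTH_RANGE only matches when the key has duplicates,
        # so fall back to positioning on the key's sole record when it
        # returns nothing, then check the prefix by hand either way.
        rec = cursor.get(key, prefix, flags=db.DB_GET_BOTH_RANGE)
        if rec is None:
            rec = cursor.set(key)
        if rec is not None and not rec[1].startswith(prefix):
            rec = None
        return rec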
    As you saw, the flags you can use with cursor.get are described in detail here.

    But it wasn't at all clear from the documentation that DB_GET_BOTH_RANGE wouldn't work unless there were duplicates. As I mentioned earlier, I think if this was documented more clearly, and especially if there was an example of how one would work around the behavior, someone would have figured out that the behavior wasn't very useful.
    What you should know is that the usual piece of information after which the flags access the records is the key. What I advise is to look over secondary indexes and foreign key indexes, as you may need them in implementing your queries.

    I don't see how secondary indexes help in this situation.
    BDB is used as the storage engine underneath RDBMSs. In fact, BDB was the first "generic data storage library" implemented underneath MySQL. As such, BDB has API calls and access methods that can support any RDBMS query. However, since BDB is just a storage engine, your application has to provide the code that accesses the data store with an appropriate sequence of steps that will implement the behavior that you want.

    Yup.
    Sometimes you may find it unsatisfying, but it may be more efficient than you think.

    Sure, I just think the notion that DB_GET_BOTH_RANGE should fail if the number of records for a key is 1 is rather silly. It's hard for me to imagine that it would be less efficient to handle the non-duplicate case. It is certainly less efficient to handle this at the application level, as I'm likely to have to implement it with multiple database queries. Hopefully, BDB's cache will mitigate this.
    Thanks again for digging into this.
    Jim
