PDETextItem for UTF-8 characters

Hi, I want to retrieve text with "PDETextItemCopyText". Standard characters copy well, but UTF-8 characters such as the Arabic letters
(ت ی چ خ ز ر س , ....)
do not.
What can I do to solve this problem?
// Get the page's content stream and resources
CosObj kid1  = CosArrayGet(obj3, pageNumber);
CosObj cosst = CosDictGet(kid1, ASAtomFromString("Contents"));
CosObj resou = CosDictGet(kid1, ASAtomFromString("Resources"));
PDEContent pddd = PDEContentCreateFromCosObj(&cosst, &resou);
// Fetch the i-th element and cast it to a text object
PDEElement ele = PDEContentGetElem(pddd, i);
PDEText tText = reinterpret_cast<PDEText>(ele);
PDETextItem pdeti = PDETextGetItem(tText, f);
// Copy the item's bytes into a zero-terminated buffer
ASUns32 lll = PDETextItemGetTextLen(pdeti);
buf = (ASUns8*)ASmalloc(lll + 1);
memset(buf, 0, lll + 1);
PDETextItemCopyText(pdeti, buf, lll);
memcpy((char*)tempref, (char*)buf, lll);
The original text looks like this, and this is what I get after fetching the text and reproducing the PDF file.
Thanks for your help.

Is there any way to use PDETextGetItem and save the correct characters?
Or is there another function to retrieve text?
I can't find that API; please share a link to a web page.
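Before changing the extraction code, it may help to verify whether the bytes PDETextItemCopyText returns are valid UTF-8 at all; for Arabic text they are often raw, font-specific character codes rather than Unicode. The check itself is language-neutral; here is a minimal sketch in Java (class and method names are mine, for illustration):

```java
import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.CodingErrorAction;
import java.nio.charset.StandardCharsets;

public class Utf8Check {
    // Returns true only if the byte array is well-formed UTF-8.
    static boolean isValidUtf8(byte[] bytes) {
        try {
            StandardCharsets.UTF_8.newDecoder()
                    .onMalformedInput(CodingErrorAction.REPORT)
                    .onUnmappableCharacter(CodingErrorAction.REPORT)
                    .decode(ByteBuffer.wrap(bytes));
            return true;
        } catch (CharacterCodingException e) {
            return false;
        }
    }
}
```

If the check fails on the copied buffer, the bytes are font-encoded character codes, and decoding them as UTF-8 cannot work no matter how the buffer is handled afterwards.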

Similar Messages

  • Any recommendations for an editor to help find invalid UTF-8 characters?

    I'm generating some XML programmatically. The content has some old invalid characters which are not encoded as UTF-8. For example, in one case there is a long dash character that I can clearly see is ISO-8859-1 in a browser, because if I force the browser to view in ISO-8859-1 I can see the character, but if I view in UTF-8 the characters look like the black-diamond-with-a-question-mark.
BBEdit will warn me that the file contains invalid UTF-8, but it doesn't show me where those characters are.
If I try to load the XML in a browser like Chrome, it will tell me where the first instance of an invalid character is, but not where all of them are. So I was able to locate the one you see in the screenshot and go in and manually fix that one entry. But in BBEdit it again just shows a default invalid-character symbol.
What I'd like to be able to do are two things:
(1) Find all invalid characters so I can fix them all at once, without repeatedly hitting "find the first invalid character" failures when loading the XML in a browser.
(2) Know what the characters are (rather than generically seeing a bad-character symbol) so I can programmatically strip them out or substitute them when generating the XML. So I need to know the character values (e.g. the hex values) for those characters, so I can use the replace() method in my server-side JavaScript to get rid of them.
    Anybody know a good editor I can use for these purposes?
    Thanks,
    Doug

Well, now BBEdit doesn't complain anymore about invalid UTF-8 characters; I've gotten rid of all of them as far as BBEdit is concerned. But Chrome and other browsers still report a few. I'm trying to quash them now, but because the browsers only report them one by one, I have to generate the XML multiple times to track them down, which takes an hour per run.
I think there are only a few left: one at line 180,000, the next at around line 450,000. There are only about 600,000 lines in the file, so I think I'll be done soon. Still, it would be nice to have a Mac tool that would locate all the invalid characters the browsers are choking on, so I could fix them in one sweep. It would save hours.
    Does anybody know of such a tool for the Mac?
    Thanks,
    Doug
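For what it's worth, the search described above doesn't need a browser; any language with a strict UTF-8 decoder can report every bad offset in one pass. A minimal Java sketch (class and method names are mine), using the JDK's CharsetDecoder in REPORT mode:

```java
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CoderResult;
import java.nio.charset.CodingErrorAction;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class BadUtf8Finder {
    // Returns the byte offset of every malformed UTF-8 sequence in the input.
    static List<Integer> invalidOffsets(byte[] data) {
        List<Integer> offsets = new ArrayList<>();
        CharsetDecoder dec = StandardCharsets.UTF_8.newDecoder()
                .onMalformedInput(CodingErrorAction.REPORT)
                .onUnmappableCharacter(CodingErrorAction.REPORT);
        ByteBuffer in = ByteBuffer.wrap(data);
        CharBuffer out = CharBuffer.allocate(data.length); // a byte never decodes to more than one char
        while (in.hasRemaining()) {
            CoderResult r = dec.decode(in, out, true);
            if (r.isError()) {
                offsets.add(in.position());              // offset of the bad sequence
                in.position(in.position() + r.length()); // skip it and keep scanning
            } else {
                break; // underflow: the rest decoded cleanly
            }
        }
        return offsets;
    }
}
```

Reading the file is then just Files.readAllBytes(...), and each reported offset can be mapped back to a line by counting 0x0A bytes before it.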

  • RegExp for excluding special characters in a string.

    Hi All,
I'm using the Flex RegExpValidator. Can anyone suggest the correct expression to validate this condition?
I have tried this expression: /^[^///\/</>/?/*&]+$/ ... but it also negates the alphabetic characters. I have also tried the opposite condition, that the string should contain only alphabets, with the expression ([a-z]|[A-Z]|[0-9]|[ ]|[-]|[_])*. Please, can anyone help me with this?
Thanks in advance to all.
    Munira

Sorry, but you are posting things back that do not make any sense.
What do you mean by the comment below?
    munira06 wrote:
    Yes you are correct ,but I have tried this with single special character
    say
    Re: RegExp for excluding special characters in a string.
    here is a sample app taken from the live docs
    using ^[a-zA-Z0-9 \-_]*$ as the regex accepts all characters from a-z, A-Z, 0-9 - [space] and_
    run the example tell me what regex you are using and what test strings fail when they should pass or pass when they should fail
    <?xml version="1.0" encoding="utf-8"?>
    <!-- Simple example to demonstrate the RegExpValidator. -->
    <s:Application xmlns:fx="http://ns.adobe.com/mxml/2009"
            xmlns:s="library://ns.adobe.com/flex/spark"
            xmlns:mx="library://ns.adobe.com/flex/mx">
        <fx:Script>
            <![CDATA[
                import mx.events.ValidationResultEvent;
                import mx.validators.*;
                // Write the results to the reResults TextInput.
                private function handleResult(eventObj:ValidationResultEvent):void {
                    if (eventObj.type == ValidationResultEvent.VALID) {
                        // For valid events, the results Array contains
                        // RegExpValidationResult objects.
                        var xResult:RegExpValidationResult;
                        reResults.text = "";
                        for (var i:uint = 0; i < eventObj.results.length; i++) {
                            xResult = eventObj.results[i];
                            reResults.text = reResults.text + xResult.matchedIndex + " " + xResult.matchedString + "\n";
                        }
                    } else {
                        reResults.text = "";
                    }
                }
            ]]>
        </fx:Script>
        <fx:Declarations>
            <mx:RegExpValidator id="regExpV"
                    source="{regex_text}" property="text"
                    flags="g" expression="{regex.text}"
                    valid="handleResult(event)"
                    invalid="handleResult(event)"
                    trigger="{myButton}"
                    triggerEvent="click"/>
        </fx:Declarations>
        <s:Panel title="RegExpValidator Example"
                width="75%" height="75%"
                horizontalCenter="0" verticalCenter="0">
            <s:VGroup left="10" right="10" top="10" bottom="10">
                <s:Label width="100%" text="Instructions:"/>
                <s:Label width="100%" text="1. Enter text to search. By default, enter  a string containing the letters ABC in sequence followed by any digit."/>
                <s:Label width="100%" text="2. Enter the regular expression. By default, enter ABC\d."/>
                <s:Label width="100%" text="3. Click the Button control to trigger the validation."/>
                <s:Label width="100%" text="4. The results show the index in the text where the matching pattern begins, and the matching pattern. "/>
                <mx:Form>
                    <mx:FormItem label="Enter text:">
                        <s:TextInput id="regex_text" text="xxxxABC4xxx" width="100%"/>
                    </mx:FormItem>
                    <mx:FormItem label="Enter regular expression:">
                        <s:TextInput id="regex" text="ABC\d" width="100%"/>
                    </mx:FormItem>
                    <mx:FormItem label="Results:">
                        <s:TextInput id="reResults" width="100%"/>
                    </mx:FormItem>
                    <mx:FormItem >
                        <s:Button id="myButton" label="Validate"/>
                    </mx:FormItem>
                </mx:Form>
            </s:VGroup>
        </s:Panel>
    </s:Application>
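For comparison outside Flex, the same whitelist expression behaves identically in most regex engines; a minimal Java sketch (class name is mine) of the ^[a-zA-Z0-9 \-_]*$ check:

```java
import java.util.regex.Pattern;

public class WhitelistRegex {
    // Whitelist: letters, digits, space, hyphen, underscore only.
    // Escaping the hyphen (\-) keeps it from being read as a range.
    private static final Pattern ALLOWED = Pattern.compile("^[a-zA-Z0-9 \\-_]*$");

    static boolean isValid(String s) {
        return ALLOWED.matcher(s).matches();
    }
}
```

A whitelist like this is usually easier to get right than a negated character class, which is why the original /^[^...]+$/ attempt kept rejecting letters.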

UTF-8 characters (e.g. À) are not supported in batch files

I am trying to add a user with a batch file, using UTF-8 characters in the username. I am running the command: net user Àdmin "sdf" /ADD /FULLNAME:Àdmin /COMMENT:"description". I observed that the user is added successfully, but if I look at the local users
it shows ├Çdmin in place of Àdmin.
If I run the command manually at the cmd prompt it works as expected, but it does not work when I execute it from a .bat file.
It looks like a limitation of batch files.
I have gone through a few forums and found that this could be a problem with the DOS version.
I am using DOS version 6.1.7600.
Could anybody help with this?
Thanks,

chcp 65001 helped me run UTF-8 batch files.
    example for unicode character ⬥:
    chcp 65001
    C:\Tools\Code128\Code128Gen.exe 0 "aCLa" 2 D:\Out\128_N2T60_⬥CL⬥.bmp
    C:\Tools\Code128\Code128Gen.exe 0 "aCLAa" 2 D:\Out\128_N2T60_⬥CLA⬥.bmp
    Important:
If your .bat file begins with a BOM, remove it (switch the encoding to "UTF-8 without BOM"); otherwise the interpreter will complain about the first line of your batch, for example:
C:\Tools\Code128>´╗┐chcp 65001
'´╗┐chcp' is not recognized as an internal or external command,
operable program or batch file.
After some time, the original UTF-8 batch file stopped working normally on commands containing non-ASCII characters. The commands executed as before (producing correct output), but this misformatted message was shown in the output of each:
C:\Tools\Code128>C:\Tools\Code128\Code128Gen.exeThe system cannot write to the specified device.

  • How to validate UTF-8 characters using Regex?

    Hi All,
In one of my applications, I need to include the UTF-8 character set in the validation of a certain string, which I am validating with a regex.
However, I do not know how to include UTF-8 characters in a regex, or whether we can specify UTF-8 characters in a regex at all.
Please help! It's urgent!
    Thanks in Advance,
    Rajat Aggarwal

OK, let me re-state my problem, and exactly what I am looking for:
I have an XML file with the following header: <?xml version="1.0" encoding="UTF-8"?>
This XML file contains a tag whose text is to be validated against the syntax: Operand operator Operand.
Now, the operand on the right-hand side of the operator could be a variable, or a string literal, which may contain some permissible special characters (as said above), and may or may not contain UTF-8 characters as well.
I am using the Xerces SAXParser to parse the XML document, and am retrieving the text of the element tag with the method element.getChildText("<tagName>").
According to the org.jdom.Element API docs, the getChildText() method is defined as follows:
public java.lang.String getChildText(java.lang.String name)
Returns the textual content of the named child element, or null if there's no such child. This method is a convenience because calling getChild().getText() can throw a NullPointerException.
Parameters: name - the name of the child. Returns: text content for the named child, or null if no such child.
Now, I am not sure whether the String I am reading is in UTF-8 format. Is there any special way of reading a string in that format, or, for that matter, of converting a string to UTF-8 encoding?
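On the last question: a Java String has no encoding of its own; a charset applies only when converting to or from bytes. A minimal sketch (class and method names are mine) that decodes bytes explicitly as UTF-8 and then validates with Unicode-aware character classes, which match letters in any script:

```java
import java.nio.charset.StandardCharsets;
import java.util.regex.Pattern;

public class UnicodeRegex {
    // \p{L} matches a letter in any script, \p{N} any numeric character,
    // so non-ASCII operands pass where [a-zA-Z0-9] would fail.
    private static final Pattern OPERAND = Pattern.compile("^[\\p{L}\\p{N}_]+$");

    static boolean isOperand(byte[] utf8Bytes) {
        String s = new String(utf8Bytes, StandardCharsets.UTF_8); // decode explicitly as UTF-8
        return OPERAND.matcher(s).matches();
    }
}
```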

  • UTF-8 characters in database

    Hi there,
I'm having problems with UTF-8 characters displaying incorrectly. The problem seems to be that the Content-Type HTTP header declares the character set as "Windows-1252" when it should be UTF-8. There is a demonstration of the problem here:
    http://teasel.homeunix.net/~rah/screenshots-unicode-db/apex-unicode-display.html
I thought I'd solved this previously, because I changed the nls_lang variable in wdbsvr.app to use single quotes instead of double quotes. After I did this, the server started sending Content-Type HTTP headers with UTF-8 in them, and the characters in the example above displayed fine. For some reason it seems to have reverted to sending "Windows-1252", and I don't know why.
    I noticed that the logs contain the following lines:
    [Wed Nov  1 09:50:45 2006] [alert] mod_plsql: Wrong language for NLS_LANG 'ENGLISH_UNITED KINGDOM.AL32UTF8' for apex DAD
    [Wed Nov  1 09:50:45 2006] [alert] mod_plsql: Wrong charset for NLS_LANG 'ENGLISH_UNITED KINGDOM.AL32UTF8' for apex DAD
    Any help getting it to send UTF-8 would be greatly appreciated.
    Thanks,
    Robert

> However, I believe that it is correct to use double quotes rather than single quotes where the setting in the nls_lang contains a space.
Well, this is odd; using double quotes gives the same error, but with double quotes instead of single in the error message:
[Thu Nov  2 09:43:42 2006] [alert] mod_plsql: Wrong language for NLS_LANG "ENGLISH_UNITED KINGDOM.AL32UTF8" for apex DAD
[Thu Nov  2 09:43:42 2006] [alert] mod_plsql: Wrong charset for NLS_LANG "ENGLISH_UNITED KINGDOM.AL32UTF8" for apex DAD
This implies that the parser is pulling the string out verbatim and including the quotes when it shouldn't. Lo and behold, removing the quotes entirely:
nls_lang = ENGLISH_UNITED KINGDOM.AL32UTF8
causes the error to go away and the HTTP headers to declare UTF-8; problem solved.
I'm loath to step ahead again and say that the docs need updating, but it certainly looks that way.
    Robert

  • Alogrithm for converting Unicode characters to EBCDIC

    I would like to know if there is any algorithm for converting Unicode Characters to EBCDIC.
    Awaiting your replys
    Thanks in advance,
    Ravi

> Isn't EBCDIC a 7-bit code like ASCII? Unicode is 16-bit. This means there is no way Unicode can be mapped onto EBCDIC without loss of information.
No. That is like saying that since UTF-8 is 8-bit based it can't be mapped to UTF-16. But it can.
EBCDIC either directly supports, or has versions which support, multibyte character sets. A multibyte character set can encode any fixed-size character set. The basic idea is the same way UTF-8 works.
Multibyte character sets have the added benefit that most of the data in the world is from the ASCII character set, and the encodings always support that using only 8 bits. Thus the memory savings over UTF-16 (or UTF-32) are significant.
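Concretely, the conversion asked about is available in the JDK as an ordinary charset mapping; a minimal sketch (class and method names are mine) using IBM037, the US/Canada EBCDIC code page, assuming the JDK's extended charsets (the jdk.charsets module) are present:

```java
import java.nio.charset.Charset;

public class EbcdicConvert {
    // IBM037 is a single-byte EBCDIC code page; characters outside it are
    // replaced rather than converted, which is the lossiness discussed above.
    private static final Charset EBCDIC = Charset.forName("IBM037");

    static byte[] toEbcdic(String s) {
        return s.getBytes(EBCDIC);
    }

    static String fromEbcdic(byte[] b) {
        return new String(b, EBCDIC);
    }
}
```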

  • Printing UTF-8 text via command line (i.e. 10.5-ready enscript for UTF-8)

    Hello,
    (I asked this question in the Leopard discussion section, but was advised to use this forum.)
I am using some terminal-based tools like Alpine and Vim, both with UTF-8 text encoding. Printing, however, is a problem, because UTF-8 characters are not printed correctly. For Alpine I use enscript, but it does not support UTF-8. (Currently I convert UTF-8 to latin1 first using iconv before sending it to enscript, but special characters get lost that way.) For Vim I use CUPS-PDF, a virtual PDF printer, but here too UTF-8 characters do not make it through (even though CUPS-PDF works for UTF-8 in Cocoa apps).
    Paps as an alternative to enscript may work, but I don't want to spend much time currently installing all necessary libraries.
    I am wondering if I can use Leopard's printing natively. *Is there a command line tool ready for OS 10.5 that accepts UTF-8 text and converts it to PS/PDF or sends it directly to a selected printer queue?*
    Thanks!

I think you can use enscript for 8-bit code; you just have to set the option. If you check the user's guide you may find an option to honor 8-bit code. I haven't tried it, but my .enscriptrc looks like this:
    cat .enscriptrc
    # GNU Enscript configuration file.
    # Author: Stanley Williams
    # Local Version
    # How characters greater than 127 are printed:
    # 1 generate clean 7-bit code; print characters greater than 127
    # in the backslash-octal notation `\xxx' (default)
    # 0 generate 8-bit code
    Clean7Bit: 1
    # Default input encoding.
    # DefaultEncoding: mac
    # Default output media.
    DefaultMedia: Letter
    # Media definitions:
    # name width height llx lly urx ury
    Media: Legal 612 1008 24 24 588 984
    Media: Letter 612 792 36 36 576 756
    # Pass following Page Device options to the generated output.
    # Duplex printing.
    SetPageDevice: Duplex:true
    Hope this helps.

  • How to store UTF-8 characters in an iso-8859-1 encoded oracle database?

How can we store UTF-8 characters in an ISO-8859-1 encoded Oracle database? We cannot change the database encoding, but we need to store e.g. Polish or Russian characters besides other European languages.
Is there any stable solution with good performance?
    We use Oracle 8.1.6 with iso-8859-1 encoding, Bea WebLogic 7.0, JDK 1.3.1 and the following thin driver: "Oracle JDBC Driver version - 9.0.2.0.0".

    There are a couple of unsupported options, but I wouldn't consider using them on a production database running other critical applications. I would also strongly discourage their use unless you understand in detail how Oracle National Language Support (NLS) works, otherwise you could end up with corrupt data or worse.
In a sense, you've been asked to do the impossible. The existing database character set does not support encoding the data you've been asked to store.
    Can you create a new database with an appropriate database character set and deploy your application there? That's probably the easiest solution.
If that isn't an option, and you really need to store data in this database, you could use one of the binary data types (RAW and BLOB), but that would make it exceptionally difficult for applications other than yours to extract the data. You would have to ensure that the data was always encoded in the same character set, otherwise you wouldn't be able to properly decode it later. This would also add a lot of complexity to your application, since you couldn't send or receive string data from the database.
    Unfortunately, I suspect you will have to choose from a list of bad options.
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC
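The RAW/BLOB route described above amounts to fixing one character set for both directions and never letting the database reinterpret the bytes; a minimal sketch of the encode/decode pair (class and method names are mine):

```java
import java.nio.charset.StandardCharsets;

public class BlobEncoding {
    // Encode with one fixed charset before writing to a RAW/BLOB column...
    static byte[] forStorage(String text) {
        return text.getBytes(StandardCharsets.UTF_8);
    }

    // ...and decode with the same charset on the way out. Any mismatch
    // between the two sides corrupts the round trip.
    static String fromStorage(byte[] raw) {
        return new String(raw, StandardCharsets.UTF_8);
    }
}
```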

  • GetClob not working with UTF-8 characters

    Hi,
I have a column with data type CLOB in a table in an Oracle DB. I want to store
and retrieve CJK (Chinese, Japanese, Korean) data. I have tried all the
result-set functions provided by Oracle, but I was not able to get UTF-8
characters from the CLOB column. Please let me know how I can get UTF-8 data
from a CLOB column in an Oracle DB.
    Thanks,
    Naval.

CLOB may support Unicode, but isn't NCLOB meant specifically for Unicode?
    as the document "Migration to Unicode Datatypes for Multilingual Databases and
    Applications" says "Unicode datatypes were introduced in Oracle9i. Unicode datatypes are supported through the SQL
    NCHAR datatypes: NCHAR, NVARCHAR2, and NCLOB. (In this paper, “Unicode datatypes” refers
    to SQL NCHAR types.) SQL NCHAR datatypes have existed since Oracle8. However, in Oracle9i
    forward, they have been redefined and their length semantics have been changed to meet customer
    globalization requirements. Data stored in columns of SQL NCHAR datatypes are exclusively stored in
    a Unicode encoding regardless of the database character set. These Unicode columns allow users to
    store Unicode in a database, which may not use Unicode as the database character set. Therefore,
    developers can build Unicode applications without dependence on the database character set. The
    Unicode datatypes also make it easier for customers to incrementally migrate existing applications and
    databases to support Unicode."

  • Importing a thesaurus with UTF-8 characters in it?

I have a *.syn file with UTF-8 characters in it (Danish characters), and the file is encoded in UTF-8. I can import the thesaurus without any problems, and if I export it again it looks good: it is encoded in UTF-8 and the Danish characters are intact.
However, if I try to search with broaden or narrow, I never get any hits when Danish characters are involved, though it works fine for ASCII characters.
Maybe I should note that searching with Danish characters without using the thesaurus works fine.
    Any ideas where I should look for my problem?
    Thank you
    Søren

    NLS_LANG is a system environment variable.
    See the Globalization Support Guide:
    http://download.oracle.com/docs/cd/B10500_01/server.920/a96529/ch3.htm#5014
    How are you loading the thesaurus? Via the command line admin utility or via web services? If it's web services then setting the environment variable may not work.

  • ANSI for UTF-8

    Hi,
We are receiving files as ANSI for UTF-8, and when we get such a file from the MQ JMS adapter we get the message: com.sap.aii.utilxi.misc.api.BaseRuntimeException: Invalid byte 2 of 2-byte UTF-8.
If we manually convert the file to UTF-8 it works OK.
We are using CCSID 1143, and we tried others too, like 1208 and 1140.
Any help appreciated.
    Tony

It was a server problem, not SAP.

  • SetMnemonic for non-english characters

Does anybody know how to set a JButton's mnemonic for non-English characters?
My mnemonic is loaded from a resource bundle, and in the documentation setMnemonic(char) is limited to English; it is written that the user should call setMnemonic(int) instead.
So what value should this int contain in order to display the non-English char loaded from the resource bundle?
Thanks in advance,
    Hanoch

    It seems that this is an issue that has popped up in various forums before, here's one example from last year:
    http://forum.java.sun.com/thread.jspa?forumID=16&threadID=490722
    This entry has some suggestions for handling mnemonics in resource bundles, and they would take care of translated mnemonics - as long as the translated values are restricted to the values contained in the VK_XXX keycodes.
And since those values are basically the English (ASCII) character set plus a bunch of function keys, it doesn't solve the original problem: how to specify mnemonics that are not part of the English character set. The more I look at this, the less I understand the reason for making setMnemonic(char) obsolete and making setMnemonic(int) the default. If anything, this has made the method more difficult to use.
    I also don't understand the statement in the API about setMnemonic (char mnemonic):
    "This method is only designed to handle character values which fall between 'a' and 'z' or 'A' and 'Z'."
    If the type is "char", why would the character values be restricted to values between 'a' and 'z' or 'A' and 'Z'? I understand the need for the value to be restricted to one keystroke (eliminating the possibility of using ideographic characters), but why make it impossible to use all the Latin-1 and Latin-2 characters, for instance? (and is that in fact the case?) It is established practice on other platforms to be able to use characters such as '�', '�' and '�', for instance.
    And if changes were made, why not enable the simple way of specifying a mnemonic that other platforms have implemented, by adding an '&' in front of the character?
    Sorry if this disintegrated into a rant - didn't mean to... :-) I'm sure there must be good reasons for the changes, would love to understand them.
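In Java 7 and later, one route for non-English mnemonics is KeyEvent.getExtendedKeyCodeForChar, which maps an arbitrary Unicode character to a key code that setMnemonic(int) accepts; a minimal sketch (class and method names are mine):

```java
import java.awt.event.KeyEvent;

public class MnemonicDemo {
    // Maps a character (English or not) to the key code expected by
    // AbstractButton.setMnemonic(int). For characters with no VK_ constant
    // an extended key code is returned.
    static int mnemonicFor(char c) {
        return KeyEvent.getExtendedKeyCodeForChar(c);
    }
}
```

Usage would then be something like button.setMnemonic(MnemonicDemo.mnemonicFor(bundle.getString("mnemonic").charAt(0))); note the mnemonic can only be triggered if the user's keyboard can actually produce that character.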

  • UTF-8 characters not displaying in IE6

    Dear Sirs,
I have an issue displaying UTF-8 characters in Internet Explorer 6.
I set up all my JSP pages with UTF-8 encoding.
Characters from any language (Chinese, Tamil, etc.) display perfectly in the Firefox browser.
But in Internet Explorer the characters are not displayed; it shows something like ?! instead.
Could anybody help me out?
Thanks
mulaimaran

Thanks Viravan,
But I have added this line in my JSP before the html tag:
<%@ page contentType="text/html;charset=UTF-8" pageEncoding="UTF-8" %>
After the html tag, I added this meta tag:
<META http-equiv="Content-Type" content="text/html;charset=UTF-8">
So the UTF-8 encoding is able to show different language characters in the Firefox browser, but in Internet Explorer 6 other language characters are not displayed.
> jsp sends out the UTF-8 BOM (hex: EF BB BF) before the HTML tag.
I can't understand this line; I'm new to Java. So please help me out.
Thanks
mullaimaran
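Regarding the quoted hint: the UTF-8 BOM is the three bytes EF BB BF at the very start of the response, and some servers emit it before the HTML, which IE6 can render as junk. A minimal sketch (class and method names are mine) that detects and strips it from a byte array:

```java
import java.util.Arrays;

public class BomStrip {
    // Removes a leading UTF-8 BOM (EF BB BF) if present; the bytes that
    // follow are returned unchanged.
    static byte[] stripBom(byte[] b) {
        if (b.length >= 3 && (b[0] & 0xFF) == 0xEF
                          && (b[1] & 0xFF) == 0xBB
                          && (b[2] & 0xFF) == 0xBF) {
            return Arrays.copyOfRange(b, 3, b.length);
        }
        return b;
    }
}
```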

  • PDF generation for Non English Characters from ADF

    Hi
We are using the piece of code below to generate a PDF from an ADF managed bean. It works fine; however, for non-English characters (e.g. Japanese, Vietnamese, Arabic) it puts '???' in the output.
I found a few blogs:
https://blogs.oracle.com/BIDeveloper/entry/non-english_characters_appears
However, we are not using the BI Publisher product; we are using its APIs.
Can anyone tell me where we need to set up fonts: within ADF, WebLogic, or the server?
The input parameters are:
a) XML data
b) an InputStream, i.e. the RTF template
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import oracle.apps.xdo.XDOException;
import oracle.apps.xdo.template.FOProcessor;
import oracle.apps.xdo.template.RTFProcessor;
    public static byte[] genPdfRep(String pOutFileType, byte[] pXmlOut, InputStream pTemplate) {
        byte[] dataBytes = null;
        try {
            // Process the RTF template to convert it to XSL-FO format
            RTFProcessor rtfp = new RTFProcessor(pTemplate);
            ByteArrayOutputStream xslOutStream = new ByteArrayOutputStream();
            rtfp.setOutput(xslOutStream);
            rtfp.process();
            // Use the XSL template and the XML data to generate the report,
            // returning the report's bytes
            ByteArrayInputStream xslInStream = new ByteArrayInputStream(xslOutStream.toByteArray());
            FOProcessor processor = new FOProcessor();
            ByteArrayInputStream dataStream = new ByteArrayInputStream(pXmlOut);
            processor.setData(dataStream);
            processor.setTemplate(xslInStream);
            ByteArrayOutputStream pdfOutStream = new ByteArrayOutputStream();
            processor.setOutput(pdfOutStream);
            processor.setOutputFormat(FOProcessor.FORMAT_PDF); // or FOProcessor.FORMAT_HTML
            processor.generate();
            dataBytes = pdfOutStream.toByteArray();
        } catch (XDOException e) {
            e.printStackTrace();
        }
        return dataBytes;
    }
    Appreciate your help.
    Thanks,
    Abhijit

The fonts are defined in the template you use to generate the PDF. Your application adds the data, and both are processed by the FO processor. Now there are two possible causes of the '???':
1. the data you send to the template already contains the '???'
2. the template can't digest the data (the special characters) and puts '???' in the PDF.
Before going on, you have to find out which one is your problem. If it's the second, you'd better ask in a FOP forum, as you have to solve it by changing the template.
Timo
