How to extract multibyte characters?
Hi,
I have used an Acrobat plug-in and could extract ASCII characters to a text file.
I want to extract multibyte characters (Chinese, Korean).
Could you help me if you know how to do it?
Another question: how can I debug a plug-in (for example, step by step) in an IDE?
Similar Messages
-
How to extract special characters from a field
Hi there.
I am a novice when it comes to Web Intelligence reports. I have been tasked with producing a final report that I can export to Excel which shows a project number and a project name. Very simple. Ultimately I need to import my final Excel file to a third-party software. However, the issue is the project name field consists of special characters such as hyphens, parentheses, asterisks, etc. I need to be able to create a formula which strips all special characters and just leaves me with alphanumeric characters and spaces. I also need to limit the character count to no more than 34 characters. I will not be able to import my final Excel file unless I can produce those results. With the help of a very knowledgeable person in the Crystal Reports forum, I was able to do that by using the following formula:
stringvar a:=left({Projects.ProjectName},34);
numbervar i;
Local StringVar fin;
for i:= 1 to len(a) do
if a[i] in ["a" to "z"] then
fin := fin & a[i]
else if a[i] in ["0" to "9"] then
fin := fin & a[i]
else if a[i] = " " then
fin := fin & a[i];
fin;
It worked amazingly well in Crystal Reports, but this report now needs to move to Web Intelligence. Is there a way to do this in a Web Intelligence report? I tried, but the formula is not recognizable. I tried creating a variable and using this formula, but I couldn't get it to work. I appreciate all helpful responses. The version of Web Intelligence I am using is SAP BusinessObjects Enterprise XI. I am working in a PC environment using Windows 7, if that helps at all.
Thank you!
Lauren
Hi Lauren, please provide some sample data...
In SQL, by writing some Functions/Procedures you can eliminate special characters. Ask your database team to do that.
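As a quick sanity check outside the reporting tools, the same cleanup rule (keep letters, digits and spaces, then cap at 34 characters) can be sketched in Java. The class and method names here are made up for illustration; note this removes the characters rather than replacing them with spaces:

```java
public class NameCleaner {
    // Keep only letters, digits and spaces, then truncate to 34 characters.
    public static String clean(String projectName) {
        String kept = projectName.replaceAll("[^A-Za-z0-9 ]", "");
        return kept.substring(0, Math.min(34, kept.length()));
    }

    public static void main(String[] args) {
        // Same sample input as the nested-REPLACE example in this thread:
        System.out.println(clean("~!@#$%^&*()_+{}:\"<>-=[]./\\|/*-+ascdfv123")); // ascdfv123
    }
}
```
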
or
Create an object with the following syntax... It Works in SQL.
= REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE
(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE
(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE
(REPLACE(REPLACE(REPLACE([String_Object],'&',' '),'*',' '),'(',' '),'^',' '),'%',' '),'!',' '),'@',' '),'#',' '),'$',' '),'~',' '),')',' '),'+',' '),'=',' ' ),'-',' '),'_',' '),'/',' '),':',' '),';',' '),',',' '),'<',' '),'.',' '),'>',' '),'?',' '),'"',' '),'''',' '),'[',' '),']',' '),'{',' '),'}',' '),'\',' '),'|',' ')
Example: Input: '~!@#$%^&*()_+{}:"<>-=[]./\|/*-+ascdfv123'
Output: ascdfv123 -
How to extract text from a PDF file?
Hello Suners,
I need to know how to extract text from a PDF file.
Does anyone know what the character encoding in a PDF file is? When I use an input stream to read the file, it gives garbled characters, not the original text in the file.
Are there any procedures I should follow while reading a PDF file?
File f = new File("D:/File.pdf");
FileReader fr = new FileReader(f);
BufferedReader br = new BufferedReader(fr);
String s = br.readLine();
Any help will be deeply appreciated.
jverd wrote:
First, you set i once, and then loop without ever changing it. So your loop body will execute either 0 times or infinitely many times, writing the same byte every time. Actually, maybe it'll execute once and then throw an ArrayIndexOutOfBoundsException. That's basic Java looping, and you're going to need a firm grip on that before you try to do anything as advanced as PDF reading.
Oops, you are absolutely right, that was a silly mistake.
Second, what do the docs for getPageContent say? Do they say that it simply gives you the text on the page as if the thing were a simple text doc? I'd be surprised if that's the case.
getPageContent returns an array of bytes, so the question will be:
how to get text from this array? I was thinking of:
private void jButton1_actionPerformed(ActionEvent e) {
    PdfReader read;
    StringBuffer buff = new StringBuffer();
    try {
        read = new PdfReader("d:/getjobid2727.pdf");
        read.getMetaData();
        byte[] data = read.getPageContent(1);
        int i = 0;
        while (i > -1) {
            buff.append(data[i]);
            i++;
        }
        String str = buff.toString();
        FileOutputStream fos = new FileOutputStream("D:/test.txt");
        Writer out = new OutputStreamWriter(fos, "UTF8");
        out.write(str);
        out.close();
        read.close();
    } catch (Exception f) {
        f.printStackTrace();
    }
}
"D:/test.txt" wasn't created when I ran the program!
Are my steps right? -
How to extract data from custom made Idoc that is not sent
Hi experts,
Could you please advise whether there is a way to extract data from a custom-made IDoc (it collects a lot of data from different SAP tables)? Please note that this IDoc is not sent, as the target system is not fully maintained.
For now, we would like to verify what data is extracted.
Any help would be appreciated!
Hi,
The fields for each segment have their lengths given in table EDSAPPL. How to map them is explained in the example below.
Suppose for segment1, EDSAPPL has 3 fields so below are entries
SEGMENT FIELDNAME LENGTH
SEGMENT1 FIELD1 4
SEGMENT1 FIELD2 2
SEGMENT1 FIELD3 2
Data in EDID4 would be as follows
IDOC SEGMENT APPLICATION DATA
12345 SEGMENT1 XYZ R Y
When you are extracting data from these tables into your internal table, mapping has to be as follows:
FIELD1 = APPLICATIONDATA+0(4) to read first 4 characters of this field, because the first 4 characters in this field would belong to FIELD1
Similarly,
FIELD2 = APPLICATIONDATA+4(2).
FIELD3 = APPLICATIONDATA+6(2).
FIELD1 would have XYZ, FIELD2 = R, FIELD3 = Y
This would remain true in all cases. So all you need to do is identify which fields you want to extract, and simply code as above to extract the data from this table.
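The fixed-offset mapping above can be sketched for illustration in Java (the lengths 4/2/2 are the hypothetical ones from the EDSAPPL example; the class name is made up):

```java
public class SegmentSplitter {
    // Split the application-data string at fixed offsets 0(4), 4(2), 6(2),
    // mirroring FIELD1 = APPLICATIONDATA+0(4) etc. in ABAP.
    public static String[] split(String applicationData) {
        // Pad to the full segment length so short records do not overflow.
        String padded = String.format("%-8s", applicationData);
        return new String[] {
            padded.substring(0, 4).trim(),  // FIELD1, length 4
            padded.substring(4, 6).trim(),  // FIELD2, length 2
            padded.substring(6, 8).trim()   // FIELD3, length 2
        };
    }

    public static void main(String[] args) {
        String[] fields = split("XYZ R Y"); // sample EDID4 data from above
        System.out.println(fields[0] + "/" + fields[1] + "/" + fields[2]); // XYZ/R/Y
    }
}
```
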
Hope this was helpful in explaining how to derive the data. -
IMPDP SQLFILE : multibyte characters in constraint_name leads to ORA-00972
Hi,
I'm actually dealing with constraint names made of multibyte characters (for example: constraint_name='VALIDA_CONFIRMAÇÃO_PREÇO13').
Of course this Bad Idea® is inherited (I'm against all the fancy stuff like éàù in filenames and/or directories on my filesystem....)
The scenario is as follows :
0 - I'm supposed to do a "remap_schema". Everything in the schema SCOTT should now be in a schema NEW_SCOTT.
1 - The scott schema is exported via datapump
2 - I do an impdp with SQLFILE in order to get all the DDL (table, packages, synonyms, etc...)
3 - I do some sed on the generated sqlfile to change every occurrence of SCOTT to NEW_SCOTT (this part is OK)
4 - Once the modified sqlfile is executed, I do an impdp with DATA_ONLY.
(The scenario was imagined from this thread : {message:id=10628419} )
I'm getting some ORA-00972: identifier is too long at step 4 when executing the sqlfile.
I see that some DDL for constraint creation in the file (generated at step #2) is written as follows:
ALTER TABLE "TW_PRI"."B_TRANSC" ADD CONSTRAINT "VALIDA_CONFIRMAÃÃO_PREÃO14" CHECK ...
Obviously, the original name of the constraint with cedilla and tilde gets translated to something else which is longer than 30 characters/bytes...
As the original name is from Brazil, I also tried to add an export LANG=pt_BR.UTF-8 in my script before running the impdp for the sqlfile. This didn't change anything. (The original $LANG is en_US.UTF-8.)
In order to create a testcase for this thread, I tried to reproduce on my sandbox database... but, there, I don't have the issue. :-(
The real system is a 4-node database on Exadata (11.2.0.3) with NLS_CHARACTERSET=AL32UTF8.
My sandbox database is a (nonRAC) 11.2.0.1 on RHEL4 also AL32UTF8.
The constraint_name is the same on both systems: I checked byte by byte using DUMP() on the constraint_name.
Feel free to shed any light and/or ask for clarification if needed.
Thanks in advance to those who'll take the time to read all this.
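As a side note, the byte arithmetic behind the ORA-00972 can be reproduced outside the database. This is only a sketch of the suspected double encoding (Latin-1 is an assumed intermediate charset here, not a claim about what datapump actually did); the 29/26 figures match the DUMP()/LENGTHB/LENGTHC output from the testcase below:

```java
import java.nio.charset.StandardCharsets;

public class IdentifierLength {
    public static int utf8Bytes(String s) {
        return s.getBytes(StandardCharsets.UTF_8).length;
    }

    // Simulate the suspected accident: UTF-8 bytes mis-read as Latin-1,
    // then encoded to UTF-8 again.
    public static String misDecode(String s) {
        return new String(s.getBytes(StandardCharsets.UTF_8), StandardCharsets.ISO_8859_1);
    }

    public static void main(String[] args) {
        String name = "VALIDA_CONFIRMA\u00C7\u00C3O_PRE\u00C7O13";
        System.out.println(utf8Bytes(name));            // 29 bytes, as DUMP() shows
        System.out.println(name.length());              // 26 characters
        // After the double encoding, the byte length jumps past the 30-byte
        // identifier limit, hence ORA-00972:
        System.out.println(utf8Bytes(misDecode(name))); // 35 bytes
    }
}
```
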
I decided to include my testcase from my sandbox database, even if it does NOT reproduce the issue +(maybe I'm missing something obvious...)+
I use the following files.
- createTable.sql:
$ cat createTable.sql
drop table test purge;
create table test
(id integer,
val varchar2(30));
alter table test add constraint VALIDA_CONFIRMAÇÃO_PREÇO13 check (id<=10000000000);
select constraint_name, lengthb(constraint_name) lb, lengthc(constraint_name) lc, dump(constraint_name) dmp
from user_constraints where table_name='TEST';
- expdpTest.sh:
$ cat expdpTest.sh
expdp scott/tiger directory=scottdir dumpfile=testNonAscii.dmp tables=test
- impdpTest.sh:
$ cat impdpTest.sh
impdp scott/tiger directory=scottdir dumpfile=testNonAscii.dmp sqlfile=scottdir:test.sqlfile.sql tables=test
This is the run:
[oracle@Nicosa-oel test_nonAsciiColName]$ sqlplus scott/tiger
SQL*Plus: Release 11.2.0.1.0 Production on Tue Feb 12 18:58:27 2013
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> @createTable
Table dropped.
Table created.
Table altered.
CONSTRAINT_NAME LB LC
DMP
VALIDA_CONFIRMAÇÃO_PREÇO13 29 26
Typ=1 Len=29: 86,65,76,73,68,65,95,67,79,78,70,73,82,77,65,195,135,195,131,79,95
,80,82,69,195,135,79,49,51
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
[oracle@Nicosa-oel test_nonAsciiColName]$ ./expdpTest.sh
Export: Release 11.2.0.1.0 - Production on Tue Feb 12 19:00:12 2013
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SCOTT"."SYS_EXPORT_TABLE_01": scott/******** directory=scottdir dumpfile=testNonAscii.dmp tables=test
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 0 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
. . exported "SCOTT"."TEST" 0 KB 0 rows
Master table "SCOTT"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
Dump file set for SCOTT.SYS_EXPORT_TABLE_01 is:
/home/oracle/scott_dir/testNonAscii.dmp
Job "SCOTT"."SYS_EXPORT_TABLE_01" successfully completed at 19:00:22
[oracle@Nicosa-oel test_nonAsciiColName]$ ./impdpTest.sh
Import: Release 11.2.0.1.0 - Production on Tue Feb 12 19:00:26 2013
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SCOTT"."SYS_SQL_FILE_TABLE_01" successfully loaded/unloaded
Starting "SCOTT"."SYS_SQL_FILE_TABLE_01": scott/******** directory=scottdir dumpfile=testNonAscii.dmp sqlfile=scottdir:test.sqlfile.sql tables=test
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Job "SCOTT"."SYS_SQL_FILE_TABLE_01" successfully completed at 19:00:32
[oracle@Nicosa-oel test_nonAsciiColName]$ cat scott_dir/test.sqlfile.sql
-- CONNECT SCOTT
ALTER SESSION SET EVENTS '10150 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10904 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '25475 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10407 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10851 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '22830 TRACE NAME CONTEXT FOREVER, LEVEL 192 ';
-- new object type path: TABLE_EXPORT/TABLE/TABLE
CREATE TABLE "SCOTT"."TEST"
( "ID" NUMBER(*,0),
"VAL" VARCHAR2(30 BYTE)
) SEGMENT CREATION DEFERRED
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 COMPRESS FOR OLTP LOGGING
TABLESPACE "MYTBSCOMP" ;
-- new object type path: TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
ALTER TABLE "SCOTT"."TEST" ADD CONSTRAINT "VALIDA_CONFIRMAÇÃO_PREÇO13" CHECK (id<=10000000000) ENABLE;
I was expecting to have the cedilla and tilde characters displayed incorrectly....
Edited by: Nicosa on Feb 12, 2013 7:13 PM
Srini Chavali wrote:
If I understand you correctly, you are unable to reproduce the issue in the test instance, while it occurs in the production instance. Is the "schema move" being done on the same database - i.e. you are "moving" from SCOTT to NEW_SCOTT on the same database (test to test, and prod to prod)? Do you have to physically move/copy the dmp file?
Hi Srini,
On the real system, the schema move will be to and from different machines (but same DBversion).
I'm not doing the real move for the moment, just trying to validate a way to do it, but I guess it's important to say that the dump being used for the moment comes from the same database (the long story being that due to some column using object datatype which caused error in the remap, I had to reload the dump with the "schema rename", drop the object column, and recreate a dump file without the object_datatype...).
So Yes, the file will have to move, but in the current test, it doesn't.
Srini Chavali wrote:
Obviously something is different in production than test - can you post the output of this command from both databases ?
SQL> select * from NLS_DATABASE_PARAMETERS;
Yes Srini, something is obviously different: I'm starting to think that the difference might be on the Linux/shell side rather than in impdp, as datapump is supposed to be NLS_LANG/charset-proof +(whereas traditional imp/exp was really sensitive on those points)+
The result on the Exadata where I have the issue:
PARAMETER VALUE
NLS_LANGUAGE AMERICAN
NLS_TERRITORY AMERICA
NLS_CURRENCY $
NLS_ISO_CURRENCY AMERICA
NLS_NUMERIC_CHARACTERS .,
NLS_CHARACTERSET AL32UTF8
NLS_CALENDAR GREGORIAN
NLS_DATE_FORMAT DD-MON-RR
NLS_DATE_LANGUAGE AMERICAN
NLS_SORT BINARY
NLS_TIME_FORMAT HH.MI.SSXFF AM
NLS_TIMESTAMP_FORMAT DD-MON-RR HH.MI.SSXFF AM
NLS_TIME_TZ_FORMAT HH.MI.SSXFF AM TZR
NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZR
NLS_DUAL_CURRENCY $
NLS_COMP BINARY
NLS_LENGTH_SEMANTICS BYTE
NLS_NCHAR_CONV_EXCP FALSE
NLS_NCHAR_CHARACTERSET AL16UTF16
NLS_RDBMS_VERSION 11.2.0.3.0
The result on my sandbox DB:
PARAMETER VALUE
NLS_LANGUAGE AMERICAN
NLS_TERRITORY AMERICA
NLS_CURRENCY $
NLS_ISO_CURRENCY AMERICA
NLS_NUMERIC_CHARACTERS .,
NLS_CHARACTERSET AL32UTF8
NLS_CALENDAR GREGORIAN
NLS_DATE_FORMAT DD-MON-RR
NLS_DATE_LANGUAGE AMERICAN
NLS_SORT BINARY
NLS_TIME_FORMAT HH.MI.SSXFF AM
NLS_TIMESTAMP_FORMAT DD-MON-RR HH.MI.SSXFF AM
NLS_TIME_TZ_FORMAT HH.MI.SSXFF AM TZR
NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZR
NLS_DUAL_CURRENCY $
NLS_COMP BINARY
NLS_LENGTH_SEMANTICS BYTE
NLS_NCHAR_CONV_EXCP FALSE
NLS_NCHAR_CHARACTERSET AL16UTF16
NLS_RDBMS_VERSION 11.2.0.1.0
------
Richard Harrison . wrote:
Hi,
Did you set NLS_LANG also when you did the import?
Yes, that is one of the differences between the Exadata and my sandbox.
My environment in the sandbox has NLS_LANG=AMERICAN_AMERICA.AL32UTF8, where the Exadata doesn't have the variable set.
I tried to add it, but it didn't change anything.
Richard Harrison . wrote:
Also not sure why you are doing the sed part? Do you have hard-coded schema references inside some of the plsql?
Yes, that is why I chose sed. The (ugly) code has:
- Procedures inside the same package that references one another with the schema prepended
- Triggers with PL/SQL codes referencing tables with schema prepended
- Dynamic SQL that "builds" queries with schema prepended
- Object Type that does some %ROWTYPE on tables with schema prepended (that will be solved by dropping the column based on those types as they obviously are not needed...)
- Data model with object whose names uses non-ascii characters
+(In France we tend to call this a "gas power plant" to express what a mess it is: pipes everywhere going who-knows-where...)+
The big picture is that this kind of "schema move & rename" should be as automatic as possible, as the project is to actually consolidate several existing databases on the Exadata :
One schema for each country, hence the rename of the schemas to include country-code.
I actually have a workaround yet : Rename the objects that have funky characters in their name before doing the export.
But I was curious to understand why the SQLFILE messed up the constraint_name on one system when it doesn't on another. -
How to handle Multibyte character in Java urgent requirement
Hi Friends,
I'm fetching the data from the database(MS SQL Server 2000) and writing into the file.
This is the snippet of the code i'm using to write into the file.
File outputFile = new File("C:\\Temp\\SAMPLE.txt");
BufferedWriter fileWriter = new BufferedWriter(new FileWriter(outputFile));
// Fetching the values from the database
while (rs.next()) {
    // colData contains the value from the database which will go into the file
    String colData = rs.getString(fieldNames);
    fileWriter.write(colData);
}
The problem is that when I'm writing a multibyte character, it is converted into double quotes,
i.e. M?�L INC. is written as M?"L INC.
I need to know how I can write the same value that I'm fetching from the database (irrespective of whether it contains single-byte or multibyte characters).
Thanks in advance.
Kind Regards,
Pallavi
Hi,
I changed the default encoding, windows-1252, to UTF-16 using the OutputStreamWriter class and it worked!
OutputStreamWriter out = new OutputStreamWriter(new BufferedOutputStream(new FileOutputStream("C:\\abc.txt")), "UTF-16");
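A minimal, self-contained sketch of that fix, with an explicit charset on both write and read (UTF-8 shown here, UTF-16 behaves the same; the sample value and temp file are made up):

```java
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.UncheckedIOException;
import java.io.Writer;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class ExplicitCharsetWrite {
    // Write the string with an explicit charset (instead of FileWriter's
    // platform default), then read it back with the same charset.
    public static String roundTrip(String s) {
        try {
            Path file = Files.createTempFile("sample", ".txt");
            try (Writer out = new OutputStreamWriter(Files.newOutputStream(file),
                                                     StandardCharsets.UTF_8)) {
                out.write(s);
            }
            String back = new String(Files.readAllBytes(file), StandardCharsets.UTF_8);
            Files.delete(file);
            return back;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        // A hypothetical value with a non-ASCII character survives the round trip:
        System.out.println(roundTrip("M\u00C9L INC."));
    }
}
```

With FileWriter, the bytes written depend on the platform default encoding, which is what mangles multibyte characters; pinning the charset on both sides removes that dependency.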
Thanks for the suggestions. :) -
Adobe AIR help; question regarding search criteria with multibyte characters
Hi,
I created Adobe AIR application with Robohelp 9 (using FM 10 files as source, and texts are written in Japanese and English),
and happened to find that the search function in the AIR application doesn't catch keywords correctly.
For example,
1. If you type "文字" and "スタイル" with single byte space in search window, the result appears for both "文字" and "スタイル".
2. If you type "文字" and "スタイル" with double byte space in search window, the result doesn't match for anything.
3. If you type "文字スタイル" (in one word) in search window, the result doesn't match for anything.
Same thing happens for the case "文字種" (literally, "文字"+"種", the meaning is almost the same).
But, if you type search words which is all in Katakana, the result seems to be fine.
Is there any limitation on multibyte character support? Or is this behaviour a feature?
If so, how can I make the AIR application "hit" the correct words?
Thank you very much for your kind help in advance!
On this one your best course of action is to contact Adobe Support. They will likely require your project, and there is one thing I would suggest you do first: create a new project with just a few topics to prove the problem exists there as well. If it does, it will be a simpler upload and you will know the problem is repeatable.
See www.grainge.org for RoboHelp and Authoring tips
@petergrainge -
Extracting all characters of a postcode before the space
I am trying to extract the first characters from a postcode into a separate field that can then be used as a parameter. If I search on postcodes starting with (for example) M1, this brings out not only M1 but also M11, M12, etc.
I have found a While construct formula example in my training book that extracts the surname from a field containing the first name and surname. I have recreated this and applied it to my postcode field, but it brings out the characters AFTER the space in the postcode and not the characters BEFORE.
Does anybody know how to reverse the following so that it extracts the characters BEFORE the space?
stringVar PCode := {ADDRESS.POSTCODE};
numberVar i := 1;
numberVar Strlen := length(PCode);
numberVar Result := -1;
while i <= Strlen and Result = -1 do
(
stringVar Position := Mid(PCode, i, 1);
if Position = " " then
Result := i;
i := i + 1;
);
stringVar PostSearch := mid(PCode, Result + 1, (Strlen - Result));
PostSearch
Hello Tracy,
I am not sure how to help you on this; however, I have found an interesting piece of code on one of the forums. See if that makes your life any better.
I am not sure how to help you on this however I have found an interesting code from one of the forums, see if that makes your life any better.
Here's the code to extract the zip. What I did is check for a number in the string; once I found one, I checked
whether the next 4 characters are numbers (as a zip code is 5 digits). If all 5 characters are numbers, I assign
it to a variable and exit the loop; otherwise I move on to the next character.
Create two formulas.
1. The 1st formula is "zipcode", which has the following code:
shared numbervar strlen := length({table.fieldname});
shared stringvar zipcode := "";
shared numbervar i := 0;
shared numbervar a := 0;
shared numbervar b := 0;
shared numbervar c := 0;
shared numbervar z := 0;
for i := 1 to strlen do
if mid({table.fieldname},i,1) in ["0","1","2","3","4","5","6","7","8","9"] then
(
a := i;
b := i;
c := i + 4;
z := 0;
zipcode := "";
for a := b to c do
( if mid({table.fieldname},a,1) in ["0","1","2","3","4","5","6","7","8","9"] then
(z := z + 1;
zipcode := zipcode & mid({table.fieldname},a,1);
if z = 5 then
i := i + strlen + 1;
zipcode)
else
zipcode
)
);
2. The 2nd formula is "zip", which has the following code:
shared stringvar zipcode;
Now keep both formulas in the report and suppress "zipcode".
You will get the zipcode
As I said, this is not mine; I took it from somewhere else. However, the way he did it makes more sense.
Let us know if this works.
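For the record, the original question in this thread (the characters BEFORE the space) needs only the left-hand part of the string. A sketch in Java, with made-up sample postcodes:

```java
public class OutwardCode {
    // Return everything before the first space (the whole string if no space).
    public static String beforeSpace(String postcode) {
        int space = postcode.indexOf(' ');
        return space < 0 ? postcode : postcode.substring(0, space);
    }

    public static void main(String[] args) {
        System.out.println(beforeSpace("M1 4XX"));  // M1
        System.out.println(beforeSpace("M11 2AB")); // M11
    }
}
```

In the Crystal formula earlier in this thread, the equivalent change is to end with mid(PCode, 1, Result - 1) instead of taking the part after the space.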
Regards
Jehanzeb -
JSPs, UTF-8 & multibyte characters
In our project we have a situation where we must output some multibyte characters
to a JSP page. The data is retrieved from an Oracle database using BEA ELink and
XML (don't ask why). The XML-data is UTF-8 encoded, and the data seems to be ok
down to the JSP level, because I can output it to a file and it's properly UTF-8
encoded.
But when I try to write the data to the final reply (using <%=dataObject.getData()%>)
the results definitely are not UTF-8 encoded. On the client browser they show
up as garbage, occupying more than twice the actual length of the data. The response
headers and META-tags are all set to UTF-8 encoding, and the browser is set to
use UTF-8.
The funny part is, that the string seems to be encoded twice or something similar
as is shown by the next example:
This is the correct UTF-8 byte sequence for the first two characters (they are
just generated data for debugging purposes):
C3 89 C3 A5
Which translates to Unicode characters 00C9 and 00E5.
But on the final page that is sent to the client this sequence has been changed
to:
C3 83 E2 80 B0 C3 83 C2 A5
Which just doesn't make sense since it shows up as five different garbage characters.
Does anyone have any ideas what is causing the problem and any suggestions? What
are those extra characters in the final encoding?
Pete.
It sounds like the Object.toString is coming back already encoded in UTF8,
and thus the JSP writer encodes that UTF8 using UTF8 again, which is what
you see. Try making the String value be:
> ... characters 00C9 and 00E5.
... instead of:
> C3 89 C3 A5
Then it will be encoded correctly.
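That "encoded twice" effect can be reproduced with a short stdlib sketch. windows-1252 is just an assumed single-byte intermediate charset here, but the resulting byte sequence matches the one quoted above exactly:

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class DoubleEncodingDemo {
    // Encode to UTF-8, mis-decode those bytes as windows-1252,
    // encode again, and return the resulting bytes as hex.
    public static String doubleEncodeHex(String s) {
        byte[] utf8 = s.getBytes(StandardCharsets.UTF_8);
        String misread = new String(utf8, Charset.forName("windows-1252"));
        StringBuilder hex = new StringBuilder();
        for (byte b : misread.getBytes(StandardCharsets.UTF_8)) {
            hex.append(String.format("%02X ", b));
        }
        return hex.toString().trim();
    }

    public static void main(String[] args) {
        // U+00C9 U+00E5 -> UTF-8 C3 89 C3 A5 -> double-encoded:
        System.out.println(doubleEncodeHex("\u00C9\u00E5")); // C3 83 E2 80 B0 C3 83 C2 A5
    }
}
```
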
Peace,
Cameron Purdy
Tangosol Inc.
<< Tangosol Server: How Weblogic applications are customized >>
<< Download now from http://www.tangosol.com/download.jsp >>
"Petteri Räisänen" <[email protected]> wrote in message
news:[email protected]...
-
How to print Arabic characters in Oracle BI Publisher report
Dear Experts,
Kindly suggest how to print Arabic characters in BI Publisher.
Regards,
Mohan
See link:
https://blogs.oracle.com/BIDeveloper/entry/non-english_characters_appears -
How to print Special Characters in Sap-Scripts
How to print Special Characters in Sap-Scripts
Thanks,
Ravi
Hi,
If you want to print special characters, you can enclose them in single quotes (inverted commas) and put your special characters between them:
write ' !@#$%^&*( ) '.
For the above write statement, the output is:
!@#$%^&*( )
How to Extract Data for a Maintenance View, Structure and Cluster Table
I want to develop 3 Reports
1) in First Report
it consists only two Fields.
Table name : V_001_B
Field Name1: BUKRS
Table name : V_001_B
Field Name2: BUTXT
V_001_B is a maintenance view.
For this one I don't find any DataSource.
How do I extract the data for this maintenance view?
2)
For the 2nd Report also it consists Two Fields
Table name : CSKSZ
Field Name1: KOSTL (cost center)
Table name : CSKSZ
Field Name2: KLTXT (Description)
CSKSZ is a structure.
For this one I don't find any DataSource.
How do I extract the data for this structure?
3)
For the 3rd Report
in this report all fields belong to the table BSEG.
BSEG is a cluster table.
For this one also I can't find any DataSource;
I find very few objects in the DataSources.
How do I extract the data for this one?
Please provide me step by step procedure.
Thanks
Priya
Hi Sachin,
I don't get your point; can you explain briefly?
I have two Fields for the 1st Report
BUKRS
BUTXT
In the 2nd Report
KOSTL
KLTXT
If I use the 0COSTCENTER_TEXT DataSource,
I will get the KOSTL field only.
What about KLTXT?
Thanks
Priya -
Open Hub: How-to doc "How to Extract data with Open Hub to a Logical File"
Hi all,
We are using open hub to download transaction files from infocubes to application server, and would like to have filename which is dynamic based period and year, i.e. period and year of the transaction data to be downloaded.
I understand we could use a logical file for this purpose. However, we are not sure how to have the period and year dynamically derived in the filename.
I have read in SDN a number of posted messages on a similar topic, and many have suggested a 'How-to' paper titled "How to Extract Data with Open Hub to a Logical Filename". However, I could not get the document from the link given.
Just wondering if anyone has the correct or latest link to the document; otherwise, I would appreciate it if you could share the document with everyone in SDN if you have a copy.
Many thanks and best regards,
Victoria
Hi,
After creating the Open Hub destination, press F1 in the 'Application server file name' text box. From the help window, click 'Maintain client independent file names and file paths'; you will be taken to the Implementation Guide screen.
1. Click 'Cross-client maintenance of file names' and create a logical file path via 'New Entries'.
2. After creating the logical file path, go to 'Logical file name definition'. Give your logical file name, physical file (your file name followed by month or year, whatever is applicable; press F1 for more info), data format (ASC), application area (BW), and logical path (choose from the F4 selection the path you created first).
3. Go to 'Assignment of physical path to logical path', give the syntax group; the physical path is the path you gave in the logical file name definition.
We have created a logical file name that identifies the file by system date, but your requirement seems to be the dynamic date of the transaction data; you may be able to achieve this by creating a variable. The F1 help will be of much use. All the above steps will help you create a dynamic logical file.
Hope this helps to some extent.
Regards -
How to extract Slide data in 3rd part application from clipboard
I need to be able to copy/paste or drag/drop from PowerPoint into another application (C# WPF). In my OnDrop method the DragEventArgs Data has these formats:
[0] "Preferred DropEffect" string
[1] "InShellDragLoop" string
[2] "PowerPoint 12.0 Internal Slides" string
[3] "ActiveClipBoard" string
[4] "PowerPoint 14.0 Slides Package" string
[5] "Embedded Object" string
[6] "Link Source" string
[7] "Object Descriptor" string
[8] "Link Source Descriptor" string
[9] "PNG" string
[10] "JFIF" string
[11] "GIF" string
[12] "Bitmap" string
[13] "System.Drawing.Bitmap" string
[14] "System.Windows.Media.Imaging.BitmapSource" string
[15] "EnhancedMetafile" string
[16] "System.Drawing.Imaging.Metafile" string
[17] "MetaFilePict" string
[18] "PowerPoint 12.0 Internal Theme" string
[19] "PowerPoint 12.0 Internal Color Scheme" string
The "PowerPoint 14.0 Slides Package" is a byte array... can this be converted into Slides?
If not how would I go about getting high-resolution images + slide text from a drag/drop?
[Originally posted here: http://answers.microsoft.com/en-us/office/forum/office_2013_release-powerpoint/how-to-extract-slide-data-in-3rd-part-application/a0b5ed64-eb77-49bb-bf44-e0732e23a5eb]
What I'd like to do:
Open PowerPoint
In PPT open a presentation
In PPT select a slide
Drag it to my 3rd party WPF application
In the 3rd party WPF application drop handler get the slide data (text, background image, etc...).
When I do this I get the DragEventArgs Data (the clipboard data) and it has the 20 supported formats I listed in the 1st post. From these formats #4 seemed like it could have some useful info.
WPF
<Window x:Class="PowerPointDropSlide.MainWindow"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
Title="MainWindow" Height="350" Width="525" AllowDrop="True" Drop="UIElement_OnDrop" DragOver="UIElement_OnDragOver">
<Grid HorizontalAlignment="Stretch" VerticalAlignment="Stretch" Background="LightBlue">
<TextBlock Text="Drop something here!"/>
</Grid>
</Window>
Handlers:
public void UIElement_OnDragOver(object sender, DragEventArgs e) { }
public void UIElement_OnDrop(object sender, DragEventArgs e)
{
    string[] supportedFormats = e.Data.GetFormats();
    object pptSlidesPackage = e.Data.GetData("PowerPoint 14.0 Slides Package");
}
How to extract text from a PDF file using php?
How to extract text from a PDF file using php?
thanks
fabio
> Do you know of any other way this can be done?
There are many ways, but this is out of scope for this forum. You can try this one: http://forum.planetpdf.com/