How do I serialize a very big object?
Please help me!
I misread that as sterilize for a moment. Buckets and buckets of Dettol! And that's the clean answer.
As Athena indicates, you serialize a very big object in exactly the same way that you serialize a very small one.
Are you encountering a problem, and if so what is it?
Dave.
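To make the point concrete, here is a minimal sketch: the code is identical whether the object is tiny or holds megabytes of data; buffering the streams is the only concession to size. Class and file names here are invented for illustration.

```java
import java.io.*;
import java.util.ArrayList;
import java.util.List;

// A "big" object: a serializable holder of many large arrays.
class BigPayload implements Serializable {
    private static final long serialVersionUID = 1L;
    List<double[]> chunks = new ArrayList<>();
    BigPayload(int n) {
        for (int i = 0; i < n; i++) chunks.add(new double[1024]);
    }
}

public class SerializeBig {
    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("big", ".ser");
        f.deleteOnExit();
        // Write: same API as for a small object; buffering keeps the
        // many small writes cheap.
        try (ObjectOutputStream out = new ObjectOutputStream(
                new BufferedOutputStream(new FileOutputStream(f)))) {
            out.writeObject(new BigPayload(100));
        }
        // Read it back and check the object graph survived intact.
        try (ObjectInputStream in = new ObjectInputStream(
                new BufferedInputStream(new FileInputStream(f)))) {
            BigPayload p = (BigPayload) in.readObject();
            System.out.println(p.chunks.size());
        }
    }
}
```

If something does go wrong with a large graph, it is usually an OutOfMemoryError or a NotSerializableException from a member, not the size itself.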
Similar Messages
-
How to read a very BIG XML-File
Hello everyone,
how can I read an XML file of e.g. 2 GB in ABAP (Release 4.6C)? The file should be read from the application server (not from the front end).
Problem: uploading the complete file into an internal table needs too much memory, but to produce a stream the complete file must be uploaded. The parser works event-triggered with parse_event and does not create a DOM; the parsing itself is no problem.
Possible solution: feed the stream with parts of the file. But how!?
Here is some code from the program:
data: l_xml_filename type localfile,
      l_event_sub    type i,
      l_boolean      type c,
      l_subrc        type i.
* Read the file and its size into the XML table:
l_xml_filename = 'I:\TEMP\GeboIsp-PSGK37.xml'.
perform read_file_in_xml_table using    p_xml_filename
                               changing g_xml_table
                                        g_xml_size.
* Build the XML document from the table: =========================
* Create the iXML factory:
go_ixml = cl_ixml=>create( ).
* Create the stream factory:
p_streamfactory = go_ixml->create_stream_factory( ).
* Create the XML document:
p_xml_document = go_ixml->create_document( ).
* Create the input stream:
p_istream = p_streamfactory->create_istream_itable(
                table = g_xml_table
                size  = g_xml_size ).
p_parser = go_ixml->create_parser( stream_factory = p_streamfactory
                                   istream        = p_istream
                                   document       = p_xml_document ).
l_event_sub = if_ixml_event=>co_event_element_pre +
              if_ixml_event=>co_event_element_post.
call method p_parser->set_event_subscription( l_event_sub ).
l_boolean = p_parser->set_dom_generating( ' ' ).
The program works fine but needs too much memory because of the big internal table.
I would be very happy if somebody could help me!
Thanks in advance!
Thomas
P.s.: German answers are welcome!
Message was edited by: Thomas13 Scheuermann
Hello myself,
nobody has answered my question, so now I answer it myself!
The wrong part is to read the file with "open dataset" and to create the input stream with
p_istream = p_streamfactory->create_istream_itable(
                table = g_xml_table
                size  = g_xml_size ).
Better is to create the input stream with
p_istream = p_streamfactory->create_istream_uri(
                public_id = ''
                system_id = '\\applserver\I$\TEMP\Datei.XML' ).
This way no memory is needed to hold the file.
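For readers who land here from a search: the same event-driven idea in Java (not ABAP) is a pull parser fed directly from a stream, so the document is never materialized in memory. This is only an illustrative analogue of the create_istream_uri approach; a FileReader over the real 2 GB file would slot in where the StringReader is.

```java
import javax.xml.stream.*;
import java.io.StringReader;

public class StreamParse {
    public static void main(String[] args) throws Exception {
        // Tiny stand-in document; a Reader over a huge file works the same.
        String xml = "<root><item>a</item><item>b</item></root>";
        XMLStreamReader r = XMLInputFactory.newInstance()
                .createXMLStreamReader(new StringReader(xml));
        int elements = 0;
        // Pull events one at a time; memory use stays constant
        // regardless of document size.
        while (r.hasNext()) {
            if (r.next() == XMLStreamConstants.START_ELEMENT) elements++;
        }
        System.out.println(elements);
    }
}
```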
Best regards,
Thomas
Message was edited by: Thomas13 Scheuermann -
How to serialize a binary tree object
Hi Friends,
This is the problem:
I have a binary tree program with a root (object) containing the elements of the tree.
I have a server which performs insertions and deletions on the root.
Once it has the operations right, it has to send this object to the client, and the client will perform searches over this binary tree.
How to do this? Any idea?
Can anyone get me the code? It would be great of you.
Reply to me at this ID: [email protected]
Have you looked at the TreeSet code?
This is a collection which stores data as a binary tree. It is serializable; have a look at the readObject and writeObject methods. -
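A minimal sketch of the binary-tree suggestion: if the node class implements Serializable, writing the root serializes the whole tree (left and right children included), and the client can search the deserialized copy. The names (Node, TreeWire) are invented for the example; a socket's input/output streams would replace the byte-array streams.

```java
import java.io.*;

// A serializable BST node; serializing the root walks the whole graph.
class Node implements Serializable {
    private static final long serialVersionUID = 1L;
    int value;
    Node left, right;
    Node(int value) { this.value = value; }
    void insert(int v) {
        if (v < value) {
            if (left == null) left = new Node(v); else left.insert(v);
        } else {
            if (right == null) right = new Node(v); else right.insert(v);
        }
    }
    boolean contains(int v) {
        if (v == value) return true;
        Node next = v < value ? left : right;
        return next != null && next.contains(v);
    }
}

public class TreeWire {
    public static void main(String[] args) throws Exception {
        Node root = new Node(5);
        root.insert(3);
        root.insert(8);
        // "Server" side: write the tree (a socket stream works the same way).
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(buf)) {
            oos.writeObject(root);
        }
        // "Client" side: read it back and search.
        Node copy = (Node) new ObjectInputStream(
                new ByteArrayInputStream(buf.toByteArray())).readObject();
        System.out.println(copy.contains(8) && !copy.contains(7));
    }
}
```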
How can I write a very big number?
I have an Oracle table with the column "col_name number(20)".
I use the following code for binding:
OCIBindByName(stmt, &number,
errhp, (text *) ":col_name", -1, (dvoid *) &number,
(sword) sizeof(number), SQLT_INT, (dvoid *) 0, (ub2 *) 0,
(ub2 *) 0, (ub4) 0, (ub4 *) 0, OCI_DEFAULT)
But the data written to the table are incomplete.
I tried SQLT_UIN but the results are the same.
Variable number is declared as unsigned long.
If your value is truncated, it means it cannot be held in a 32-bit integer value.
Your column is declared as NUMBER(20), so it can hold values that exceed the capacity of a 32-bit integer, and thus you need to use 64-bit integers.
OCI does not yet support (one day maybe, lol) binding 64-bit C integers. You need to use the OCINumber type for that.
Here is an extract of an answer I made in a previous post on the same subject:
Vincent Rogier wrote:
SQLT_INT and SQLT_UIN are limited to 32-bit integers.
To use 64-bit integers, the C types are long long and unsigned long long.
Using binary_double does not work because with really big values there is loss of data.
The only way I found to manipulate 64-bit integers properly is using OCINumber.
For example, to define an output placeholder to get data from a NUMBER(20) column, you have to:
* define using SQLT_VNU with size = sizeof(OCINumber)
* fetch
* then the buffer is an OCINumber
* use OCINumberToInt() with sizeof(long long) for the rsl_length parameter
* then you've got a correct value
You can also use OCINumberSign() before calling OCINumberToInt() to find out whether the value is signed, then pass an unsigned long long or signed long long to OCINumberToInt() -
Question about reading a very big file into a buffer.
Hi, everyone!
I want to randomly load several characters from
GB2312 charset to form a string.
I have two questions:
1. Where can I find the charset table file? I have used
Google for hours to search but failed to find the GB2312 charset
file.
2. I think the charset table file is very big, and I doubt
whether I can load it into a String or StringBuffer. Does anyone
have a solution? How do I load a very big file and randomly
select several characters from it?
Have I made myself understood?
Thanks in advance,
George
The following can give the correspondence between GB2312-encoded byte arrays and characters (in hexadecimal integer expression).
import java.nio.charset.*;
import java.io.*;
public class GBs {
    static String convert() throws UnsupportedEncodingException {
        StringBuffer buffer = new StringBuffer();
        String l_separator = System.getProperty("line.separator");
        Charset chset = Charset.forName("EUC_CN"); // GB2312 is an alias of this encoding
        CharsetEncoder encoder = chset.newEncoder();
        int[] indices = new int[Character.MAX_VALUE + 1];
        for (int j = 0; j < indices.length; j++) {
            indices[j] = 0;
        }
        for (int j = 0; j <= Character.MAX_VALUE; j++) {
            if (encoder.canEncode((char) j)) indices[j] = 1;
        }
        byte[] encoded;
        for (int j = 0; j < indices.length; j++) {
            if (indices[j] == 1) {
                encoded = Character.toString((char) j).getBytes("EUC_CN");
                for (int q = 0; q < encoded.length; q++) {
                    buffer.append(Byte.toString(encoded[q]));
                    buffer.append(" ");
                }
                buffer.append(": 0x").append(Integer.toHexString(j));
                buffer.append(l_separator);
            }
        }
        return buffer.toString();
    }
    // the following is for testing
    /*
    public static void main(String[] args) throws Exception {
        String str = GBs.convert();
        System.out.println(str);
    }
    */
} -
Monitoring serialization of big objects.
I'm building a communications architecture for a project that involves the exchange of objects between a client and a server (with the usual main + worker threads design). The exchanged messages are sort-of agents that deal with a particular aspect of the application protocol.
The problem is that sometimes we have to send big objects (or send them through a slow link), and I was wondering if there's a way to monitor how many bytes were written to the ObjectOutputStream, to generate a progress bar? (Right now we have one in indeterminate mode.)
Thanks in advance.
I could re-use Count[I/O]Stream from Jakarta Commons Net.
The serialization of the objets is almost trivial. Even with the complicated messages we never ran into any problems. In fact we have an object that holds a compressed image of another serialized object that gets decompressed whenever a message is sent to the wrapper. It works pretty well.
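A counting wrapper like the Count[I/O]Stream mentioned above is only a few lines if you would rather avoid the dependency. This sketch (the class name is invented here) tallies bytes as ObjectOutputStream writes through it; a progress bar would poll getCount() against an estimated total.

```java
import java.io.*;

// Wraps any OutputStream and counts the bytes passing through it.
class CountingOutputStream extends FilterOutputStream {
    private long count;
    CountingOutputStream(OutputStream out) { super(out); }
    @Override public void write(int b) throws IOException {
        out.write(b);
        count++;
    }
    @Override public void write(byte[] b, int off, int len) throws IOException {
        out.write(b, off, len);
        count += len;
    }
    long getCount() { return count; }
}

public class Progress {
    public static void main(String[] args) throws Exception {
        CountingOutputStream cos =
                new CountingOutputStream(new ByteArrayOutputStream());
        try (ObjectOutputStream oos = new ObjectOutputStream(cos)) {
            oos.writeObject(new int[1000]); // ~4 KB of int data
        }
        // The count reflects everything serialization pushed through.
        System.out.println(cos.getCount() > 4000);
    }
}
```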
I guess I'll just keep the current "Doin' something really slow, go get a coffee or something" dialog box :) -
Serializing a big object (100K-2M) into a MySQL database
Hi all!
I have a big object that needs to be persisted into a MySQL database in my application.
I have tried two methods to meet this requirement:
1 ---- the object implements the Serializable interface, but sometimes there is a serialVersionUID-mismatch exception.
2 ---- use XMLEncoder to save my object; the XMLEncoder generates object XML text of about 200K - 3M,
but another problem occurs:
when I call the flush method of XMLEncoder, my application fully occupies the CPU, memory usage increases to 90M, and sometimes an "Out of memory" error occurs.
Why?
Can someone help me?
Thanks in advance
1. You are modifying the signature of one or more of your classes, which alters the (system-generated) serialVersionUID, and then recompiling. You can declare this yourself to get some control. See the serialization spec for details.
2. Your object graph is very large, which might be a problem using native or xml formats. Again, the specification details ways in which you can control the serialized form of your objects.
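A sketch of point 1, declaring the UID yourself (class names invented for the example): pinning serialVersionUID keeps previously stored blobs readable after compatible class changes; without it the JVM derives one from the class signature, so recompiling after a change breaks deserialization of old data.

```java
import java.io.*;

class Settings implements Serializable {
    // Fixed by hand: the UID no longer changes when the class
    // is modified in a serialization-compatible way.
    private static final long serialVersionUID = 42L;
    String name = "default";
}

public class VersionDemo {
    public static void main(String[] args) throws Exception {
        // Round-trip through bytes, as a database BLOB column would.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(buf)) {
            oos.writeObject(new Settings());
        }
        Settings s = (Settings) new ObjectInputStream(
                new ByteArrayInputStream(buf.toByteArray())).readObject();
        System.out.println(s.name);
    }
}
```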
On the information you've provided there's not much more that can be said. -
I have an iPad 1. I filled out a PDF questionnaire in Adobe Reader. When I send it to Dropbox, iBooks, or Mercury Browser, the form shows up without the answers. My objective is to email the completed form. Please advise as to how to do this. Thanks.
Big Jake
If the PDF is a form-fillable PDF, which I assume it is, email it to yourself or send it to Dropbox and open it on your computer. You should see the fields filled in there. It works for me if I do it that way.
Adobe Reader supports it, but the other apps don't. I assume that if you email the PDF to a computer user it would be readable in Adobe Reader. -
How does table SMW3_BDOC become very big?
Hi,
The table SMW3_BDOC, which stores BDocs in my system, has become very big, with several million records. Some BDocs in this table were sent several months ago. I find it very strange; why were those BDocs not processed?
If I want to clean this table, will inconsistencies occur in the system? And how can I clean this table of those very old BDocs?
Thanks a lot for your help!
Hi Long,
I faced the same issue recently on our production system; it created a huge performance problem and completely blocked the system with timeout errors.
I was able to clean it up by running the report SMO8_FLOW_REORG in SE38.
If you are very sure about cleaning up, first delete all the unnecessary BDocs and then run this report.
At the same time, check whether any CSA* queue is stuck in the CRM inbound queue SMQ2. If yes, select it, manually unlock it, activate, and then refresh. Also check for any other unnecessary stuck queues.
Hope this could help you.
regards,
kalyan -
TS3293 I do not know how the icons on my iPad got very big. How to restore to normal
I do not know what made the screen icons on my iPad very big, so it made it difficult to navigate.
How to restore to normal?
Thanks
Double-tap the screen with three fingers.
-
Hi, my iPhone fell down and there is a very big dent on the left side. The screen split on both sides. Can I send it to Apple and they repair/change it for something like 160€, or how does it work?
Service Answer Center - iPhone - http://support.apple.com/kb/index?page=servicefaq&geo=US&product=iphone <-- enter the correct country in the drop-down menu once on the page.
-
How do I open a VERY big file?
I hope someone can help.
I did some testing using a LeCroy LT342 in segment mode. Using the
LabVIEW driver I downloaded the data over GPIB and saved it to a
spreadsheet file. Unfortunately it created very big files (ranging from
200 MB to 600 MB). I now need to process them but LabVIEW doesn't like
them. I would be very happy to split the files into an individual file
for each row (I can do this quite easily) but LabVIEW just sits there
when I try to open the file.
I don't know enough about computers and memory (my spec is a 1.8 GHz
Pentium 4, 384 MB RAM) to figure out whether, if I just leave it for long
enough, it will do the job or not.
Has anyone any experience or help they could offer?
Thanks,
Phil
When you open (and read) a file you usually move it from your hard disk (permanent storage) to RAM. This allows you to manipulate it at high speed using fast RAM memory. If you don't have enough RAM to read the whole file, you will be forced to use virtual memory (swap space on the HD used as "virtual" RAM), which is very slow. Since you only have 384 MB of RAM and want to process huge files (200 MB-600 MB), you could easily and inexpensively upgrade to 1 GB of RAM and see large speed increases. A better option is to load the file in chunks, looking at some number of lines at a time, processing that amount of data, and repeating until the file is complete; this is more programming but will let you use much less RAM at any instant.
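The chunking advice can be sketched in code. The thread is about LabVIEW, so treat this Java version purely as an illustration of the pattern: a fixed-size buffer is reused for each read, so memory stays flat no matter how large the file is.

```java
import java.io.*;
import java.nio.file.*;

public class ChunkedRead {
    public static void main(String[] args) throws Exception {
        // Stand-in for a huge capture file (1 MB of zeros here).
        Path p = Files.createTempFile("big", ".dat");
        Files.write(p, new byte[1_000_000]);
        long total = 0;
        byte[] chunk = new byte[64 * 1024]; // 64 KB at a time
        try (InputStream in = Files.newInputStream(p)) {
            int n;
            while ((n = in.read(chunk)) != -1) {
                total += n; // real code would process the chunk here
            }
        }
        Files.delete(p);
        System.out.println(total);
    }
}
```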
Paul
Paul Falkenstein
Coleman Technologies Inc.
CLA, CPI, AIA-Vision
Labview 4.0- 2013, RT, Vision, FPGA -
I have a desktop K7-ll74PC. The icons have gotten very big. How to reset?
My icons have gotten very big and I have called support, but to no avail. They are just reading from a manual and I am beside myself to find the right solution. I have reset the computer many times per the help desk. I am ready to return the computer and monitor.
banditsmask wrote: My icons have gotten very big and I have called support but to no avail. They are just reading from a manual and I am beside myself to find the right solution. I have reset the computer many times per the help desk. I am ready to return the computer and moniter.
Hello banditsmask, you might try resetting the screen resolution to a higher setting.
Right-click on the desktop, select Screen Resolution, and slide the slider to a resolution of your choosing.
If the resolution is set to 800x600, the icons will be very large. Setting the resolution higher will make everything on the screen look smaller.
It appears the model number you listed is incorrect, since the HP support site could not locate a system with the model number you listed.
Please click the White Kudos star on the left, to say thanks.
Please mark Accept As Solution if it solves your problem. -
Serialization of Vector Graphics object
I'm writing vector-based graphics software; all objects are stored in a Vector.
Originally I wanted to use XML for saving, but one kind of my objects contains a BufferedImage which cannot be represented in XML, so I decided to save via ObjectOutputStream.
I know that Vector implements Serializable; plus, I have implemented Serializable in my custom objects too.
Unfortunately, it fails with the following error:
java.io.WriteAbortedException: Writing aborted by exception; java.io.NotSerializableException: java.awt.BasicStroke
I guess it is because my objects contain some attributes that are not Serializable, such as BasicStroke, Paint, and some others.
Does anyone know how to solve it?
Thanks very much ~~~
Here is the well-functioning code I implemented to save a BasicStroke, where style is the attribute name for my BasicStroke instance.
Declare your BasicStroke attribute as transient to avoid the exception in the defaultWriteObject() operation, as explained in previous messages.
private void writeObject(ObjectOutputStream s) throws IOException {
    s.defaultWriteObject();
    float width = this.style.getLineWidth();
    int cap = this.style.getEndCap();
    int join = this.style.getLineJoin();
    float miterlimit = this.style.getMiterLimit();
    float[] dash = this.style.getDashArray();
    float dash_phase = this.style.getDashPhase();
    s.writeFloat(width);
    s.writeInt(cap);
    s.writeInt(join);
    s.writeFloat(miterlimit);
    s.writeObject(dash);
    s.writeFloat(dash_phase);
}
private void readObject(ObjectInputStream s) throws IOException {
    try {
        s.defaultReadObject();
        float width = s.readFloat();
        int cap = s.readInt();
        int join = s.readInt();
        float miterlimit = s.readFloat();
        float[] dash = (float[]) s.readObject();
        float dash_phase = s.readFloat();
        this.style = new BasicStroke(width, cap, join, miterlimit, dash, dash_phase);
    } catch (ClassNotFoundException ioe) {
        System.err.println(ioe.getMessage());
    }
} -
How to know which master data objects need to be activated in R/3
SALES OVERVIEW CUBE -0SD_C03
How do I know which master data objects need to be activated from the delivery version to the active version in R/3 for a particular standard cube like 0SD_C03?
It's very urgent; please advise.
R/3 in RSA5
Sales Master Data
0ACCNT_ASGN_TEXT Account assignment group for this customer
0ACCNT_GRP_TEXT Customer account group
0BILBLK_DL_TEXT Locked
0BILBLK_ITM_TEXT Billing block for item
0BILL_BLOCK_TEXT Billing block in SD document
0BILL_CAT_TEXT Billing Category
0BILL_RELEV_TEXT Relevant for Billing
0BILL_RULE_TEXT Billing rule
0BILL_TYPE_TEXT Billing Type
0CONSUMER_ATTR Consumer
0CONSUMER_LKLS_HIER Consumer
0CONSUMER_TEXT Consumer
0CUST_CLASS_TEXT Customer Classification
0CUST_GROUP_TEXT Customer Group
0CUST_GRP1_TEXT Customer Group 1
0CUST_GRP2_TEXT Customer Group 2
0CUST_GRP3_TEXT Customer Group 3
0CUST_GRP4_TEXT Customer Group 4
0CUST_GRP5_TEXT Customer Group 5
0DEALTYPE_TEXT Sales Deal Type
0DEL_BLOCK_TEXT Delivery block (document header)
0DEL_TYPE_TEXT Delivery Type
0DISTR_CHAN_TEXT Distribution Channel
0DIVISION_TEXT Division
0DLV_BLOCK_TEXT Schedule line blocked for delivery
0DOC_CATEG_TEXT SD Document Category
0DOC_TYPE_TEXT Sales Document Type
0INCOTERMS_TEXT Incoterms (Part 1)
0INDUSTRY_TEXT Industry keys
0IND_CODE_3_TEXT Industry code 3
0IND_CODE_4_TEXT Industry code 4
0IND_CODE_5_TEXT Industry code 5
0IND_CODE_TEXT Industry code
0ITEM_CATEG_TEXT Sales document item category
0ITM_TYPE_TEXT FS item type
0KHERK_TEXT Condition Origin
0MATL_GRP_1_TEXT Material Group1
0MATL_GRP_2_TEXT Material Group 2
0MATL_GRP_3_TEXT Material Group 3
0MATL_GRP_4_TEXT Material Group 4
0MATL_GRP_5_TEXT Material Group 5
0MATL_TYPE_TEXT Material Type
0MAT_STGRP_TEXT Material statistics group
0NIELSEN_ID_TEXT Nielsen ID
0ORD_REASON_TEXT Order reason (reason for the business transaction)
0PICK_INDC_TEXT Indicator for picking control
0PRODCAT_TEXT Product Catalog Number
0PROD_HIER_TEXT Product Hierarchy
0PROMOTION_ATTR Promotion
0PROMOTION_TEXT Promotion
0PROMOTYPE_TEXT Promotion Type
0PROV_GROUP_TEXT Commission Group
0REASON_REJ_TEXT Reason for rejection of quotations and sales orders
0REBATE_GRP_TEXT Volume rebate group
0RECIPCNTRY_TEXT Destination country
0ROUTE_TEXT Route
0SALESDEAL_ATTR Sales deal
0SALESDEAL_TEXT Sales deal
0SALESORG_ATTR Sales organization
0SALESORG_TEXT Sales Organization
0SALES_DIST_TEXT Sales district
0SALES_GRP_TEXT Sales Group
0SALES_OFF_TEXT Sales Office
0SCHD_CATEG_TEXT Schedule line category
0SHIP_POINT_TEXT Shipping point/receiving point
In BW
Base Unit of Measure 0BASE_UOM
Bill-to party 0BILLTOPRTY
Calendar Day 0CALDAY
Calendar Year/Month 0CALMONTH
Calendar Year/Week 0CALWEEK
Change Run ID 0CHNGID
Company code 0COMP_CODE
Cost in statistics currency 0COST_VAL_S
Credit/debit posting (C/D) 0DEB_CRED
Distribution Channel 0DISTR_CHAN
Division 0DIVISION
Number of documents 0DOCUMENTS
Sales Document Category 0DOC_CATEG
Document category /Quotation/Order/Delivery/Invoice 0DOC_CLASS
Number of Document Items 0DOC_ITEMS
Fiscal year / period
Fiscal year variant 0FISCVARNT
Gross weight in kilograms 0GR_WT_KG
Number of Employees 0HDCNT_LAST
Material 0MATERIAL
Net value in statistics currency 0NET_VAL_S
Net weight in kilograms 0NT_WT_KG
Open orders quantity in base unit of measure 0OPORDQTYBM
Net value of open orders in statistics currency 0OPORDVALSC
Payer 0PAYER
Plant 0PLANT
Quantity in base units of measure 0QUANT_B
Record type 0RECORDTP
Request ID 0REQUID
Sales Employee 0SALESEMPLY
Sales Organization 0SALESORG
Sales group 0SALES_GRP
Sales Office 0SALES_OFF
Shipping point 0SHIP_POINT
Ship-To Party 0SHIP_TO
Sold-to party 0SOLD_TO
Statistics Currency 0STAT_CURR
In R/3 RSA5 we have all the master data DataSources mentioned above, and in BW as well. How do I find the related master data InfoSource for the R/3 master data DataSources?
Thanks in advance,
Bhima.
Message was edited by: Bhima Chandra Sekhar Guntla
Hi,
<i>How to know which master data objects need to activated from delivery version to active version in R/3 for a particular standard cube like 0SD_C03.</i>
I think you are looking for master data sources (texts, attributes, hierarchies). Am I right?
If so: this cube has almost all the SD master data characteristics, so you can activate all the SD master data DataSources in R/3 (SD-IO).
Anyway, your requirement does not stop with this cube; you will activate other cubes in SD as well. So if you try to activate only the needed master data DataSources when activating each cube, the exercise becomes pointless. There is no harm in activating all the master data available under that application, even though you want to activate only one cube.
With rgds,
Anil Kumar Sharma .P