Doing XOR with many byte values - am I stupid?
Hi,
I am having a problem with the Java XOR operator.
Something, call it a module, delivers me an array of bytes. It contains various byte values, and the last field is the checksum. The checksum is computed as:
(((1st Databyte XOR 2nd Databyte) XOR 3rd Databyte) XOR 4th Databyte) XOR ....
and so on. So I wrote the piece of code:
byte checksum = ((Integer) dataVector.get(0)).byteValue();
for (int i = 1; i < dataVector.size() - 1; i++) {
    checksum = (byte) (checksum ^ ((Integer) dataVector.get(i)).byteValue());
}
if (checksum == ((Integer) dataVector.get(dataVector.size() - 1)).byteValue())
    System.out.println("Yeah! Got it!");
OK, it's a Vector, I see, but I am working with byte values anyway, so that doesn't matter.
So please say there is a mistake in the code.
I would implement it a little differently, but your approach is correct too. Examine this small program:

public class Checksum {
    // my way
    static boolean check(byte[] b) {
        byte checksum = 0;
        for (int i = 0; i < b.length; i++) {
            checksum ^= b[i];
        }
        return checksum == 0;
    }

    // your way
    static boolean check2(byte[] b) {
        byte checksum = b[0];
        for (int i = 1; i < b.length - 1; i++) {
            checksum ^= b[i];
        }
        return checksum == b[b.length - 1];
    }

    public static void main(String[] args) {
        // 98 is all the other numbers XOR'ed together:
        byte[] array = {34, 124, -45, 76, 9, 71, -34, -56, -5, 98};
        System.out.println("Check1: " + check(array));
        System.out.println("Check2: " + check2(array));
        // introduce a small mistake in the first element:
        byte[] wrongArray = {24, 124, -45, 76, 9, 71, -34, -56, -5, 98};
        System.out.println("Check1: " + check(wrongArray));
        System.out.println("Check2: " + check2(wrongArray));
    }
}

The output is what you'd expect:
Check1: true
Check2: true
Check1: false
Check2: false
Either your checksum is not calculated the way you think it is, or there's some other problem. Why do you have a Vector of Integers when you should be working with bytes?
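Applied back to the Vector-of-Integer setup from the original question, the same check might look like this (a sketch; the class and method names are my own, the data layout with the checksum in the last element is taken from the question):

```java
import java.util.Vector;

public class VectorChecksum {
    // XOR all data bytes (everything except the last element) and
    // compare the result with the trailing checksum byte.
    static boolean verify(Vector<Integer> dataVector) {
        byte checksum = 0;
        for (int i = 0; i < dataVector.size() - 1; i++) {
            checksum ^= dataVector.get(i).byteValue();
        }
        return checksum == dataVector.get(dataVector.size() - 1).byteValue();
    }

    public static void main(String[] args) {
        Vector<Integer> v = new Vector<>();
        for (int b : new int[] {34, 124, -45, 76, 9, 71, -34, -56, -5, 98}) {
            v.add(b);
        }
        System.out.println(verify(v)); // true: 98 is the XOR of the other bytes
    }
}
```

If this prints false for your real data, the module's checksum is probably not a plain XOR of the data bytes (some devices XOR in a start value or include a length byte), which would explain the mismatch rather than the Java code.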
Similar Messages
-
Hi,
I have a table with many LONG fields (28). So far, everything works fine.
However, if I add another LONG field (29 LONG fields in total), I cannot insert a dataset anymore.
Is there a MaxDB parameter, or anything else I can change, to make inserts possible again?
Thanks in advance
Michael
appendix:
- Create and Insert command and error message
- MaxDB version and its parameters
Create and Insert command and error message
CREATE TABLE "DBA"."AZ_Z_TEST02" (
"ZTB_ID" Integer NOT NULL,
"ZTB_NAMEOFREPORT" Char (400) ASCII DEFAULT '',
"ZTB_LONG_COMMENT" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_00" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_01" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_02" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_03" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_04" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_05" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_06" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_07" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_08" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_09" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_10" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_11" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_12" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_13" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_14" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_15" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_16" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_17" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_18" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_19" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_20" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_21" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_22" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_23" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_24" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_25" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_26" LONG ASCII DEFAULT '',
PRIMARY KEY ("ZTB_ID"))
The insert command
INSERT INTO AZ_Z_TEST02 SET ztb_id = 87
works fine. If I add the LONG field
"ZTB_LONG_TEXTBLOCK_27" LONG ASCII DEFAULT '',
the following error occurs:
Auto Commit: On, SQL Mode: Internal, Isolation Level: Committed
General error;-7032 POS(1) SQL statement not allowed for column of data type LONG
INSERT INTO AZ_Z_TEST02 SET ztb_id = 88
MaxDB version and its parameters
All db params given by
dbmcli -d myDB -u dbm,dbm param_directgetall > maxdb_params.txt
are
KERNELVERSION KERNEL 7.5.0 BUILD 026-123-094-430
INSTANCE_TYPE OLTP
MCOD NO
RESTART_SHUTDOWN MANUAL
SERVERDBFOR_SAP YES
_UNICODE NO
DEFAULT_CODE ASCII
DATE_TIME_FORMAT INTERNAL
CONTROLUSERID DBM
CONTROLPASSWORD
MAXLOGVOLUMES 10
MAXDATAVOLUMES 11
LOG_VOLUME_NAME_001 LOG_001
LOG_VOLUME_TYPE_001 F
LOG_VOLUME_SIZE_001 64000
DATA_VOLUME_NAME_0001 DAT_0001
DATA_VOLUME_TYPE_0001 F
DATA_VOLUME_SIZE_0001 64000
DATA_VOLUME_MODE_0001 NORMAL
DATA_VOLUME_GROUPS 1
LOG_BACKUP_TO_PIPE NO
MAXBACKUPDEVS 2
BACKUP_BLOCK_CNT 8
LOG_MIRRORED NO
MAXVOLUMES 22
MULTIO_BLOCK_CNT 4
DELAYLOGWRITER 0
LOG_IO_QUEUE 50
RESTARTTIME 600
MAXCPU 1
MAXUSERTASKS 50
TRANSRGNS 8
TABRGNS 8
OMSREGIONS 0
OMSRGNS 25
OMS_HEAP_LIMIT 0
OMS_HEAP_COUNT 1
OMS_HEAP_BLOCKSIZE 10000
OMS_HEAP_THRESHOLD 100
OMS_VERS_THRESHOLD 2097152
HEAP_CHECK_LEVEL 0
ROWRGNS 8
MINSERVER_DESC 16
MAXSERVERTASKS 20
_MAXTRANS 288
MAXLOCKS 2880
LOCKSUPPLY_BLOCK 100
DEADLOCK_DETECTION 4
SESSION_TIMEOUT 900
OMS_STREAM_TIMEOUT 30
REQUEST_TIMEOUT 5000
USEASYNC_IO YES
IOPROCSPER_DEV 1
IOPROCSFOR_PRIO 1
USEIOPROCS_ONLY NO
IOPROCSSWITCH 2
LRU_FOR_SCAN NO
PAGESIZE 8192
PACKETSIZE 36864
MINREPLYSIZE 4096
MBLOCKDATA_SIZE 32768
MBLOCKQUAL_SIZE 16384
MBLOCKSTACK_SIZE 16384
MBLOCKSTRAT_SIZE 8192
WORKSTACKSIZE 16384
WORKDATASIZE 8192
CATCACHE_MINSIZE 262144
CAT_CACHE_SUPPLY 1632
INIT_ALLOCATORSIZE 229376
ALLOW_MULTIPLE_SERVERTASK_UKTS NO
TASKCLUSTER01 tw;al;ut;2000sv,100bup;10ev,10gc;
TASKCLUSTER02 ti,100dw;30000us;
TASKCLUSTER03 compress
MPRGN_QUEUE YES
MPRGN_DIRTY_READ NO
MPRGN_BUSY_WAIT NO
MPDISP_LOOPS 1
MPDISP_PRIO NO
XP_MP_RGN_LOOP 0
MP_RGN_LOOP 0
MPRGN_PRIO NO
MAXRGN_REQUEST 300
PRIOBASE_U2U 100
PRIOBASE_IOC 80
PRIOBASE_RAV 80
PRIOBASE_REX 40
PRIOBASE_COM 10
PRIOFACTOR 80
DELAYCOMMIT NO
SVP1_CONV_FLUSH NO
MAXGARBAGECOLL 0
MAXTASKSTACK 1024
MAX_SERVERTASK_STACK 100
MAX_SPECIALTASK_STACK 100
DWIO_AREA_SIZE 50
DWIO_AREA_FLUSH 50
FBM_VOLUME_COMPRESSION 50
FBM_VOLUME_BALANCE 10
FBMLOW_IO_RATE 10
CACHE_SIZE 10000
DWLRU_TAIL_FLUSH 25
XP_DATA_CACHE_RGNS 0
DATACACHE_RGNS 8
XP_CONVERTER_REGIONS 0
CONVERTER_REGIONS 8
XP_MAXPAGER 0
MAXPAGER 11
SEQUENCE_CACHE 1
IDXFILELIST_SIZE 2048
SERVERDESC_CACHE 73
SERVERCMD_CACHE 21
VOLUMENO_BIT_COUNT 8
OPTIM_MAX_MERGE 500
OPTIM_INV_ONLY YES
OPTIM_CACHE NO
OPTIM_JOIN_FETCH 0
JOIN_SEARCH_LEVEL 0
JOIN_MAXTAB_LEVEL4 16
JOIN_MAXTAB_LEVEL9 5
READAHEADBLOBS 25
RUNDIRECTORY E:\_mp\u_v_dbs\EVERW_C5
_KERNELDIAGFILE knldiag
KERNELDIAGSIZE 800
_EVENTFILE knldiag.evt
_EVENTSIZE 0
_MAXEVENTTASKS 1
_MAXEVENTS 100
_KERNELTRACEFILE knltrace
TRACE_PAGES_TI 2
TRACE_PAGES_GC 0
TRACE_PAGES_LW 5
TRACE_PAGES_PG 3
TRACE_PAGES_US 10
TRACE_PAGES_UT 5
TRACE_PAGES_SV 5
TRACE_PAGES_EV 2
TRACE_PAGES_BUP 0
KERNELTRACESIZE 648
EXTERNAL_DUMP_REQUEST NO
AKDUMP_ALLOWED YES
_KERNELDUMPFILE knldump
_RTEDUMPFILE rtedump
UTILITYPROTFILE dbm.utl
UTILITY_PROTSIZE 100
BACKUPHISTFILE dbm.knl
BACKUPMED_DEF dbm.mdf
MAXMESSAGE_FILES 0
EVENTALIVE_CYCLE 0
_SHAREDDYNDATA 10280
_SHAREDDYNPOOL 3607
USE_MEM_ENHANCE NO
MEM_ENHANCE_LIMIT 0
__PARAM_CHANGED___ 0
__PARAM_VERIFIED__ 2008-05-13 13:47:17
DIAG_HISTORY_NUM 2
DIAG_HISTORY_PATH E:\_mp\u_v_dbs\EVERW_C5\DIAGHISTORY
DIAGSEM 1
SHOW_MAX_STACK_USE NO
LOG_SEGMENT_SIZE 21333
SUPPRESS_CORE YES
FORMATTING_MODE PARALLEL
FORMAT_DATAVOLUME YES
HIRES_TIMER_TYPE CPU
LOAD_BALANCING_CHK 0
LOAD_BALANCING_DIF 10
LOAD_BALANCING_EQ 5
HS_STORAGE_DLL libhsscopy
HS_SYNC_INTERVAL 50
USE_OPEN_DIRECT NO
SYMBOL_DEMANGLING NO
EXPAND_COM_TRACE NO
OPTIMIZE_OPERATOR_JOIN_COSTFUNC YES
OPTIMIZE_JOIN_PARALLEL_SERVERS 0
OPTIMIZE_JOIN_OPERATOR_SORT YES
OPTIMIZE_JOIN_OUTER YES
JOIN_OPERATOR_IMPLEMENTATION IMPROVED
JOIN_TABLEBUFFER 128
OPTIMIZE_FETCH_REVERSE YES
SET_VOLUME_LOCK YES
SHAREDSQL NO
SHAREDSQL_EXPECTEDSTATEMENTCOUNT 1500
SHAREDSQL_COMMANDCACHESIZE 32768
MEMORY_ALLOCATION_LIMIT 0
USE_SYSTEM_PAGE_CACHE YES
USE_COROUTINES YES
MIN_RETENTION_TIME 60
MAX_RETENTION_TIME 480
MAX_SINGLE_HASHTABLE_SIZE 512
MAX_HASHTABLE_MEMORY 5120
HASHED_RESULTSET NO
HASHED_RESULTSET_CACHESIZE 262144
AUTO_RECREATE_BAD_INDEXES NO
LOCAL_REDO_LOG_BUFFER_SIZE 0
FORBID_LOAD_BALANCING NO
Lars Breddemann wrote:
> Hi Michael,
>
> this really looks like one of those "Find-the-5-errors-in-the-picture" riddles to me.
> Really.
>
> Ok, first to your question: this seems to be a bug - I could reproduce it with my 7.5. Build 48.
> Anyhow, when I use
>
> insert into "AZ_Z_TEST02" values (87,'','','','','','','','','','','','','','','',''
> ,'','','','','','','','','','','','','','','','')
>
> it works fine.
That solves my problem, thanks a lot. I can hardly believe that this is all it takes to work around the bug; that may be why I never gave it a try.
>
> Since explicitly specifying all values for an insert is a good idea anyhow (you can see directly what values the new tuple will have), you may want to change your code to this.
>
> Now to the other errors:
> - 28 Long values per row?
> What the heck is wrong with the data design here?
> Honestly, you can save data up to 2 GB in a BLOB/CLOB.
> Currently, your data design allows 56 GB per row.
> Moreover 26 of those columns seems to belong together originally - why do you split them up at all?
>
> - The "ZTB_NAMEOFREPORT" looks like something the users see -
> still there is no unique constraint preventing that you get 10000 of reports with the same name...
You are right, this table looks a bit strange. The story behind it: each crystal report in the application has a few textblocks which are the same for all of the (e.g.) persons the (e.g.) letter is created for. In principle, the textblocks could be added directly to the crystal report. However, as is often the case, these textblocks may change once in a while. So I put the texts of the textblocks into this "strange" db table (one row per report, one field per textblock; the name of the report is given by "ztb_nameofreport"), and the application offers a menu through which these textblocks can be changed. Of course, the fields in the table could be of type CHAR, but LONG has the advantage that I do not have to think about the length of the field, since sometimes the texts are short and sometimes they are really long.
(These texts would blow up the sql select command of the crystal report very much if they were integrated into that select command. So it is done another way: the texts are read before the crystal report is loaded, then they are "given" to the crystal report via its parameters, and finally the crystal report is loaded.)
>
> - MaxDB 7.5 Build 26?? Where have you been the last few years?
> Really - download the 7.6.03 Version [here|https://www.sdn.sap.com/irj/sdn/maxdb-downloads] from SDN and upgrade.
> With 7.6. I was not able to reproduce your issue at all.
The customer still has Win98 clients. MaxDB ODBC driver 7.5.00.26 does not work for them. I got the hint to use ODBC driver 7.3 (see [lists.mysql.com/maxdb/25667|lists.mysql.com/maxdb/25667]). Do MaxDB 7.6 and ODBC driver 7.3 work together?
All Win98 clients may be replaced by WinXP clients in the near future. Then, an upgrade may be reasonable.
>
> - Are you really putting your data into the DBA schema? Don't do that, ever.
> DBM/SUPERDBA (the sysdba-schemas) are reserved for the MaxDB system tables.
> Create a user/schema for your application data and put your tables into that.
>
> KR Lars
In the first MaxDB version I used, schemas were not available. I haven't changed it afterwards. Is there an easy way to "move an existing table into a new schema"?
Michael -
Strange issue with POF: byte array with the value 94
This is a somewhat strange issue we’ve managed to reduce to this test case. We’ve also seen similar issues with chars and shorts as well. It’s only a problem if the byte value inside the byte array is equal to 94! A value of 93, 95, etc, seems to be ok.
Given the class below, both the value in the byte array and the single byte value come back wrong when deserializing. The value inside the byte array isn't what we put in (we get [75] instead of [94]), and the single byte value is null (not 114).
Pof object code:
package com.test;
import java.io.IOException;
import java.util.Arrays;
import com.tangosol.io.pof.PofReader;
import com.tangosol.io.pof.PofWriter;
import com.tangosol.io.pof.PortableObject;
public class PofObject1 implements PortableObject {
private byte[] byteArray;
private byte byteValue;
public void setValues() {
    byteArray = new byte[] {94};
    byteValue = 114;
}
@Override
public void readExternal(PofReader reader) throws IOException {
    Object byteArray = reader.readObjectArray(0, null);
    Object byteValue = reader.readObject(1);
    System.out.println(Arrays.toString((Object[]) byteArray));
    System.out.println(byteValue);
    if (byteValue == null) throw new IOException("byteValue is null!");
}
@Override
public void writeExternal(PofWriter writer) throws IOException {
    writer.writeObject(0, byteArray);
    writer.writeObject(1, byteValue);
}
}
Using writer.writeObjectArray(0, byteArray); instead of writer.writeObject(0, byteArray); doesn't help. In this case byteArray would be of type Object[] (as accessed through reflection).
This is simply put in to a distributed cache and then fetched back. No EPs, listeners or stuff like that involved:
public static void main(String... args) throws Exception {
    NamedCache cache = CacheFactory.getCache("my-cache");
    PofObject1 o = new PofObject1();
    o.setValues();
    cache.put("key1", o);
    cache.get("key1");
}
We have only tried it with Coherence 3.7.1.3.
Cache config file:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
<cache-config>
<caching-scheme-mapping>
<cache-mapping>
<cache-name>my-cache</cache-name>
<scheme-name>my-cache</scheme-name>
</cache-mapping>
</caching-scheme-mapping>
<caching-schemes>
<distributed-scheme>
<scheme-name>my-cache</scheme-name>
<service-name>my-cache</service-name>
<serializer>
<class-name>
com.tangosol.io.pof.ConfigurablePofContext
</class-name>
<init-params>
<init-param>
<param-type>string</param-type>
<param-value>pof-config.xml</param-value>
</init-param>
</init-params>
</serializer>
<lease-granularity>thread</lease-granularity>
<thread-count>10</thread-count>
<backing-map-scheme>
<local-scheme>
</local-scheme>
</backing-map-scheme>
<autostart>true</autostart>
</distributed-scheme>
</caching-schemes>
</cache-config>
POF config file:
<?xml version="1.0"?>
<pof-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://xmlns.oracle.com/coherence/coherence-pof-config"
xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-pof-config coherence-pof-config.xsd">
<user-type-list>
<!-- coherence POF user types -->
<include>coherence-pof-config.xml</include>
<user-type>
<type-id>1460</type-id>
<class-name>com.test.PofObject1</class-name>
</user-type>
</user-type-list>
</pof-config>
Hi,
POF uses certain byte values as an optimization to represent well-known values of certain object types, e.g. boolean true and false, some very small numbers, null, and so on. When you read/write a generic Object instead of using the correct type-specific method, I suspect POF gets confused about the type and value the field should have.
There are a number of cases where POF does not know what the type is. Numbers are one of these: for example, if I stored a long with the value 10, on deserialization POF would not know whether that was an int, a long, a double, etc., so you have to use the correct method to get it back. Collections are another: if you serialize a Set, all POF knows is that you have serialized some sort of Collection, so unless you are specific when deserializing you will get back a List.
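As a sketch of that advice applied to the posted class (assuming Coherence's type-specific PofReader/PofWriter methods readByteArray/writeByteArray and readByte/writeByte; this fragment is not from the original post and needs the Coherence jar to compile):

```java
@Override
public void readExternal(PofReader reader) throws IOException {
    // Read with the same types that were written, instead of readObject():
    byteArray = reader.readByteArray(0);
    byteValue = reader.readByte(1);
}

@Override
public void writeExternal(PofWriter writer) throws IOException {
    writer.writeByteArray(0, byteArray);
    writer.writeByte(1, byteValue);
}
```

With matching type-specific calls on both sides, POF no longer has to guess whether a stored value is a boxed Byte, a small int, or one of its compact well-known values.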
JK -
How many bytes does a refresh check on the DDLOG table cost?
Hello,
every "rdisp/bufrefreshtime" seconds, each application server checks the DDLOG table on the database to see whether any of its buffered tables, or entries of those tables, have become invalid.
Only tables or table entries that are invalid appear in the DDLOG table. Once an application server knows which of its tables are invalid, it can synchronize them with the next read access.
Does anybody know how many bytes such a check costs?
The whole DDLOG must be read by each application server, so it depends on the number of entries in DDLOG.
Does anybody know?
thx, holger
Hi,
apart from the system fields and some timestamps, everything is stored in a raw field.
Checking FM SBUF_SEL_DDLOG_RECS I found some additional info:
- There are several synchronization classes
- Classes 8 and 16 don't contain table or key info -> complete buffer refresh
- Other classes should have a table name
- In those classes there is an option for a key definition
-> I guess generic and single-record buffers are handled with the corresponding key fields; fully buffered tables are probably handled without key fields.
An entry in DDLOG itself is the flag / mark for: this buffer is invalid.
It's obviously single/generic key specific - otherwise the whole concept of single/generic key would be obsolete.
Christian -
How can I filter to find photos NOT pinned to a map? I have 28,000 photos, with many mapped and many not. The Search function does not include GPS data, and I haven't found a way to search the metadata inside or outside of Elements.
-
Why does Apple release Siri in beta with many problems
why does Apple release Siri in beta with many problems
terryfromprescott wrote:
why does Apple release Siri in beta with many problems
That is the point of a beta version.
"Software in the beta phase will generally have many more bugs in it than completed software, as well as speed/performance issues. The focus of beta testing is reducing impacts to users, often incorporating usability testing."
http://en.wikipedia.org/wiki/Beta_version#Beta -
How many bytes does a DATE use?
Having trouble finding this in the 10g documentation. How many bytes does a DATE data type use?
thanks.
Take a look at the online documentation linked below:
Oracle® Database SQL Reference
10g Release 2 (10.2)
Part Number B14200-02
Oracle Built-in Datatypes
especially datatype code 12.
Valid date range from January 1, 4712 BC to December 31, 9999 AD. The default format is determined explicitly by the NLS_DATE_FORMAT parameter or implicitly by the NLS_TERRITORY parameter. The size is fixed at 7 bytes. This datatype contains the datetime fields YEAR, MONTH, DAY, HOUR, MINUTE, and SECOND. It does not have fractional seconds or a time zone.
Nicolas. -
I have a 2-PMS-color logo with many tints of the 2 colors in it. When I replace the swatches in the logo with new colors, they convert from Book Color to CMYK. Can the printer work with that? How can I keep it a 2-color separation?
MaryFl
This is what I understand: you have a logo in a document that uses PMS 'A' and PMS 'B' colors, and you want to change to colors PMS 'C' and PMS 'D'. If this is correct, there are different ways to do this, but here is a traditional way:
Make sure Select Same Tint is turned OFF in the Preferences. And all objects are unlocked in the document.
Get the two PMS (C and D) in the document's Swatches Panel.
Select PMS 'A' and bring the 'Fill' box in focus.
From the Select menu choose Same > Fill Color.
Click on PMS 'C' in Swatches panel to replace all PMS 'A' fills with PMS 'C'.
Deselect all.
Again select PMS 'A' and this time bring 'Stroke' box in focus.
From the Select menu choose Same > Stroke Color.
Click on PMS 'C' in Swatches panel to replace all PMS 'A' strokes with PMS 'C'.
Deselect all.
Similarly, using the same approach, replace PMS 'B' with PMS 'D'.
If you have CS3 / CS4 and the PMS colors in the logo are easily identifiable, you can use Live Color to change the colors pretty easily.
Hope this helps!
- Neeraj
New Note: If there are gradients / blends that use these PMS colors the above approach would not work, but Live Color can. -
Sparse table with many columns
Hi,
I have a table that contains around 800 columns. The table is a sparse table such that many rows
contain up to 50 populated columns (The others contain NULL).
My questions are:
1. Table that contains many columns can cause a performance problem? Is there an alternative option to
hold table with many columns efficiently?
2. Does a row that contains NULL values consume storage space?
Thanks
dyahav
[NULLs Indicate Absence of Value|http://download-west.oracle.com/docs/cd/B14117_01/server.101/b10743/schema.htm#sthref725]
A null is the absence of a value in a column of a row. Nulls indicate missing, unknown, or inapplicable data. A null should not be used to imply any other value, such as zero. A column allows nulls unless a NOT NULL or PRIMARY KEY integrity constraint has been defined for the column, in which case no row can be inserted without a value for that column.
Nulls are stored in the database if they fall between columns with data values. In these cases they require 1 byte to store the length of the column (zero).
Trailing nulls in a row require no storage because a new row header signals that the remaining columns in the previous row are null. For example, if the last three columns of a table are null, no information is stored for those columns. In tables with many columns, the columns more likely to contain nulls should be defined last to conserve disk space.
Most comparisons between nulls and other values are by definition neither true nor false, but unknown. To identify nulls in SQL, use the IS NULL predicate. Use the SQL function NVL to convert nulls to non-null values.
Nulls are not indexed, except when the cluster key column value is null or the index is a bitmap index.
My guess for efficiently storing this information would be to take any columns that are almost always null and place them at the end of the table definition so they don't consume any space.
HTH! -
OutputStream: How many bytes are sent?
Dear Friends,
I use the following code to upload a file.
byte[] buf = new byte[5000];
int nread;
synchronized (in) {
    while ((nread = in.read(buf, 0, buf.length)) >= 0) {
        // Transfer
        out.write(buf, 0, nread);
        out.flush();
    }
}
buf = null;
But how can I know how many bytes have already been sent?
Is there a good way to do that?
Thank you!
With best regards
Inno
OK, but don't be scared ;-)
EDIT: Does someone know why?
Do you need more information?
Only the function connect(), pipe() and the declarations are important.
Ignore all the other ones, please. :-)
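Before the full listing, a simpler alternative sketch: instead of counting inside pipe(), you can wrap any OutputStream in a small counting decorator and ask it at any time how many bytes have passed through (the class name CountingOutputStream is my own invention, not from the original code):

```java
import java.io.ByteArrayOutputStream;
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Wraps any OutputStream and counts the bytes written through it.
public class CountingOutputStream extends FilterOutputStream {
    private long count;

    public CountingOutputStream(OutputStream out) {
        super(out);
    }

    @Override
    public void write(int b) throws IOException {
        out.write(b);
        count++;
    }

    @Override
    public void write(byte[] b, int off, int len) throws IOException {
        // Delegate directly, bypassing FilterOutputStream's byte-by-byte default.
        out.write(b, off, len);
        count += len;
    }

    public long getCount() {
        return count;
    }

    public static void main(String[] args) throws IOException {
        CountingOutputStream cos = new CountingOutputStream(new ByteArrayOutputStream());
        cos.write(new byte[5000]); // one full buffer
        cos.write('x');            // one extra byte
        System.out.println(cos.getCount()); // 5001
    }
}
```

You would wrap the connection's output stream once (new CountingOutputStream(connection.getOutputStream())) and read getCount() wherever progress is needed, without touching the transfer loop itself.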
import java.applet.Applet;
import java.io.OutputStream;
import java.net.URLConnection;
import java.net.URL;
import java.io.IOException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.HashMap;
import java.util.Map;
import java.io.File;
import java.io.InputStream;
import java.util.Random;
import java.io.FileInputStream;
import java.util.Iterator;
import javax.swing.JProgressBar;
/**
 * <p>Title: Client HTTP Request class</p>
 * <p>Description: this class helps to send POST HTTP requests with various form data,
 * including files. Cookies can be added to be included in the request.</p>
 * @author Vlad Patryshev
 * @version 1.0
 */
public class ClientHttpRequest extends Thread {
URLConnection connection;
OutputStream os = null;
Map cookies = new HashMap();
long filesize;
private OutputStream osw=null;
protected void connect() throws IOException {
    if (os == null) os = connection.getOutputStream();
}
protected void write(char c) throws IOException {
    connect();
    os.write(c);
}
protected void write(String s) throws IOException {
    connect();
    os.write(s.getBytes());
}
protected void newline() throws IOException {
    connect();
    write("\r\n");
}
protected void writeln(String s) throws IOException {
    connect();
    write(s);
    newline();
}
private Random random = new Random();
protected String randomString() {
    return Long.toString(random.nextLong(), 36);
}
String boundary = "---------------------------" + randomString() + randomString() + randomString();
private void boundary() throws IOException {
    write("--");
    write(boundary);
}
* Creates a new multipart POST HTTP request on a freshly opened URLConnection
* @param connection an already open URL connection
* @throws IOException
public ClientHttpRequest(URLConnection connection) throws IOException {
    this.connection = connection;
    connection.setDoOutput(true);
    connection.setRequestProperty("Content-Type",
        "multipart/form-data; boundary=" + boundary);
    connection.addRequestProperty("Accept-Encoding", "gzip,deflate"); // Bugfix by AS: needed for PHP.
    connection.setUseCaches(true);
}
* Creates a new multipart POST HTTP request for a specified URL
* @param url the URL to send request to
* @throws IOException
public ClientHttpRequest(URL url) throws IOException {
    this(url.openConnection());
}
public ClientHttpRequest() throws IOException {
}
* Creates a new multipart POST HTTP request for a specified URL string
* @param urlString the string representation of the URL to send request to
* @throws IOException
public ClientHttpRequest(String urlString) throws IOException {
    this(new URL(urlString));
}
private void postCookies() {
    StringBuffer cookieList = new StringBuffer();
    for (Iterator i = cookies.entrySet().iterator(); i.hasNext();) {
        Map.Entry entry = (Map.Entry) (i.next());
        cookieList.append(entry.getKey().toString() + "=" + entry.getValue());
        if (i.hasNext()) {
            cookieList.append("; ");
        }
    }
    if (cookieList.length() > 0) {
        connection.setRequestProperty("Cookie", cookieList.toString());
    }
}
* adds a cookie to the requst
* @param name cookie name
* @param value cookie value
* @throws IOException
public void setCookie(String name, String value) throws IOException {
cookies.put(name, value);
* adds cookies to the request
* @param cookies the cookie "name-to-value" map
* @throws IOException
public void setCookies(Map cookies) throws IOException {
if (cookies == null) return;
this.cookies.putAll(cookies);
* adds cookies to the request
* @param cookies array of cookie names and values (cookies[2*i] is a name, cookies[2*i + 1] is a value)
* @throws IOException
public void setCookies(String[] cookies) throws IOException {
    if (cookies == null) return;
    for (int i = 0; i < cookies.length - 1; i += 2) {
        setCookie(cookies[i], cookies[i + 1]);
    }
}
private void writeName(String name) throws IOException {
newline();
write("Content-Disposition: form-data; name=\"");
write(name);
write('"');
* adds a string parameter to the request
* @param name parameter name
* @param value parameter value
* @throws IOException
public void setParameter(String name, String value) throws IOException {
boundary();
writeName(name);
newline(); newline();
writeln(value);
private void pipe(InputStream in, OutputStream out) throws IOException {
    byte[] buf = new byte[5000];
    int nread;
    long total = 0;      // bytes sent so far
    int percentage = 0;  // percent done
    int oldpercent = 0;
    synchronized (in) {
        while ((nread = in.read(buf, 0, buf.length)) >= 0) {
            // Transfer
            out.write(buf, 0, nread);
            out.flush();
            total += nread; // how much has been sent already?
            percentage = (int) ((total * 100.0) / filesize);
            if (oldpercent < percentage) {
                SimpleDateFormat sdf = new SimpleDateFormat("HH:mm:ss");
                String time = sdf.format(new Date());
                System.out.println(time + ": Bytes sent: " + total);
                // listener.setProgressStatus(percentage);
                oldpercent = percentage;
            }
        }
    }
    buf = null;
}
* adds a file parameter to the request
* @param name parameter name
* @param filename the name of the file
* @param is input stream to read the contents of the file from
* @throws IOException
public void setParameter(String name, String filename, InputStream is) throws IOException {
boundary();
writeName(name);
write("; filename=\"");
write(filename);
write('"');
newline();
write("Content-Type: ");
String type = connection.guessContentTypeFromName(filename);
if (type == null) type = "application/octet-stream";
writeln(type);
newline();
pipe(is, os);
newline();
* adds a file parameter to the request
* @param name parameter name
* @param file the file to upload
* @throws IOException
public void setParameter(String name, File file) throws IOException {
filesize = file.length();
setParameter(name, file.getPath(), new FileInputStream(file));
* adds a parameter to the request; if the parameter is a File, the file is uploaded, otherwise the string value of the parameter is passed in the request
* @param name parameter name
* @param object parameter value, a File or anything else that can be stringified
* @throws IOException
public void setParameter(String name, Object object) throws IOException {
    if (object instanceof File) {
        setParameter(name, (File) object);
    } else {
        setParameter(name, object.toString());
    }
}
* adds parameters to the request
* @param parameters "name-to-value" map of parameters; if a value is a file, the file is uploaded, otherwise it is stringified and sent in the request
* @throws IOException
public void setParameters(Map parameters) throws IOException {
if (parameters == null) return;
for (Iterator i = parameters.entrySet().iterator(); i.hasNext();) {
Map.Entry entry = (Map.Entry)i.next();
setParameter(entry.getKey().toString(), entry.getValue());
* adds parameters to the request
* @param parameters array of parameter names and values (parameters[2*i] is a name, parameters[2*i + 1] is a value); if a value is a file, the file is uploaded, otherwise it is stringified and sent in the request
* @throws IOException
public void setParameters(Object[] parameters) throws IOException {
if (parameters == null) return;
for (int i = 0; i < parameters.length - 1; i+=2) {
setParameter(parameters[i].toString(), parameters[i+1]);
* posts the requests to the server, with all the cookies and parameters that were added
* @return input stream with the server response
* @throws IOException
public InputStream post() throws IOException {
    boundary();
    writeln("--");
    os.close();
    return connection.getInputStream();
}
* posts the requests to the server, with all the cookies and parameters that were added before (if any), and with parameters that are passed in the argument
* @param parameters request parameters
* @return input stream with the server response
* @throws IOException
* @see setParameters
public InputStream post(Map parameters) throws IOException {
setParameters(parameters);
return post();
* posts the requests to the server, with all the cookies and parameters that were added before (if any), and with parameters that are passed in the argument
* @param parameters request parameters
* @return input stream with the server response
* @throws IOException
* @see setParameters
public InputStream post(Object[] parameters) throws IOException {
setParameters(parameters);
return post();
* posts the requests to the server, with all the cookies and parameters that were added before (if any), and with cookies and parameters that are passed in the arguments
* @param cookies request cookies
* @param parameters request parameters
* @return input stream with the server response
* @throws IOException
* @see setParameters
* @see setCookies
public InputStream post(Map cookies, Map parameters) throws IOException {
setCookies(cookies);
setParameters(parameters);
return post();
* posts the requests to the server, with all the cookies and parameters that were added before (if any), and with cookies and parameters that are passed in the arguments
* @param cookies request cookies
* @param parameters request parameters
* @return input stream with the server response
* @throws IOException
* @see setParameters
* @see setCookies
public InputStream post(String[] cookies, Object[] parameters) throws IOException {
setCookies(cookies);
setParameters(parameters);
return post();
/**
 * post the POST request to the server, with the specified parameter
 * @param name parameter name
 * @param value parameter value
 * @return input stream with the server response
 * @throws IOException
 * @see setParameter
 */
public InputStream post(String name, Object value) throws IOException {
    setParameter(name, value);
    return post();
}

/**
 * post the POST request to the server, with the specified parameters
 * @param name1 first parameter name
 * @param value1 first parameter value
 * @param name2 second parameter name
 * @param value2 second parameter value
 * @return input stream with the server response
 * @throws IOException
 * @see setParameter
 */
public InputStream post(String name1, Object value1, String name2, Object value2) throws IOException {
    setParameter(name1, value1);
    return post(name2, value2);
}

/**
 * post the POST request to the server, with the specified parameters
 * @param name1 first parameter name
 * @param value1 first parameter value
 * @param name2 second parameter name
 * @param value2 second parameter value
 * @param name3 third parameter name
 * @param value3 third parameter value
 * @return input stream with the server response
 * @throws IOException
 * @see setParameter
 */
public InputStream post(String name1, Object value1, String name2, Object value2, String name3, Object value3) throws IOException {
    setParameter(name1, value1);
    return post(name2, value2, name3, value3);
}

/**
 * post the POST request to the server, with the specified parameters
 * @param name1 first parameter name
 * @param value1 first parameter value
 * @param name2 second parameter name
 * @param value2 second parameter value
 * @param name3 third parameter name
 * @param value3 third parameter value
 * @param name4 fourth parameter name
 * @param value4 fourth parameter value
 * @return input stream with the server response
 * @throws IOException
 * @see setParameter
 */
public InputStream post(String name1, Object value1, String name2, Object value2, String name3, Object value3, String name4, Object value4) throws IOException {
    setParameter(name1, value1);
    return post(name2, value2, name3, value3, name4, value4);
}
/**
 * posts a new request to specified URL, with parameters that are passed in the argument
 * @param parameters request parameters
 * @return input stream with the server response
 * @throws IOException
 * @see setParameters
 */
public InputStream post(URL url, Map parameters) throws IOException {
    return new ClientHttpRequest(url).post(parameters);
}

/**
 * posts a new request to specified URL, with parameters that are passed in the argument
 * @param parameters request parameters
 * @return input stream with the server response
 * @throws IOException
 * @see setParameters
 */
public InputStream post(URL url, Object[] parameters) throws IOException {
    return new ClientHttpRequest(url).post(parameters);
}

/**
 * posts a new request to specified URL, with cookies and parameters that are passed in the argument
 * @param cookies request cookies
 * @param parameters request parameters
 * @return input stream with the server response
 * @throws IOException
 * @see setCookies
 * @see setParameters
 */
public InputStream post(URL url, Map cookies, Map parameters) throws IOException {
    return new ClientHttpRequest(url).post(cookies, parameters);
}

/**
 * posts a new request to specified URL, with cookies and parameters that are passed in the argument
 * @param cookies request cookies
 * @param parameters request parameters
 * @return input stream with the server response
 * @throws IOException
 * @see setCookies
 * @see setParameters
 */
public InputStream post(URL url, String[] cookies, Object[] parameters) throws IOException {
    return new ClientHttpRequest(url).post(cookies, parameters);
}
/**
 * post the POST request to the specified URL, with the specified parameter
 * @param name1 parameter name
 * @param value1 parameter value
 * @return input stream with the server response
 * @throws IOException
 * @see setParameter
 */
public InputStream post(URL url, String name1, Object value1) throws IOException {
    return new ClientHttpRequest(url).post(name1, value1);
}

/**
 * post the POST request to the specified URL, with the specified parameters
 * @param name1 first parameter name
 * @param value1 first parameter value
 * @param name2 second parameter name
 * @param value2 second parameter value
 * @return input stream with the server response
 * @throws IOException
 * @see setParameter
 */
public InputStream post(URL url, String name1, Object value1, String name2, Object value2) throws IOException {
    return new ClientHttpRequest(url).post(name1, value1, name2, value2);
}

/**
 * post the POST request to the specified URL, with the specified parameters
 * @param name1 first parameter name
 * @param value1 first parameter value
 * @param name2 second parameter name
 * @param value2 second parameter value
 * @param name3 third parameter name
 * @param value3 third parameter value
 * @return input stream with the server response
 * @throws IOException
 * @see setParameter
 */
public InputStream post(URL url, String name1, Object value1, String name2, Object value2, String name3, Object value3) throws IOException {
    return new ClientHttpRequest(url).post(name1, value1, name2, value2, name3, value3);
}

/**
 * post the POST request to the specified URL, with the specified parameters
 * @param name1 first parameter name
 * @param value1 first parameter value
 * @param name2 second parameter name
 * @param value2 second parameter value
 * @param name3 third parameter name
 * @param value3 third parameter value
 * @param name4 fourth parameter name
 * @param value4 fourth parameter value
 * @return input stream with the server response
 * @throws IOException
 * @see setParameter
 */
public InputStream post(URL url, String name1, Object value1, String name2, Object value2, String name3, Object value3, String name4, Object value4) throws IOException {
    return new ClientHttpRequest(url).post(name1, value1, name2, value2, name3, value3, name4, value4);
}
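Since the closing writeln("--") in post() is what terminates a multipart/form-data body, a minimal sketch of the wire format these methods emit may help. Everything here is illustrative: the boundary value, class name, and helper methods are mine, not actual fields of ClientHttpRequest.

```java
// Minimal sketch of the multipart/form-data body a client like the one
// above produces. The boundary string is arbitrary; a real client picks
// one that cannot occur inside the data.
public class MultipartSketch {
    static final String BOUNDARY = "----sketch-boundary-7d159c1302d0";
    static final String CRLF = "\r\n";

    // One text field: boundary line, headers, blank line, value.
    static String textPart(String name, String value) {
        return "--" + BOUNDARY + CRLF
             + "Content-Disposition: form-data; name=\"" + name + "\"" + CRLF
             + CRLF
             + value + CRLF;
    }

    // Whole body: each part in turn, then the final boundary followed by
    // "--", which corresponds to the writeln("--") in post() above.
    static String body(String... nameValuePairs) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < nameValuePairs.length; i += 2) {
            sb.append(textPart(nameValuePairs[i], nameValuePairs[i + 1]));
        }
        sb.append("--").append(BOUNDARY).append("--").append(CRLF);
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(body("user", "inno", "lang", "java"));
    }
}
```

The Content-Type header sent with the request would name the same boundary, e.g. multipart/form-data; boundary=...; the server splits the body on it.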
With best regards,
Inno
Message was edited by:
Innocentus -
Sending command apdu with a byte array as CDATA
Hi,
I am learning java card as part of my final year project. So far I think I can do most of the basic things but I have got stuck at one particular point.
I know that there are different constructors for creating a command apdu object and a number of these constructors take an array of bytes as CDATA values.
My problem is, how do I access this array of data on the card side, given that apdu.getBuffer() returns an array of bytes? And what is actually at the apdu.getBuffer()[ISO7816.OFFSET_CDATA] location when you send a command APDU object using such a constructor?
regards
Edited by: 992194 on 06-Mar-2013 06:12

992194 wrote:
(..) I should have mentioned earlier that my card uses JC 2.2.1, and I have read in different places that this version does not support the ExtendedLength facility.
Indeed.
Also I understand the semantics of apdu.getBuffer()[ISO7816.OFFSET_CDATA], that is, the first byte of the command data. My question is: this command data was initially supplied as an array of bytes, something like this:
new CommandAPDU(CLA, INS, P1, P2, DATA_ARRAY, Le)
So when you call:
byte[] buffer = apdu.getBuffer()
does this mean that the byte values inside DATA_ARRAY automatically occupy locations buffer[ISO7816.OFFSET_CDATA] onwards inside the buffer?
Yes. The length would be<tt> (short)(buffer[ISO7816.OFFSET_LC]&0xFF) </tt>. Notice the<tt> &0xFF </tt> is a must above 127 bytes.
Or is there a mechanism for extracting the DATA_ARRAY array itself?
No. In fact, in the interest of performance and portability in environments with little memory, the usual coding style is to pass<tt> buffer </tt>, an offset within it, and the length, rather than making an object, which would require a copy. Welcome to the real world of Java Card. -
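To make the buffer layout concrete, here is a plain-Java sketch (deliberately not Java Card code, so it runs anywhere). The OFFSET_* constants mirror the ISO 7816-4 header positions that the Java Card ISO7816 interface defines, and commandBuffer() is a hypothetical stand-in for what apdu.getBuffer() would contain after setIncomingAndReceive() for a short APDU:

```java
public class ApduBufferSketch {
    // ISO 7816-4 short-APDU header offsets (same values as
    // ISO7816.OFFSET_CLA ... OFFSET_CDATA in the Java Card API).
    static final int OFFSET_CLA = 0, OFFSET_INS = 1, OFFSET_P1 = 2,
                     OFFSET_P2 = 3, OFFSET_LC = 4, OFFSET_CDATA = 5;

    // Builds the buffer the applet sees for
    // new CommandAPDU(CLA, INS, P1, P2, DATA_ARRAY): header first,
    // then DATA_ARRAY copied in starting at OFFSET_CDATA.
    static byte[] commandBuffer(int cla, int ins, int p1, int p2, byte[] data) {
        byte[] buf = new byte[OFFSET_CDATA + data.length];
        buf[OFFSET_CLA] = (byte) cla;
        buf[OFFSET_INS] = (byte) ins;
        buf[OFFSET_P1] = (byte) p1;
        buf[OFFSET_P2] = (byte) p2;
        buf[OFFSET_LC] = (byte) data.length;   // Lc: length of CDATA
        System.arraycopy(data, 0, buf, OFFSET_CDATA, data.length);
        return buf;
    }

    // Reading Lc on the card side: the & 0xFF undoes byte sign extension,
    // which matters as soon as Lc is above 127.
    static short lc(byte[] buf) {
        return (short) (buf[OFFSET_LC] & 0xFF);
    }

    public static void main(String[] args) {
        byte[] data = new byte[200];           // 200 > 127: sign bit is set
        byte[] buf = commandBuffer(0x00, 0xA4, 0x04, 0x00, data);
        System.out.println("Lc = " + lc(buf)); // prints "Lc = 200"
        System.out.println(buf[OFFSET_LC]);    // prints "-56" without the mask
    }
}
```

This is why there is no way to get DATA_ARRAY back as an object: the data only exists as a span of the APDU buffer, addressed by offset and length.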
Deployed EJB.jar has some classes with zero bytes
Has anyone hit a situation where Windows Jdev 902 puts class files into the ....EJB.jar file with zero bytes? In my case, the eight zero length classes are: seven My...Row.class files and the MyApplicationModule.class file. They all are in the /mypackage/common/ package and are in the ...classes/mypackage/common/ file folder.
I can use PKZIP to delete the zero length classes and add them back into the .jar and everything works as expected.
There are no error messages generated during the deployment that creates the .jar file that has the problem.

This was posted on another thread by the JDev team...
We have been following this issue in support TAR 2274825.996. I sent some technical detail to the support rep on July 10, but it looks like that information never got added to the TAR. Well, FWIW, here it is:
The user may be running out of open files. The
stdio library which underlies Win32 programs (like the JVM) has a
limit on the number of files that can be open concurrently. The
limit is around 2000-2100 open files (I tested this on NT, 2000, and
XP), and the limit is on a per-process basis. If the user is running
into this limit because JDev has too many open files, then the utility
methods we use to open JAR files or other streams could be receiving
a java.io.FileNotFoundException exception with the message "Too many
open files". To verify this: go to the Windows Task Manager, go to
View | Select Columns... and be sure the "Handle Count" checkbox is
checked. Does the jdev.exe process have a disproportionately higher
number of handles than other processes? If the Handle Count is above
2000 (approx), JDev might be running out of open files. (I say "might" because
the Handle Count is for many different kinds of Win32 kernel objects,
not just for file handles.) If the Handle Count is the problem, then
it would explain the transient, nondeterministic behavior that the
user is reporting. Because the Handle limit is per-process, it would
explain why the user is able to use PKZIP or WinZip to repair the JAR
file. Try closing editors before deploying and see if that helps.
If the user confirms that the Handle Count is excessive, then we may
have a Handle leak of some kind in the product that will need to be
fixed.
Also try running JDev using "jdev -hotspot" on the command line instead
of just "jdev" and see if the behavior changes.
Hope that helps. We are monitoring the TAR, but no one has been able to reproduce the problem you are reporting, even with the files attached to the TAR, and yours is the only report so far that we've received about this specific problem.
I have added several more entities and views into app module. As I added each one, the list of zero byte jar classes would change, but always about 7-8 bad ones. Now that I am no longer adding new entities/views, the problem has become stable and repeatable. The same classes ALWAYS show up with zero bytes.
I monitored jdevw.exe while doing the deploy, and the number of handles never went above 675. I ran with jdevw.exe -hotspot and the results were exactly the same.
thanks,
Roger -
Having issues finding out how many bytes are sent/received from a socket.
Hello everyone.
I've searched the forums and also google and it seems I can't find a way to figure out how many bytes are sent from a socket and then how many bytes are read in from a socket.
My server program accepts a string (an event) and I parse that string up, gathering the relevant information and I need to send it to another server for more processing.
Inside my server program, after receiving the data (a string), I then open another port and send it off to the other server. But I would like to know how many bytes I send from my server to the other server via the client socket.
So at the end of the connection I can compare the lengths to make sure, I sent as many bytes as the server on the other end received.
Here's my run() function in my server program (my server is multi threaded, so on each new client connection it spawns a new thread and does the following):
NOTE: this line is where it sends the string to the other server:
//sending the string version of the message object to the
//output server
out.println(msg.toString());
//SERVER
public class MultiThreadServer implements Runnable {
    Socket csocket;

    MultiThreadServer(Socket csocket) {
        this.csocket = csocket;
    }

    public void run() {
        //setting up sockets
        Socket outputServ = null;
        //create a message database to store events
        MessageDB testDB = new MessageDB();
        try {
            //setting up channel to recieve events from the omnibus server
            BufferedReader in = new BufferedReader(new InputStreamReader(
                    csocket.getInputStream()));
            //This socket will be used to send events to the z/OS reciever
            //we will need a new socket each time because this is a multi-threaded
            //server thus, the z/OS reciever (outputServ) will need to be
            //multi threaded to handle all the output.
            outputServ = new Socket("localhost", 1234);
            //Setting up channel to send data to outputserv
            PrintWriter out = new PrintWriter(new OutputStreamWriter(outputServ
                    .getOutputStream()));
            String input;
            //accepting events from omnibus server and storing them
            //in a string for later processing.
            while ((input = in.readLine()) != null) {
                //accepting and printing out events from omnibus server
                //also printing out connected client information
                System.out.println("Event from: "
                        + csocket.getInetAddress().getHostName() + "-> "
                        + input + "\n");
                System.out.println("Waiting for data...");
                //---------putting string into a message object-------------///
                // creating a scanner to parse
                Scanner scanner = new Scanner(input);
                Scanner scannerPop = new Scanner(input);
                //Creating a new message to hold information
                Message msg = new Message();
                //place Scanner object here:
                MessageParser.printTokens(scanner);
                MessageParser.populateMessage(scannerPop, msg, input);
                //calculating the length of the message once its populated with data
                int length = msg.toString().length();
                msg.SizeOfPacket = length;
                //Printing test message
                System.out.println("-------PRINTING MESSAGE BEFORE INSERT IN DB------\n");
                System.out.println(msg.toString());
                System.out.println("----------END PRINT----------\n");
                //adding message to database
                testDB.add(msg);
                System.out.println("-------Accessing data from Map----\n");
                testDB.print();
                //---------------End of putting string into a message object----//
                //sending the string version of the message object to the
                //output server
                out.println(msg.toString());
                System.out.println("Waiting for data...");
                out.flush();
            }
            //cleaning up
            System.out.println("Connection closed by client.");
            in.close();
            out.close();
            outputServ.close();
            csocket.close();
        } catch (SocketException e) {
            System.err.println("Socket error: " + e);
        } catch (UnknownHostException e) {
            System.out.println("Unknown host: " + e);
        } catch (IOException e) {
            System.out.println("IOException: " + e);
        }
    }
}

Here's the other server that is accepting the string:
public class MultiThreadServer implements Runnable {
    Socket csocket;

    MultiThreadServer(Socket csocket) {
        this.csocket = csocket;
    }

    public void run() {
        try {
            //setting up channel to recieve events from the parser server
            BufferedReader in = new BufferedReader(new InputStreamReader(
                    csocket.getInputStream()));
            String input;
            while ((input = in.readLine()) != null) {
                //accepting and printing out events from omnibus server
                //also printing out connected client information
                System.out.println("Event from: "
                        + csocket.getInetAddress().getHostName() + "-> "
                        + input + "\n");
                System.out.println("Lenght of the string was: " + input.length());
                System.out.println("Waiting for data...");
            }
            //cleaning up
            System.out.println("Connection closed by client.");
            in.close();
            csocket.close();
        } catch (IOException e) {
            System.out.println(e);
            e.printStackTrace();
        }
    }
}

Here's an example of how the program works right now:
Someone sends me a string such as this:
Enter port to run server on:
5656
Listening on : ServerSocket[addr=0.0.0.0/0.0.0.0,port=0,localport=5656]
Waiting for client connection...
Socket[addr=/127.0.0.1,port=4919,localport=5656] connected.
hostname: localhost
Ip address: 127.0.0.1:5656
Waiting for data...
Event from: localhost-> UPDATE: "@busch2.raleigh.ibm.com->NmosPingFail1",424,"9.27.132.139","","Omnibus","Precision Monitor Probe","Precision Monitor","@busch2.raleigh.ibm.com->NmosPingFail",5,"Ping fail for 9.27.132.139: ICMP reply timed out",07/05/07 12:29:12,07/03/07 18:02:31,07/05/07 12:29:09,07/05/07 12:29:09,0,1,194,8000,0,"",65534,0,0,0,"NmosPingFail",0,0,0,"","",0,0,"",0,"0",120,1,"9.27.132.139","","","","dyn9027132107.raleigh.ibm.com","","","",0,0,"","","NCOMS",424,""
Now my program makes it all nice and filters out the junk and resends the new string to the other server running here:
Enter port to run server on:
1234
Listening on : ServerSocket[addr=0.0.0.0/0.0.0.0,port=0,localport=1234]
Waiting for client connection...
Socket[addr=/127.0.0.1,port=4920,localport=1234] connected.
Parser client connected.
hostname: localhost
Ip address: 127.0.0.1:1234
Event from: localhost-> PacketType: UPDATE , SizeOfPacket: 577 , PacketID: 1, Identifer: UPDATE: "@busch2.raleigh.ibm.com->NmosPingFail1" , Serial: 424 , Node: "9.27.132.139" , NodeAlias: "" , Manager: "Omnibus" , Agent: "Precision Monitor Probe" , AlertGroup: "Precision Monitor" , AlertKey: "@busch2.raleigh.ibm.com->NmosPingFail" , Severity: 5 , Summary: "Ping fail for 9.27.132.139: ICMP reply timed out",StateChange: 07/05/07 12:29:12 , FirstOccurance: 07/03/07 18:02:31 , LastOccurance: 07/05/07 12:29:09 , InternalLast: 07/05/07 12:29:09 , EventId: "NmosPingFail" , LocalNodeAlias: "9.27.132.139"
Lenght of the string was: 579
The length of the final string I sent is 577 according to the String.length() method, but when I re-read the length after the send, 2 more bytes have been added and the length is 579. I tested several cases and in all of them it adds 2 extra bytes.
Anyway, I think this is a bad solution to my problem, but it is the only one I could think of.
Any help would be great!(a) You are counting characters, not bytes, and you aren't counting the line terminators that are appended by println() and removed by readLine().
(b) You don't need to do any of this. TCP doesn't lose data. If the receiver manages get as far as reading the line terminator when reading a line, the line will be complete. Otherwise it will get an exception.
(c) You are assuming that the original input and the result of message.toString() after constructing a Message from 'input' are the same but there is no evidence to this effect in the code you've posted. Clearly this assumption is what is at fault.
(d) If you really want to count bytes, write yourself a FilterInputStream and a FilterOutputStream and wrap them around the socket streams before decorating them with the readers you are using. Have these classes count the bytes going past.
(e) Don't use PrintWriter or PrintStream on socket streams unless you like exceptions being ignored. Judging by your desire to count characters, you shouldn't like this at all. Use BufferedWriter's methods to write strings and line terminators. -
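Point (d) can be sketched as follows; the class name is mine, not a library class. Wrapped around the socket's output stream before the Writer, it counts the actual bytes on the wire, including the line-separator bytes that println() appends and readLine() strips, which is exactly where the two "extra" bytes come from:

```java
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Counts every byte written through it before delegating to the
// underlying stream. A mirror-image FilterInputStream would do the
// same on the receiving side.
public class CountingOutputStream extends FilterOutputStream {
    private long count = 0;

    public CountingOutputStream(OutputStream out) {
        super(out);
    }

    @Override
    public void write(int b) throws IOException {
        out.write(b);
        count++;
    }

    @Override
    public void write(byte[] b, int off, int len) throws IOException {
        out.write(b, off, len);  // bulk write, counted once
        count += len;
    }

    public long getCount() {
        return count;
    }
}
```

In the server above this would look like new PrintWriter(new OutputStreamWriter(new CountingOutputStream(outputServ.getOutputStream()))); after out.flush(), getCount() would report 579 for a 577-character ASCII line when the platform line separator is the two-byte "\r\n".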
Performance Issue with Selection Screen Values
Hi,
I am facing a performance issue(seems like a performance issue ) in my project.
I have a query with some RKFs and sales area in filters (single value variable which is optional).
Query is by default restricted by current month.
The Cube on which the query operates has around 400,000 records for a month.
The Cube gets loaded every three hours
When I run the query with no filters I get the output within 10~15 secs.
The issue I am facing is that, when I enter a sales area in my selection screen the query gets stuck in the data selection step. In fact we are facing the same problem if we use one or two other characteristics in our selection screen
We have aggregates/indexes etc on our cube.
Has any one faced a similar situation?
Does any one have any comments on this ?
Your help will be appreciated. Thanks

Hi A R,
Go to RSRT, enter your query name, and choose Execute + Debug. A popup with many checkboxes will appear; select the "Display Aggregates Found" option, then enter your selections on the variable screen. It will first show the names of the aggregates that already exist; continue, and after displaying all the aggregates it will list the objects involved, per cube. Copy these objects into Notepad, then repeat with your drilldowns to get the aggregates found for each drilldown, and copy those object lists as well. Sort the objects belonging to each cube in Notepad, deleting duplicates, then go to that InfoCube, open the context menu, choose Maintain Aggregates, and create an aggregate on the objects you collected.
Now try to execute the report; it should run without delays for those selections.
I hope it helps you...
Regards,
Ramki. -
How to filter certificate templates in Certificate Authority snap-in with the correct values
How to filter certificate templates in Certificate Authority snap-in with the correct values
I have a 2012 R2 server running Microsoft Certificate Authority snap-in.
I want to do a filter on a specific Certificate Template which i know exists in the 'Issued Certificates' folder.
All the documentation i can find seems to suggest i copy the certificate name and use this in the View Filter.
1). I add the 'Certificate Template' option into the Field drop-down.
2). I leave the Operation as the '=' symbol
3). I paste in just the name of the template in question. for example: 'my computers'
The search results always come back blank ('There are no items to show in this view.') even when I know there are many instances of this template. I've tried on a Windows 2008 server with the same issue.
Is there a correct value to enter for the Certificate Template name?
Can this be done easier using certutil commands?
When i run the certutil tool i can confirm i have several issued templates. Certutil -catemplates -v > c:\mytemplate_log.csv
Anybody know what I'm doing wrong?
I seem to be getting nowhere with this one.

> But it's important you are using the template name, not the display name

This is incorrect. OIDs are mapped to the *display name*, not the common name (this is true for all templates except the Machine template). That is, in order to translate a template name to the corresponding OID, you need to use the certificate template's display name. And, IIRC, the template name can be used in the filter only for V1 templates. For V2 and higher, the OID must be used.
My weblog: en-us.sysadmins.lv
PowerShell PKI Module: pspki.codeplex.com
PowerShell Cmdlet Help Editor pscmdlethelpeditor.codeplex.com
Check out new: SSL Certificate Verifier
Check out new:
PowerShell FCIV tool.