Nls_date_format to write milliseconds
I am able to write dates into the database tables down to the seconds value using nls_date_format = "dd/mm/yy hh24:mi:ss", but how do I write the milliseconds as well? What is the nls_date_format for writing milliseconds?
Please reply asap.
thanks,
Anu.
I believe milliseconds are not supported yet (supposed to be in Oracle9i).
Gio
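For anyone hitting this later: from 9i onwards, fractional seconds are carried by the TIMESTAMP datatype rather than DATE, and the format element is FF, governed by nls_timestamp_format rather than nls_date_format. A minimal sketch, with made-up table and column names:
alter session set nls_timestamp_format = 'dd/mm/yy hh24:mi:ss.ff3';
create table event_log (event_time timestamp(3));
insert into event_log values (systimestamp);
select to_char(event_time, 'dd/mm/yy hh24:mi:ss.ff3') from event_log;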
Similar Messages
-
Remove milliseconds from timestamp in write to measurement file
I'm logging data to a binary TDMS file using the Write To Measurement File Express VI. I choose my x axis to be time and see that an absolute timestamp is written with millisecond precision. I only need to-the-second precision. Is there any way to change this default behavior?
Attachments:
remove.png 10 KB
Where is your data coming from? When I open the Express VI to look, it looks like the time format doesn't determine that, but the signal data cluster coming in does. So where does the data come from, and in what form?
Putnam
Certified LabVIEW Developer
Senior Test Engineer
Currently using LV 6.1-LabVIEW 2012, RT8.5
LabVIEW Champion -
Hello,
we are currently experiencing heavy I/O problems performing proof-of-concept testing for one of our customers. Our setup is as follows:
HP ProLiant DL380 with 24GB Ram and 8 15k 72GB SAS drives
An HP P400 Raid controller with 256MB cache in RAID0 mode was used.
Win 2k8 R2 was installed on C: (a physical drive) and the database on E:
(two physical drives in RAID0, 128k stripe size).
With the remaining 5 drives, read and write tests were performed using RAID0 with a varying number of drives.
I/O performance, as measured with ATTO Disk Benchmark, increased as expected linearly with the number of drives used.
We expected to see this increased performance in the database too, and performed the following tests:
- a full table scan (FTS) on 3 different tables (hint: /*+ FULL(s) NOCACHE(s) */)
- a CTAS statement.
The system was used exclusively for testing.
The used tables:
Table 1: 312 col, 12,248 MB, 11,138,561 rows, avg len 621 bytes
Table 2: 159 col, 4288 MB, 5,441,171 rows, avg len 529 bytes
Table 3: 118 col, 360MB, 820,259 rows, avg len 266 bytes
The FTS has improved as expected. With 5 physical drives in a RAID0, a performance of
420MB/s was achieved.
In the write test, on the other hand, we were not able to achieve any improvement.
The CTAS statement always runs at about 5,000 - 6,000 blocks/s (80 MB/s).
But when we tried running several CTAS statements in different sessions, the overall speed increased as expected.
Further tests showed that the write speed also seems to depend on the number of columns: 80 MB/s was only possible with Tables 2 and 3; with Table 1, only 30 MB/s was measured.
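To make the write test concrete, the kind of CTAS meant above is sketched below (table names are placeholders; this is an illustration rather than the exact statement we ran). The PARALLEL/NOLOGGING variant is only an additional idea, not something we have measured yet:
create table t1_copy nologging as
select /*+ full(s) nocache(s) */ * from table_1 s;
-- possible variant: let parallel slaves produce the data
create table t1_copy_p parallel 4 nologging as
select /*+ full(s) parallel(s, 4) */ * from table_1 s;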
Is this maybe just an incorrectly set parameter?
What we already tried:
- changing db_writer_processes to 4 and then to 8
- Manual configuration of PGA and SGA size
- setting DB_BLOCK_SIZE to 16k
- FILESYSTEMIO_OPTIONS set to setall
- checking that Resource Manager is really disabled
Thanks for any help.
V$PARAMETERS
1 lock_name_space
2 processes 150
3 sessions 248
4 timed_statistics TRUE
5 timed_os_statistics 0
6 resource_limit FALSE
7 license_max_sessions 0
8 license_sessions_warning 0
9 cpu_count 8
10 instance_groups
11 event
12 sga_max_size 14495514624
13 use_large_pages TRUE
14 pre_page_sga FALSE
15 shared_memory_address 0
16 hi_shared_memory_address 0
17 use_indirect_data_buffers FALSE
18 lock_sga FALSE
19 processor_group_name
20 shared_pool_size 0
21 large_pool_size 0
22 java_pool_size 0
23 streams_pool_size 0
24 shared_pool_reserved_size 93952409
25 java_soft_sessionspace_limit 0
26 java_max_sessionspace_size 0
27 spfile C:\ORACLE\PRODUCT\11.2.0\DBHOME_1\DATABASE\SPFILEORATEST.ORA
28 instance_type RDBMS
29 nls_language AMERICAN
30 nls_territory AMERICA
31 nls_sort
32 nls_date_language
33 nls_date_format
34 nls_currency
35 nls_numeric_characters
36 nls_iso_currency
37 nls_calendar
38 nls_time_format
39 nls_timestamp_format
40 nls_time_tz_format
41 nls_timestamp_tz_format
42 nls_dual_currency
43 nls_comp BINARY
44 nls_length_semantics BYTE
45 nls_nchar_conv_excp FALSE
46 fileio_network_adapters
47 filesystemio_options
48 clonedb FALSE
49 disk_asynch_io TRUE
50 tape_asynch_io TRUE
51 dbwr_io_slaves 0
52 backup_tape_io_slaves FALSE
53 resource_manager_cpu_allocation 8
54 resource_manager_plan
55 cluster_interconnects
56 file_mapping FALSE
57 gcs_server_processes 0
58 active_instance_count
59 sga_target 14495514624
60 memory_target 0
61 memory_max_target 0
62 control_files E:\ORACLE\ORADATA\ORATEST\CONTROL01.CTL, C:\ORACLE\FAST_RECOVERY_AREA\ORATEST\CONTROL02.CTL
63 db_file_name_convert
64 log_file_name_convert
65 control_file_record_keep_time 7
66 db_block_buffers 0
67 db_block_checksum TYPICAL
68 db_ultra_safe OFF
69 db_block_size 8192
70 db_cache_size 0
71 db_2k_cache_size 0
72 db_4k_cache_size 0
73 db_8k_cache_size 0
74 db_16k_cache_size 0
75 db_32k_cache_size 0
76 db_keep_cache_size 0
77 db_recycle_cache_size 0
78 db_writer_processes 1
79 buffer_pool_keep
80 buffer_pool_recycle
81 db_flash_cache_file
82 db_flash_cache_size 0
83 db_cache_advice ON
84 compatible 11.2.0.0.0
85 log_archive_dest_1
86 log_archive_dest_2
87 log_archive_dest_3
88 log_archive_dest_4
89 log_archive_dest_5
90 log_archive_dest_6
91 log_archive_dest_7
92 log_archive_dest_8
93 log_archive_dest_9
94 log_archive_dest_10
95 log_archive_dest_11
96 log_archive_dest_12
97 log_archive_dest_13
98 log_archive_dest_14
99 log_archive_dest_15
100 log_archive_dest_16
101 log_archive_dest_17
102 log_archive_dest_18
103 log_archive_dest_19
104 log_archive_dest_20
105 log_archive_dest_21
106 log_archive_dest_22
107 log_archive_dest_23
108 log_archive_dest_24
109 log_archive_dest_25
110 log_archive_dest_26
111 log_archive_dest_27
112 log_archive_dest_28
113 log_archive_dest_29
114 log_archive_dest_30
115 log_archive_dest_31
116 log_archive_dest_state_1 enable
117 log_archive_dest_state_2 enable
118 log_archive_dest_state_3 enable
119 log_archive_dest_state_4 enable
120 log_archive_dest_state_5 enable
121 log_archive_dest_state_6 enable
122 log_archive_dest_state_7 enable
123 log_archive_dest_state_8 enable
124 log_archive_dest_state_9 enable
125 log_archive_dest_state_10 enable
126 log_archive_dest_state_11 enable
127 log_archive_dest_state_12 enable
128 log_archive_dest_state_13 enable
129 log_archive_dest_state_14 enable
130 log_archive_dest_state_15 enable
131 log_archive_dest_state_16 enable
132 log_archive_dest_state_17 enable
133 log_archive_dest_state_18 enable
134 log_archive_dest_state_19 enable
135 log_archive_dest_state_20 enable
136 log_archive_dest_state_21 enable
137 log_archive_dest_state_22 enable
138 log_archive_dest_state_23 enable
139 log_archive_dest_state_24 enable
140 log_archive_dest_state_25 enable
141 log_archive_dest_state_26 enable
142 log_archive_dest_state_27 enable
143 log_archive_dest_state_28 enable
144 log_archive_dest_state_29 enable
145 log_archive_dest_state_30 enable
146 log_archive_dest_state_31 enable
147 log_archive_start FALSE
148 log_archive_dest
149 log_archive_duplex_dest
150 log_archive_min_succeed_dest 1
151 standby_archive_dest %ORACLE_HOME%\RDBMS
152 fal_client
153 fal_server
154 log_archive_trace 0
155 log_archive_config
156 log_archive_local_first TRUE
157 log_archive_format ARC%S_%R.%T
158 redo_transport_user
159 log_archive_max_processes 4
160 log_buffer 32546816
161 log_checkpoint_interval 0
162 log_checkpoint_timeout 1800
163 archive_lag_target 0
164 db_files 200
165 db_file_multiblock_read_count 128
166 read_only_open_delayed FALSE
167 cluster_database FALSE
168 parallel_server FALSE
169 parallel_server_instances 1
170 cluster_database_instances 1
171 db_create_file_dest
172 db_create_online_log_dest_1
173 db_create_online_log_dest_2
174 db_create_online_log_dest_3
175 db_create_online_log_dest_4
176 db_create_online_log_dest_5
177 db_recovery_file_dest c:\oracle\fast_recovery_area
178 db_recovery_file_dest_size 4322230272
179 standby_file_management MANUAL
180 db_unrecoverable_scn_tracking TRUE
181 thread 0
182 fast_start_io_target 0
183 fast_start_mttr_target 0
184 log_checkpoints_to_alert FALSE
185 db_lost_write_protect NONE
186 recovery_parallelism 0
187 db_flashback_retention_target 1440
188 dml_locks 1088
189 replication_dependency_tracking TRUE
190 transactions 272
191 transactions_per_rollback_segment 5
192 rollback_segments
193 undo_management AUTO
194 undo_tablespace UNDOTBS1
195 undo_retention 900
196 fast_start_parallel_rollback LOW
197 resumable_timeout 0
198 instance_number 0
199 db_block_checking FALSE
200 recyclebin on
201 db_securefile PERMITTED
202 create_stored_outlines
203 serial_reuse disable
204 ldap_directory_access NONE
205 ldap_directory_sysauth no
206 os_roles FALSE
207 rdbms_server_dn
208 max_enabled_roles 150
209 remote_os_authent FALSE
210 remote_os_roles FALSE
211 sec_case_sensitive_logon TRUE
212 O7_DICTIONARY_ACCESSIBILITY FALSE
213 remote_login_passwordfile EXCLUSIVE
214 license_max_users 0
215 audit_sys_operations FALSE
216 global_context_pool_size
217 db_domain
218 global_names FALSE
219 distributed_lock_timeout 60
220 commit_point_strength 1
221 global_txn_processes 1
222 instance_name oratest
223 service_names ORATEST
224 dispatchers (PROTOCOL=TCP) (SERVICE=ORATESTXDB)
225 shared_servers 1
226 max_shared_servers
227 max_dispatchers
228 circuits
229 shared_server_sessions
230 local_listener
231 remote_listener
232 listener_networks
233 cursor_space_for_time FALSE
234 session_cached_cursors 50
235 remote_dependencies_mode TIMESTAMP
236 utl_file_dir
237 smtp_out_server
238 plsql_v2_compatibility FALSE
239 plsql_warnings DISABLE:ALL
240 plsql_code_type INTERPRETED
241 plsql_debug FALSE
242 plsql_optimize_level 2
243 plsql_ccflags
244 plscope_settings identifiers:none
245 permit_92_wrap_format TRUE
246 java_jit_enabled TRUE
247 job_queue_processes 1000
248 parallel_min_percent 0
249 create_bitmap_area_size 8388608
250 bitmap_merge_area_size 1048576
251 cursor_sharing EXACT
252 result_cache_mode MANUAL
253 parallel_min_servers 0
254 parallel_max_servers 135
255 parallel_instance_group
256 parallel_execution_message_size 16384
257 hash_area_size 131072
258 result_cache_max_size 72482816
259 result_cache_max_result 5
260 result_cache_remote_expiration 0
261 audit_file_dest C:\ORACLE\ADMIN\ORATEST\ADUMP
262 shadow_core_dump none
263 background_core_dump partial
264 background_dump_dest c:\oracle\diag\rdbms\oratest\oratest\trace
265 user_dump_dest c:\oracle\diag\rdbms\oratest\oratest\trace
266 core_dump_dest c:\oracle\diag\rdbms\oratest\oratest\cdump
267 object_cache_optimal_size 102400
268 object_cache_max_size_percent 10
269 session_max_open_files 10
270 open_links 4
271 open_links_per_instance 4
272 commit_write
273 commit_wait
274 commit_logging
275 optimizer_features_enable 11.2.0.3
276 fixed_date
277 audit_trail DB
278 sort_area_size 65536
279 sort_area_retained_size 0
280 cell_offload_processing TRUE
281 cell_offload_decryption TRUE
282 cell_offload_parameters
283 cell_offload_compaction ADAPTIVE
284 cell_offload_plan_display AUTO
285 db_name ORATEST
286 db_unique_name ORATEST
287 open_cursors 300
288 ifile
289 sql_trace FALSE
290 os_authent_prefix OPS$
291 optimizer_mode ALL_ROWS
292 sql92_security FALSE
293 blank_trimming FALSE
294 star_transformation_enabled TRUE
295 parallel_degree_policy MANUAL
296 parallel_adaptive_multi_user TRUE
297 parallel_threads_per_cpu 2
298 parallel_automatic_tuning FALSE
299 parallel_io_cap_enabled FALSE
300 optimizer_index_cost_adj 100
301 optimizer_index_caching 0
302 query_rewrite_enabled TRUE
303 query_rewrite_integrity enforced
304 pga_aggregate_target 4831838208
305 workarea_size_policy AUTO
306 optimizer_dynamic_sampling 2
307 statistics_level TYPICAL
308 cursor_bind_capture_destination memory+disk
309 skip_unusable_indexes TRUE
310 optimizer_secure_view_merging TRUE
311 ddl_lock_timeout 0
312 deferred_segment_creation TRUE
313 optimizer_use_pending_statistics FALSE
314 optimizer_capture_sql_plan_baselines FALSE
315 optimizer_use_sql_plan_baselines TRUE
316 parallel_min_time_threshold AUTO
317 parallel_degree_limit CPU
318 parallel_force_local FALSE
319 optimizer_use_invisible_indexes FALSE
320 dst_upgrade_insert_conv TRUE
321 parallel_servers_target 128
322 sec_protocol_error_trace_action TRACE
323 sec_protocol_error_further_action CONTINUE
324 sec_max_failed_login_attempts 10
325 sec_return_server_release_banner FALSE
326 enable_ddl_logging FALSE
327 client_result_cache_size 0
328 client_result_cache_lag 3000
329 aq_tm_processes 1
330 hs_autoregister TRUE
331 xml_db_events enable
332 dg_broker_start FALSE
333 dg_broker_config_file1 C:\ORACLE\PRODUCT\11.2.0\DBHOME_1\DATABASE\DR1ORATEST.DAT
334 dg_broker_config_file2 C:\ORACLE\PRODUCT\11.2.0\DBHOME_1\DATABASE\DR2ORATEST.DAT
335 olap_page_pool_size 0
336 asm_diskstring
337 asm_preferred_read_failure_groups
338 asm_diskgroups
339 asm_power_limit 1
340 control_management_pack_access DIAGNOSTIC+TUNING
341 awr_snapshot_time_offset 0
342 sqltune_category DEFAULT
343 diagnostic_dest C:\ORACLE
344 tracefile_identifier
345 max_dump_file_size unlimited
346 trace_enabled TRUE
961262 wrote:
The used tables:
Table 1: 312 col, 12,248 MB, 11,138,561 rows, avg len 621 bytes
Table 2: 159 col, 4288 MB, 5,441,171 rows, avg len 529 bytes
Table 3: 118 col, 360MB, 820,259 rows, avg len 266 bytes
The FTS has improved as expected. With 5 physical drives in a RAID0, a performance of
420MB/s was achieved.
In the write test on the other hand we were not able to achieve any improvement.
The CTAS statement always works with about 5000 - 6000 BLOCK/s (80MB/s)
But when we tried running several CTAS statements in different sessions, the overall speed increased as expected.
Further tests showed that the write speed seems to depend also on the number of columns. 80MB/s were only
possible with Tables 2 and 3. With Table 1, however only 30MB/s were measured.
If multiple CTAS can produce higher throughput on writes, this tells you that it is the production of the data that is the limit, not the writing. Notice in your example that nearly 75% of the time of the CTAS was CPU, not I/O.
The thing about the number of columns is that table 1 has exceeded the critical 254-column limit - this means Oracle has chained all the rows internally into two pieces; this introduces lots of extra CPU-intensive operations (consistent gets, table access by rowid, heap block compress), so the CPU time could have gone up significantly, resulting in a lower throughput that you are interpreting as a write problem.
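A quick sketch of how to check whether that intra-block chaining is really what you are paying for (substitute your own table name; the exact counters vary a little by version, and the second option needs the CHAINED_ROWS table created by utlchain.sql):
-- statistic that climbs when Oracle has to fetch additional row pieces
select n.name, m.value
from v$mystat m join v$statname n on n.statistic# = m.statistic#
where n.name = 'table fetch continued row';
-- or count the chained/multi-piece rows directly
analyze table table_1 list chained rows into chained_rows;
select count(*) from chained_rows;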
One other thought - if you are currently doing CTAS by "create as select from {real SAP table}" there may be other side effects that you're not going to see. I would do "create test clone of real SAP table", then "create as select from clone" to try and eliminate any such anomalies.
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
Author: Oracle Core -
Timing of DAQmx Acquisition Vi and data write
I am acquiring data from a DAQ rack using a 6251 PCI card. After acquiring a data set, I write it to a binary file. Afterwards, I do a small calculation on one channel value and apply a condition to this value, the outcome of which is connected to the while loop surrounding the acquisition. So I (1) grab 100 points, (2) write, calculate, and apply the condition, and (3) start over again. I stuck millisecond timers in between steps 1, 2, and 3 to determine if I was incurring much delay. So, if I have no delay, and I'm sampling at 100 Hz, the differences between the millisecond timing values (stuck into an array) should be
1000
0
0
1000
0
0
1000
0
0
etc.
This is usually the case. However, sometimes I get something like this
900
100
0
900
100
0
or
980
20
0
980
20
0
So if I measure a delay as a result of the writing/calculation/conditioning, this time is subtracted from the acquisition time interval. I don't really understand this at all. Why a delay after the acquisition would speed up the acquisition, I have no idea. My only guess is that I start writing/calculating/conditioning before the acquisition has finished, but why would it do that? I've attached basically the setup. Any help would be great.
Attachments:
Concept.vi 61 KB
...what is the "hw?"
That's short for "hardware." Specifically, in a buffered acquisition the data is sent into your memory buffer at regular times regulated by a hardware clock found in your data acq board. Your software interacts with this memory buffer via 'Read' calls.
...worry about delay?
That'll depend on your app. As you've described it, you Read 1 second worth of data all at once then do some processing and storage. As long as you can do that processing and get back around the loop in less than one second, then the processing is not a bottleneck. The speed is being regulated by the amount of data you request and the time it takes to acquire it.
If this behavior is sufficient for you, then no, you don't have to worry about the delay.
Other apps may need to read smaller amounts of data and make decisions more quickly than once per second. Or they may need to instantly read the most recent data without waiting for any and then quickly make a decision. These sorts of apps might cause you to worry about the processing time.
-Kevin P. -
I'm trying to take spreadsheet data and write it to individual traces inside DSC 2012 to a Citadel 5 database. I keep getting error -1967386570, "Data has back-in-time timestamp."
Searching the NI website, back in 2006 there was a way to do this with VI Server.
http://www.ni.com/white-paper/3485/en
Is this still possible with the current DSC version?
From the 2012 DSC help file.
Writing a Value to a Citadel Trace (DSC Module)
You can use the Write Trace VI to append a data point to a Citadel trace. Complete the following steps to write a value:
Add the Write Trace VI on the block diagram.
Add Find
Wire the trace reference output of the Open Trace VI to the trace reference input of the Write Trace VI.
Wire the value and timestamp inputs of the Write Trace VI. Leave the timestamp input unwired to use the current time. The Write Trace VI fails if the timestamp input is earlier than the timestamp of the last point written to the trace. You can determine the timestamp of the last point in the trace using the Get Trace Info VI.
So, is it no longer possible to write old data into Citadel traces?
I also saw some posts about a registry key for Citadel 5 about server timestamps, but I don't see a registry key where that note says it should be located.
Logging Back-in-Time
Most data logging systems generate ever-increasing time stamps. However, if you manually set the system clock back-in-time, or if an automatic time synchronization service resets the system clock during logging, a back-in-time data point might be logged. Citadel handles this case in two ways.
When a point is logged back-in-time, Citadel checks to see if the difference between the point time stamp and the last time stamp in the trace is less than the larger of the global back-in-time tolerance and the time precision of the subtrace. If the time is within the tolerance, Citadel ignores the difference and logs the point using the last time stamp in the trace. For example, the Shared Variable Engine in LabVIEW 8.0 and later uses a tolerance level of 10 seconds. Thus, if the system clock is set backwards up to ten seconds from the previous time stamp, a value is logged in the database on a data change, but the time stamp is set equal to the previous logged point. If the time is set backwards farther than 10 seconds, Citadel creates a new subtrace and begins logging from that time stamp.
Beginning with LabVIEW DSC 8.0, you can define a global back-in-time tolerance in the system registry. Earlier versions of DSC or Lookout always log back-in-time points. Use the backInTimeToleranceMS key located in the HKLM\SOFTWARE\National Instruments\Citadel\5.0 directory. Specify this value in milliseconds. The default value is 0, which indicates no global tolerance.
This key doesn't exist on my system.
This link from July 2012 seems to mention that it is still possible to use custom timestamps.
http://www.ni.com/white-paper/6579/en
Citadel Writing API
The DSC Module 8.0 and later include an API for writing data directly to a Citadel trace. This API is useful to perform the following operations:
· Implement a data redundancy system for LabVIEW Real-Time targets.
· Record data in a Citadel trace faster than can be achieved with a shared variable.
· Write trace data using custom time stamps.
The Citadel writing API inserts trace data point-by-point with either user-specified or server-generated time stamps.
Is there some more documentation out there that explains this process a bit better?
Hi unclebump,
I have been trying to determine what the best course of action would be and I think you need to move the data to a new trace. What I am thinking is for you to open a reference to the trace as it currently exists. Then you will need to read in all the data of that trace. While you read that trace you should also be reading in the data from your file. Once you have both sets of data you will need to iterate over all the data and merge the two sets of data based off their timestamps. The VIs to accomplish this should all exist in the DSC Palette >> Historical or DSC >> Historical >> Database Writing. There is a writing example in the example finder that is called Database Direct Write Demo that would probably be worth looking at. The write trace help says, "
This VI returns an error if you try to write a point with a timestamp that is earlier than the timestamp of the last point written to the trace." which means that if your data is merged and written in order you should not get this error.
Hope this helps and let me know if you have any questions.
Patrick H | National Instruments | Software Engineer -
The first binary file write operation for a new file takes progressively longer.
I have an application in which I am acquiring analog data from multiple PXI-6031E DAQ boards and then writing that data to FireWire hard disks over an extended time period (14 days). I am using a PXI-8145RT controller, a PXI-8252 FireWire interface board and compatible FireWire hard drive enclosures.
When I start acquiring data to an empty hard disk, creating files on the fly as well as the actual file I/O operations are both very quick. As the number of files on the hard drive increases, it begins to take considerably longer to complete the first write to a new binary file. After the first write, subsequent writes of the same data size to that same file are very fast. It is only the first write operation to a new file that takes progressively longer. To clarify, it currently takes 1 to 2 milliseconds to complete the first binary write of a new file when the hard drive is almost empty. After writing 32 files of 150 MBytes each, the first binary write to file 33 takes about 5 seconds! This behavior is repeatable and continues to get worse as the number of files increases.
I am using the FAT32 file system, required for the Real-Time controller, and 80GB laptop hard drives. The system works flawlessly until asked to create a new file and write the first set of binary data to that file. I am forced to buffer lots of data from the DAQ boards while the system hangs at this point. The requirements for this data acquisition system do not allow for a single data file, so I cannot simply write to one large file. Any help or suggestions as to why I am seeing this behavior would be greatly appreciated.
I am experiencing the same problem. Our program periodically monitors data and eventually saves it for post-processing. While it's searching for suitable data, it creates one file for every channel (32 in total) and starts streaming data to these files. If it finds the data is not suitable, it deletes the files and creates new ones.
In our lab, we tested the program on Windows and then on RT and we did not find any problems.
Unfortunately, when it was time to install the PXI in the field (an electromechanical shovel at a copper mine) and test it, we came to find that saving was taking too long and the program screwed up, specifically when creating files (i.e. the "New File" function). It could take 5 or more seconds to create a single file.
As you can see, field startup failed and we will have to modify our programs to work around this problem and return next week to try again, with the additional time and cost involved. Not to mention the bad image we are giving our customer.
I really like LabVIEW, but I am particularly upset because of this problem. LV RT is supposed to run as if it was LV Win32, with the obvious and expected differences, but a developer cannot expect things like this to happen. I remember a few months ago I had another problem: on RT, the Time/Date function gives a wrong value as your program runs when using timed loops. Can you expect something like that when evaluating your development platform? Fortunately, we found that problem before giving the system to our customer and there was a relatively easy workaround. Unfortunately, now we had to hit the wall to find this one.
On this particular problem I also found that it gets worse when there are more files in the directory. Create a new dir every N hours? I really think that's not a solution. I would not expect this answer from NI.
I would really appreciate someone from NI giving us a technical explanation of why this problem happens and not just "trial and error" "solutions".
By the way, we are using a PXI RT controller with the solid-state drive option.
Thank you.
Daniel R.
Message Edited by Daniel_Chile on 06-29-2006 03:05 PM -
Should OS/FileSystem caching be write-through?
I have a question. I use Ubuntu. Should I mount my filesystem (which holds BDB's content) with the "-o sync" option? That is, should my file system cache be write-through?
I have this question because, if I turn on the logging feature in Berkeley DB but let the file system cache be write-back, I don't exactly know whether the log is properly flushed to the disk or not.
Thanks George. I agree that mature applications would be better off mounting their filesystem with the "-o sync" option.
But here is the thing: I ran an example test case where I inserted 10 million key-value pairs with logging enabled and saw that the average response time per insertion was 10 milliseconds, and I did the same experiment with logging disabled and saw that it too took 10 milliseconds per insertion on average.
For the experiment with logging enabled, I create the environment with DB_INIT_LOG and DB_INIT_TXN flags but don't surround the insertion requests with txn_begin() and txn->commit(). I guess this way of doing insertions is called autocommit. I am hoping I am doing this experiment right.
Thanks for the pointers about set_flags() and DB_TXN_NOSYNC, I am going to look them up. -
Hi,
I'm trying to run the code below with 200 threads using a JMeter simulation (TCP connection). Here's my logic:
- clients connect to a server; the server accepts and creates a new thread
- the thread is supposed to write the data into a file, but the file must be less than some size, in the case below 200 bytes
- when the 200-byte size limit is reached, the thread needs to move that file into another folder and then create a new file for the data to be written
- the writing of the data is fine, but the moving is not (many files aren't being moved)
- I should also mention that I declared fname as a static variable (to be shared by threads)
So would anyone please advise me whether my code below will work with the scenario above, or whether I need to approach the problem differently?
Thanks
BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
while ((data = in.readLine()) != null) {
    socket.setSoTimeout(5000);
    // data should be in the form of this regex
    data = (data.replaceAll("[^0-9A-Za-z,.\\-#: ]", "")).trim();
    String[] result = data.split(",");
    if (result.length == 19) {
        if ((fname.trim()).equals("")) {
            DateFormat dateFormat = new SimpleDateFormat("yyMMddHHmmssSSSS");
            Date date = new Date();
            fname = "log_" + dateFormat.format(date) + "_.txt";
        } else {
            File outFile = new File("temp\\" + fname);
            //System.out.println("outFile.length(): " + outFile.length());
            // check if file is > filesize
            if (outFile.length() > 200) {
                fdata = fname;
                DateFormat dateFormat = new SimpleDateFormat("yyMMddHHmmssSSSS");
                Date date = new Date();
                fname = "log_" + dateFormat.format(date) + "_.txt";
                synchronized (fname) {
                    write(data);
                    move(fdata);
                }
            }
        }
    }
}
Edited by: xpow on May 16, 2009 2:21 AM
xpow wrote:
I think 'SSSS' is fine, because it extends the 'SSS', which is a date placeholder. The files that I try to write to are log files. I am actually having trouble writing them; that's why I need to include the 'SSSS'.
If you want each thread to have its own file, 'SSSS' may not be good enough. Java is extremely fast at creating objects, and you could easily have 10 threads competing to write to the same temp file. As I said above, if you don't want this, add the thread ID to your filename. Remember, just because Java time fields allow milliseconds doesn't mean they provide that accuracy. The clock on my home computer actually ticks over about every 15 ms.
That's indeed one of the problems that I'm facing right now. I thought synchronization would take care of this problem.
Only if all threads share the same object. As far as I can see, you are synchronizing on a filename created within the thread itself (I'm assuming your original fragment is part of the run() method), so the only synchronization you'd get would be from the I/O itself.
Yes, I am aware of this fact too; once the code is decent, it'll be moved to a Unix system.
Even so, make sure you clean up your files after you're done with them. It seems that this setup has the potential to create thousands of files, and even a Unix filesystem has its limits.
My problem is this: there's a TCP server that listens to clients and receives data from them. The data needs to be inserted into the database. But with the volume of clients that connect to the server at the same time, I was thinking it's better to write it to a temp file first (with a filesize limitation), then to a destination folder. There will be another process whose job is to parse the files and move the data into the database.
OK, so I presume each thread is listening to output from a specific client, with a time limit for waiting (again, this isn't my forte, but I notice you have a 5 second timeout on the socket).
A few other problems I see with your code:
1. You've given each thread a limit of 200 bytes; on a decent size disk, the blocksize will be 4K (or even 8), which means that even if you write a file of 200 bytes, it will take up 4K on the disk.
2. You create a new File and FileWriter object every time you write a chunk of data, which creates a lot of work for the garbage collector. Create them only when you need to open a new file and simply use them until you want to close it and move it. To facilitate this, pass Files between your methods, not names. In fact, for the write method, you can pass the FileWriter.
3. The regex you use to filter your data includes "\\-#" which is not a valid range. It may well work, but it's always better to put '-' at the end of a metacharacter if it's not part of a range. Also, is a space (' ') the only valid space character you can receive? If, for example, the data could include tabs, you might be better off using '\s' (in the string you'll need "\\s").
A few other suggestions (I'm assuming that all data read from a particular socket before a timeout comes from a single client):
1. Make your size limit much bigger and a multiple of 1000 bytes (this should allow for any extra characters that may be added by the operating system). I'd suggest 4,000.
2. Split the process of reading and writing into two separate threads. Disk I/O is, almost certainly, by far the slowest part of this process and therefore the most likely to block.
One possibility for (2) is to append your validated data lines to a StringBuffer or StringBuilder and, when your size limit has been reached, copy the contents, pass the copy to a new writer thread, clear your buffer, and continue the process.
The advantage of this is that your reader thread will only ever be blocked on input, and each writer thread will have a chunk of data that it knows it can put in one file (and probably directly into the 'inbox' directory).
It still might not be a bad idea to have the "reader" thread create the filenames (don't forget to include the thread ID) and have it keep a "chunk" counter. The filename then becomes date/time plus reader-thread-ID plus chunk#, which ensures they will always be in sequence for your parser.
Your code might then be something like:
public class ReaderThread implements Runnable {
private static final CHUNK_SIZE = 1000;
private static final DateFormat dateFormat =
new SimpleDateFormat("yyMMddHHmmssSSSS");
private final String timeStamp =
dateFormat.format(new Date());
// Give your buffer enough extra capacity to complete a line.
// (this'll just make it run a bit quicker)
private Stringbuilder data_chunk = new StringBuilder(CHUNK_SIZE + 100);
private int chunk_counter = 0;
public void run() {
// validate your lines as before, and inside your
// 'if (result.length == 19)' block...
data_chunk.append(data);
if (data_chunk.length() >= CHUNK_SIZE)
handoff(data_chunk);
// remove all your filename stuff and the synchronized block
// this is the method that hands off your data "chunk" to the writer thread
private void handoff(StringBuilder chunk) {
StringBuilder chunkCopy = new StringBuilder(chunk);
String outfile = String.format("%s.%d.%7d",
timeStamp, this.getId(), ++chunk_count);
WriterThread w = new WriterThread(chunkCopy, outfile);
new Thread(w).start();
chunk.delete(0, chunk.length());
}This is just a possibility though, and there may be better ways to do it (such as communicating directly with your parser class via a Pipe).
I'll leave it to you to write the WriterThread if you do decide to try it this way.
HIH
Winston -
Write to measurement file VI - every second, is that too fast?
I use the write to measurement file VI to save 5 values + a comment to a file. The VI is in a loop. The VI adds new values to the same file every time it is called. In one of the tutorials it is said "VIs can be more efficient if you avoid opening and closing the same files frequently". I understand that; the write to measurement file VI does open and close the same file every time it is called.
I want the VI called every second to save new data. Tests I did with this rate did show problems. However, what is meant by frequently? Every second? Every millisecond? So my question is: is saving every 1 second, like I want, likely to cause problems, or is that time period hardly a problem? Or is there a better solution (although this VI gives me all the possibilities I want, including saving a comment and saving to multiple files with nice filenames; I would really like to use this VI)?
-- flying dutchman --
Attachments:
write every second.jpg 25 KB
The answer lies halfway between mine and Nukem's answers. Instead of storing all values in a shift register until the end, you simply store them for, say, 1 hour, and append to the file. This also enables you to save some of your data in the event of a power failure or something.
Use a nested for loop, the inner loop will run for say, 50 or 100 iterations, storing values in a shift register array. Upon exiting this loop, the data is appended to your file. The outer for loop will run for how many iterations out need.
For a simple example, let's say you need 10,000 loops and you want to save every 100 iterations. Have the inner loop run 100 times and the outer loop run 100 times. -
Nls_date_format in SQL Developer
Hello all,
fields of data type DATE are shown in the format 'DD.MM.YY'.
There is a possibility to change this using the statement
alter session set nls_date_format='DD.MM.YYYY HH24:MI:SS'
I have to run this every time I establish a new database connection. I'd like to have this representation for all connections.
Is there a way to execute this statement at starting SQL-Developer or at connecting?
Thanks in advance for your answer!
Regards,
Tzonka
There will be a preference for this in 1.1, not yet released. For SQL Developer 1.0 there are various workarounds for this. A number of threads in this forum have already responded to this. Here is one:
Re: Alter session command on start-up
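A common interim workaround is simply to keep the statement in a script you run right after connecting; whether your SQL Developer build can run such a startup script automatically depends on the version, so treat this only as a sketch:
-- login.sql (or whatever script you run on each new connection)
alter session set nls_date_format = 'DD.MM.YYYY HH24:MI:SS';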
Regards
Sue -
Function to convert timestamps into milliseconds
Hi,
Does SAP have a function module which can read in a date/timestamp and output that time in milliseconds (a long integer starting from 1st January 1970)?
Many thanks in advance,
Peter
try:
PARAMETERS:d2 TYPE sy-datum DEFAULT sy-datum,
d1 TYPE sy-datum default '19700101'.
data sec_day TYPE i . "ms per day
DATA : d TYPE i.
DATA result TYPE f.
DATA rp(16) type p decimals 0.
AT SELECTION-SCREEN ON d2.
IF d2 LE d1.
MESSAGE e001(00) WITH 'date must be greater than 1970/01/01'.
ENDIF.
START-OF-SELECTION.
sec_day = 24 * 60 * 60 * 1000.
d = d2 - d1.
rp = result = d * sec_day.
WRITE: / d2, d1, d, / result, rp.
hope that helps
Andreas
Edited by: Andreas Mann on Apr 10, 2008 3:37 PM -
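For comparison, the same days-since-1970 calculation expressed in Oracle SQL (a sketch; like the ABAP version above, it works at date granularity and ignores the time-of-day part and time zones):
select (to_date('20080410','YYYYMMDD') - to_date('19700101','YYYYMMDD')) * 24 * 60 * 60 * 1000 as ms_since_epoch
from dual;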
How to get/store milliseconds (8i and below)
Hi all,
How do I store or show a date-time value including milliseconds in Oracle 8i and below?
Please share your experiences.
I know that in 9i we can do that with TIMESTAMP.
Regards
I don't find anything immediately obvious when searching for milliseconds; most of the questions pertain to timestamps.
In 8i, the easiest way to get millisecond precision would be to write a Java stored procedure that used Java's Timestamp class to get milliseconds. Prior to 8i, you would have to write an external procedure that used something like C to determine the timestamp.
Justin
Distributed Database Consulting, Inc.
http://www.ddbcinc.com/askDDBC -
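Following that suggestion, a minimal sketch of the Java stored procedure route (all object names here are made up for illustration; the millisecond value goes into a separate NUMBER column because DATE cannot hold it):
create or replace and compile java source named "MillisUtil" as
public class MillisUtil {
    public static long currentMillis() {
        return System.currentTimeMillis();
    }
}
/
create or replace function current_millis return number
as language java name 'MillisUtil.currentMillis() return long';
/
insert into my_log (event_date, event_millis) values (sysdate, current_millis);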
Log file sequential read and RFS ping/write - among Top 5 event
I have a situation here to discuss. In a 3-node RAC setup which is a logical standby DB, one node is showing high CPU utilization, around 40-50%. The CPU utilization was less than 20% ten days back, but nine days ago it jumped and has consistently stayed in double figures. I ran AWR reports on all three nodes and found one node with high CPU utilization; it shows the top events below:
EVENT WAITS TIME(S) AVG WAIT(MS) %TOTAL CALL TIME WAIT CLASS
CPU time 5,802 34.9
RFS ping 15 5,118 33,671 30.8 Other
Log file sequential read 234,831 5,036 21 30.3 System I/O
SQL*Net more data from client 24,171 1,087 45 6.5 Network
Db file sequential read 130,939 453 3 2.7 User I/O
Findings:
In the AWR report (attached) for node sipd207, we can see that the "RFS ping" wait event accounts for 30% of the waits and the "log file sequential read" wait event accounts for another 30% of the waits occurring in the database.
Environment: Oracle 10.2.0.4.0, O/S AIX .3
1) Another node's AWR shows "log file sync" - is it due to an oversized log buffer? (See the sketch after question 3.)
2) Can network wait events be reduced by tweaking SDU & TDU values based on the MTU?
3) Why are the ARCH processes taking so long to archive filled redo logs - is it an issue with slow disk I/O?
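Regarding question 1, a sketch for comparing the commit-time waits directly on the node that shows "log file sync" (v$system_event stores times in centiseconds, hence the conversions):
select event, total_waits,
       time_waited / 100 as seconds_waited,
       average_wait * 10 as avg_ms
from v$system_event
where event in ('log file sync', 'log file parallel write', 'RFS ping')
order by time_waited desc;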
Regards
WORKLOAD REPOSITORY report for
DB Name DB Id Instance Inst Num Release RAC Host
XXXPDB 4123595889 XXX2p2 2 10.2.0.4.0 YES sipd207
Snap Id Snap Time Sessions Curs/Sess
Begin Snap: 1053 04-Apr-11 18:00:02 59 7.4
End Snap: 1055 04-Apr-11 20:00:35 56 7.5
Elapsed: 120.55 (mins)
DB Time: 233.08 (mins)
Cache Sizes
~~~~~~~~~~~ Begin End
Buffer Cache: 3,728M 3,728M Std Block Size: 8K
Shared Pool Size: 4,080M 4,080M Log Buffer: 14,332K
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
Redo size: 245,392.33 10,042.66
Logical reads: 9,080.80 371.63
Block changes: 1,518.12 62.13
Physical reads: 7.50 0.31
Physical writes: 44.00 1.80
User calls: 36.44 1.49
Parses: 25.84 1.06
Hard parses: 0.59 0.02
Sorts: 12.06 0.49
Logons: 0.05 0.00
Executes: 295.91 12.11
Transactions: 24.43
% Blocks changed per Read: 16.72 Recursive Call %: 94.18
Rollback per transaction %: 4.15 Rows per Sort: 53.31
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 99.99 Redo NoWait %: 100.00
Buffer Hit %: 99.92 In-memory Sort %: 100.00
Library Hit %: 99.83 Soft Parse %: 97.71
Execute to Parse %: 91.27 Latch Hit %: 99.79
Parse CPU to Parse Elapsd %: 15.69 % Non-Parse CPU: 99.95
Shared Pool Statistics Begin End
Memory Usage %: 83.60 84.67
% SQL with executions>1: 97.49 97.19
% Memory for SQL w/exec>1: 97.10 96.67
Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time Wait Class
CPU time 4,503 32.2
RFS ping 168 4,275 25449 30.6 Other
log file sequential read 183,537 4,173 23 29.8 System I/O
SQL*Net more data from client 21,371 1,009 47 7.2 Network
RFS write 25,438 343 13 2.5 System I/O
RAC Statistics DB/Inst: UDAS2PDB/udas2p2 Snaps: 1053-1055
Begin End
Number of Instances: 3 3
Global Cache Load Profile
~~~~~~~~~~~~~~~~~~~~~~~~~ Per Second Per Transaction
Global Cache blocks received: 0.78 0.03
Global Cache blocks served: 1.18 0.05
GCS/GES messages received: 131.69 5.39
GCS/GES messages sent: 139.26 5.70
DBWR Fusion writes: 0.06 0.00
Estd Interconnect traffic (KB) 68.60
Global Cache Efficiency Percentages (Target local+remote 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer access - local cache %: 99.91
Buffer access - remote cache %: 0.01
Buffer access - disk %: 0.08
Global Cache and Enqueue Services - Workload Characteristics
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Avg global enqueue get time (ms): 0.5
Avg global cache cr block receive time (ms): 0.9
Avg global cache current block receive time (ms): 1.0
Avg global cache cr block build time (ms): 0.0
Avg global cache cr block send time (ms): 0.1
Global cache log flushes for cr blocks served %: 2.9
Avg global cache cr block flush time (ms): 4.6
Avg global cache current block pin time (ms): 0.0
Avg global cache current block send time (ms): 0.1
Global cache log flushes for current blocks served %: 0.1
Avg global cache current block flush time (ms): 5.0
Global Cache and Enqueue Services - Messaging Statistics
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Avg message sent queue time (ms): 0.1
Avg message sent queue time on ksxp (ms): 0.6
Avg message received queue time (ms): 0.0
Avg GCS message process time (ms): 0.0
Avg GES message process time (ms): 0.1
% of direct sent messages: 31.57
% of indirect sent messages: 5.17
% of flow controlled messages: 63.26
Time Model Statistics DB/Inst: UDAS2PDB/udas2p2 Snaps: 1053-1055
-> Total time in database user-calls (DB Time): 13984.6s
-> Statistics including the word "background" measure background process
time, and so do not contribute to the DB time statistic
-> Ordered by % or DB time desc, Statistic name
Statistic Name Time (s) % of DB Time
sql execute elapsed time 7,270.6 52.0
DB CPU 4,503.1 32.2
parse time elapsed 506.7 3.6
hard parse elapsed time 497.8 3.6
sequence load elapsed time 152.4 1.1
failed parse elapsed time 19.5 .1
repeated bind elapsed time 3.4 .0
PL/SQL execution elapsed time 0.7 .0
hard parse (sharing criteria) elapsed time 0.3 .0
connection management call elapsed time 0.3 .0
hard parse (bind mismatch) elapsed time 0.0 .0
DB time 13,984.6 N/A
background elapsed time 869.1 N/A
background cpu time 276.6 N/A
Wait Class DB/Inst: UDAS2PDB/udas2p2 Snaps: 1053-1055
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc
Avg
%Time Total Wait wait Waits
Wait Class Waits -outs Time (s) (ms) /txn
System I/O 529,934 .0 4,980 9 3.0
Other 582,349 37.4 4,611 8 3.3
Network 279,858 .0 1,009 4 1.6
User I/O 54,899 .0 317 6 0.3
Concurrency 136,907 .1 58 0 0.8
Cluster 60,300 .0 41 1 0.3
Commit 80 .0 10 130 0.0
Application 6,707 .0 3 0 0.0
Configuration 17,528 98.5 1 0 0.1
Wait Events DB/Inst: UDAS2PDB/udas2p2 Snaps: 1053-1055
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)
Avg
%Time Total Wait wait Waits
Event Waits -outs Time (s) (ms) /txn
RFS ping 168 .0 4,275 25449 0.0
log file sequential read 183,537 .0 4,173 23 1.0
SQL*Net more data from clien 21,371 .0 1,009 47 0.1
RFS write 25,438 .0 343 13 0.1
db file sequential read 54,680 .0 316 6 0.3
DFS lock handle 97,149 .0 214 2 0.5
log file parallel write 104,808 .0 157 2 0.6
db file parallel write 143,905 .0 149 1 0.8
RFS random i/o 25,438 .0 86 3 0.1
RFS dispatch 25,610 .0 56 2 0.1
control file sequential read 39,309 .0 55 1 0.2
row cache lock 130,665 .0 47 0 0.7
gc current grant 2-way 35,498 .0 23 1 0.2
wait for scn ack 50,872 .0 20 0 0.3
enq: WL - contention 6,156 .0 14 2 0.0
gc cr grant 2-way 16,917 .0 11 1 0.1
log file sync 80 .0 10 130 0.0
Log archive I/O 3,986 .0 9 2 0.0
control file parallel write 3,493 .0 8 2 0.0
latch free 2,356 .0 6 2 0.0
ksxr poll remote instances 278,473 49.4 6 0 1.6
enq: XR - database force log 2,890 .0 4 1 0.0
enq: TX - index contention 325 .0 3 11 0.0
buffer busy waits 4,371 .0 3 1 0.0
gc current block 2-way 3,002 .0 3 1 0.0
LGWR wait for redo copy 9,601 .2 2 0 0.1
SQL*Net break/reset to clien 6,438 .0 2 0 0.0
latch: ges resource hash lis 23,223 .0 2 0 0.1
enq: WF - contention 32 6.3 2 62 0.0
enq: FB - contention 660 .0 2 2 0.0
enq: PS - contention 1,088 .0 2 1 0.0
library cache lock 869 .0 1 2 0.0
enq: CF - contention 671 .1 1 2 0.0
gc current grant busy 1,488 .0 1 1 0.0
gc current multi block reque 1,072 .0 1 1 0.0
reliable message 618 .0 1 2 0.0
CGS wait for IPC msg 62,402 100.0 1 0 0.4
gc current block 3-way 998 .0 1 1 0.0
name-service call wait 18 .0 1 57 0.0
cursor: pin S wait on X 78 100.0 1 11 0.0
os thread startup 16 .0 1 53 0.0
enq: RO - fast object reuse 193 .0 1 3 0.0
IPC send completion sync 652 99.2 1 1 0.0
local write wait 194 .0 1 3 0.0
gc cr block 2-way 534 .0 0 1 0.0
log file switch completion 17 .0 0 20 0.0
SQL*Net message to client 258,483 .0 0 0 1.5
undo segment extension 17,282 99.9 0 0 0.1
gc cr block 3-way 286 .7 0 1 0.0
enq: TM - contention 76 .0 0 4 0.0
PX Deq: reap credit 15,246 95.6 0 0 0.1
kksfbc child completion 5 100.0 0 49 0.0
enq: TT - contention 141 .0 0 2 0.0
enq: HW - contention 203 .0 0 1 0.0
RFS create 2 .0 0 115 0.0
rdbms ipc reply 339 .0 0 1 0.0
PX Deq Credit: send blkd 452 20.1 0 0 0.0
gcs log flush sync 128 32.8 0 2 0.0
latch: cache buffers chains 128 .0 0 1 0.0
library cache pin 441 .0 0 0 0.0
Wait Events DB/Inst: UDAS2PDB/udas2p2 Snaps: 1053-1055
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)
We only apply on one node in a cluster, so I would expect that the node running SQL Apply would have much higher usage and waits. Is this what you are asking?
Larry -
How does VISA serial write work?
I am using LabVIEW 6.0 and the VISA functions to run a distributed sensor network using RS-485 from an NT 4.0 machine.
In order to work properly I have to manually control the RTS line which isn't a problem using the property node.
Here's what I want to know.
I'm using a sequence structure:
I'm writing a command using VISA serial write
Then in the next frame I unassert the RTS.
Now, in this setup, does LabVIEW pass the command to the serial port and then jump to the next frame in the sequence, or is there some kind of acknowledgment from the serial port that it's done, so the Write VI knows to finish? Am I running the risk of cutting off my communication before I've finished, should I put in a short wait after the write to the port, or could this cause me to miss the reply?
Robert:
With the default settings, VISA Write will not return until the data has been at least posted into the hardware FIFO. To be 100% sure that the data has been transmitted over the wire, you should do these 2 things:
1) Call VISA Flush I/O Buffer with a mask of 32. This will ensure the data has been at least posted into the hardware FIFO, regardless of which settings are being used.
2) Wait a tiny amount of time for the FIFO to be emptied. This is based on the number of bytes in the FIFO and the baud rate. For example, with a 64 byte UART FIFO and 9600 baud, you should wait 67 milliseconds.
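For reference, that 67 millisecond figure is just the framing arithmetic, assuming the usual 10 bits on the wire per byte (start bit + 8 data bits + stop bit): 64 bytes x 10 bits/byte = 640 bits, and 640 bits / 9600 bits per second is roughly 0.067 s, i.e. about 67 milliseconds.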
Dan Mondrik
Senior Software Engineer, NI-VISA
National Instruments -
In AIR 3.x, a socket write() + flush() on a client will hang (and freeze the entire app) if the socket peer advertises a ZERO TCP Window, i.e. no space available in peer receiver's socket buffer.
AIR on Windows insists that the developer flush() the socket in order to write (any data at all). When the peer (receiver) advertises a 0 byte tcp window, the client flush() call can sometimes take 10 seconds (i.e. 10000 milliseconds), whereas normally it should take 10 to 50 milliseconds.
Additionally, AIR stayed hung. Since the socket had the TCP KEEPALIVE option enabled (at the server), the socket stayed open. I let it stay hung overnight. The next day, when I rebooted the server, the socket got closed and the AIR program finally returned from the sock.flush() call. A timestamp before and after the call to flush() showed that the flush() call was hung for 56472475 milliseconds, i.e. 15.7 hours! After it returned from the flush() call, the AIR app was responsive and seemed to work properly, proving that it was indeed stuck (or blocked) on the flush() call for 15 hours trying to drain the data to a socket with a zero TCP window condition.
A tcp zero window condition on 1 socket hanging an entire app sounds concerning.
Solution Suggestions:
(1) What is needed is for the OutputProgress event to include a field for 'bytes that can be written safely without blocking on the socket', i.e. 'space available in underlying platform socket buffer' which would account for the socket send buffer size.
(2) An alternative solution would be for AIR to provide a write-timeout setsockopt (or a writeTimeout() method on the socket), and return flush with an error (or EWOULDBLOCK), and call the OUTPUTPROG event only when there is enough space available.
If there are any other workarounds, please let me know.
Thank you.
Question: Does Adobe AIR expose the getsockopt() and setsockopt() calls on a socket? It would be useful for apps to tune the I/O buffer sizes.
Additional Details:
RTT = 100+ milliseconds
TCP Window Scaling enabled
Secure Socket
Hard to reproduce with a plain TCP socket, but it occurs without fail with SecureSocket. Not knowing the underlying code base, I am wondering if it is because the SSL encryption overhead (bytes) throws off the available-buffer-size computation in secure_sock.flush().
Thanks.