What is the best practice dealing with process.getErrorStream()
I've been playing around creating Process objects with ProcessBuilder. I can use getErrorStream() and getInputStream() to read the error and standard output from the process, but it seems I have to do this on another thread. If I simply call process.waitFor() and then try to read the streams, that doesn't work. So I do something like this:

final InputStream errorStream = process.getErrorStream();
final StringWriter errWriter = new StringWriter();
ExecutorService executorService = Executors.newCachedThreadPool();
executorService.execute(new Runnable() {
    public void run() {
        try {
            IOUtils.copy(errorStream, errWriter, "UTF-8");
        } catch (IOException e) {
            getLog().error(e.getMessage(), e);
        }
    }
});
int exitValue = process.waitFor();
getLog().info("exitValue = " + exitValue);
getLog().info("errString =\n" + errWriter);

This works, but it seems rather inelegant somehow.
The basic problem is that the Runnable never completes on its own. Through experimentation, I believe that when the process is actually done, errorStream is never closed, or never receives an end-of-file. My current code works because when it goes to read errWriter it just reads whatever is currently in the buffer. However, if I wanted to clean things up and use executorService.submit() to submit a Callable and get back a Future, a lot more code would be needed, because the call to IOUtils.copy(errorStream, errWriter, "UTF-8") never terminates.
Am I misunderstanding something, or is process.getErrorStream() just a crappy API?
What do other people do when they want to get the error and output results from running a process?
Edited by: Eric Kolotyluk on Aug 16, 2012 5:26 PM
OK, I found a better solution:

Future<String> errString = executorService.submit(new Callable<String>() {
    public String call() throws Exception {
        StringWriter errWriter = new StringWriter();
        IOUtil.copy(process.getErrorStream(), errWriter, "UTF-8");
        return errWriter.toString();
    }
});
int exitValue = process.waitFor();
getLog().info("exitValue = " + exitValue);
try {
    getLog().info("errString =\n" + errString.get());
} catch (ExecutionException e) {
    throw new MojoExecutionException("proxygen: ExecutionException");
}

The problem I was having before seemed to be that the call to Apache's IOUtil.copy(errorStream, errWriter, "UTF-8") was not working right; it did not seem to be terminating on end-of-stream. But now it seems to be working fine, so I must have been chasing some other problem (or non-problem).
So, it does seem the best thing to do is read the error and output streams from the process on their own daemon threads, and then call process.waitFor(). The ExecutorService API makes this easy, and using a Callable to return a future value does the right thing. Also, Callable is a little nicer as the call method can throw an Exception, so my code does not need to worry about that (and the readability is better).
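To illustrate the pattern, here is a self-contained sketch using only JDK classes (no Apache IOUtils); the getLog() calls are replaced by System.out, and the shell command is just a stand-in for whatever process you actually launch:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ProcessStreams {
    // Drain an InputStream fully on a background thread, returning its contents as a Future<String>.
    static Future<String> readAsync(ExecutorService pool, final InputStream in) {
        return pool.submit(new Callable<String>() {
            public String call() throws IOException {
                ByteArrayOutputStream buf = new ByteArrayOutputStream();
                byte[] chunk = new byte[4096];
                int n;
                while ((n = in.read(chunk)) != -1) {
                    buf.write(chunk, 0, n);
                }
                return buf.toString(StandardCharsets.UTF_8.name());
            }
        });
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newCachedThreadPool();
        // Demo process that writes to both stdout and stderr (assumes a POSIX shell).
        Process process = new ProcessBuilder("sh", "-c", "echo out; echo err 1>&2").start();
        // Start draining both streams BEFORE waitFor(), so the child can never block on a full pipe.
        Future<String> out = readAsync(pool, process.getInputStream());
        Future<String> err = readAsync(pool, process.getErrorStream());
        int exitValue = process.waitFor();
        System.out.println("exitValue = " + exitValue);
        System.out.println("stdout = " + out.get().trim());
        System.out.println("stderr = " + err.get().trim());
        pool.shutdown();
    }
}
```

Because both futures are reading concurrently, waitFor() cannot deadlock on a full pipe buffer, and Future.get() only returns once the stream has hit end-of-file.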
Thanks for helping to clarify my thoughts and finding a good solution :-)
Now, it would be really nice if the Process API had a method like process.getFutureErrorString() which does what my code does.
Cheers, Eric
Similar Messages
-
What is the best practice to display info of completed task in process flow
Hi all,
I'm starting to study BPM modeling with CE 7.1 EHP1. Thanks to the tutorials and examples on the SDN site, I can easily build my own process in NWDS, deploy it to the server, start it, and finish it.
I like the new runtime, which can show a BPMN diagram to the processors. However, I can't find a way to let the follow-up processor review the task result completed in the previous step. I'm more familiar with Guided Procedures, where there is a "Display Callable Object" which can be used to show info about a completed task when the processor/owner/admin/overseer clicks on it. Where is this feature in BPM? What is the best practice for showing such task information in the BPM environment?
For example, in a multi-level approval process, the higher-level approver needs to know the comment written by the previous approver. Can he read this information from the process flow?
I think this is a very important feature for a BPM platform. In Guided Procedures, such a requirement can be met with a Display Callable Object + View Permission, plus some coding for the UI. If BPM is superior to GP, I think there must be a way to achieve this; I just do not know how.
Can anyone shed some light on it?
Oliver,
Thanks for your quick reply.
Yes, Notes and Attachments CAN BE USED for this purpose. But I'm still looking for a more elegant solution.
With the Notes/Attachments solution, the processor needs to provide input in two places, the task UI and the Note/Attachment, with similar or identical data. That is really annoying.
Are there any real-world SAP BPM deployments? Does no customer have this requirement? -
What is the BEST practice - use BO or Java Object in process as webservice
Hi All,
I have my BP published as a web service. I have defined my process input & output as BOs. My BP talks to the DB through a DAO layer (written in Java) which has Java objects. So I have BOs as well as Java objects. Since I am collecting user input in a BO, I have to assign the individual values contained in the BO to the Java object's fields.
I want to eliminate this extra step and use either the BO or the Java object. What is the best practice: using a BO or a Java object as the process input? If it is a BO, how can I reuse BOs in Java?
Thanks in advance.
Thanks,
Sujata P. Galinde
Hi Mark,
Thanks for your response. I also wanted to use a Java object only. When I use a Java object as the process input argument, it is fine. But when I try to create the process web service, I get a compilation error: "data type not supported".
To get rid of this error, I tried using inheritance (a BO inheriting from the Java class). But when invoking the process as a web service, it does not ask for the fields inherited from the Java class.
Then I created a Business Object with a field of the Java class type. This also does not work: when sending the request, it gives an error that the field types for the fields from the Java class were not found.
Conclusion: I am not able to use a Java object as the input argument of a process exposed as a web service.
What is the best and most feasible way to accomplish the task: a process using a DAO in Java, exposed as a web service?
Thanks & Regards,
Sujata -
What is the best practice for using the Calendar control with the Dispatcher?
It seems as if the Dispatcher is restricting access to the Query Builder (/bin/querybuilder.json) as a best practice regarding security. However, the Calendar relies on this endpoint to build the events for the calendar. On Author / Publish this works fine but once we place the Dispatcher in front, the Calendar no longer works. We've noticed the same behavior on the Geometrixx site.
What is the best practice for using the Calendar control with Dispatcher?
Thanks in advance.
Scott
Not sure what exactly you are asking, but Muse handles the different orientations nicely without having to do anything.
Example: http://www.cariboowoodshop.com/wood-shop.html -
What is the best practice for creating master pages and styles with translated text?
I format translated text all the time for my company. I want to create a set of master pages and styles for each language and then import those styles into future translated documents. That way, the formatting can be done quickly and easily.
What are the best practices for doing this? As a company this has been tried in the past, but without success. I'd like to know what other people are doing in this regard.
Thank you!
I create a master template that is usually void of content, except that I define as many of the paragraph styles as I believe can/will be used, with examples of their use in the body of the document, as a style guide for that client. When beginning a new document for that client, I import those styles from the paragraph styles panel.
The exception is when, in a rush, I begin documentation before setting up the template. In that case I still pull the defined paragraph and/or object styles into the new work via their panels.
There are times I need new styles. If they have broader applicability than a one-off instance or publication, I open the style template for that client, import the new style(s) from the publication containing them, and create example paragraphs and usage instructions.
Take care, Mike -
What is the best practice for changing view states?
I have a component with two Pie Charts that display percentages at two specific dates (think start and end values). But I have three views: Start Value only, End Value only, or show Both. I am using a ToggleButtonBar to control the display. What is the best practice for changing this kind of view state?
Right now (since this code was inherited), the view states are changed in an ActionScript function which sets the visible and includeInLayout properties on each Pie Chart based on the selectedIndex of the ToggleButtonBar. But this just doesn't seem like the best way to do it; it's not very dynamic. I'd like to be able to change the state based on the name of the selectedItem, in case the order of the ToggleButtons changes, and since I am storing the name of the selectedItem for future reference.
Would using States be better? If so, what would be the best way to implement this?
Thanks.
I would stick with non-states, as I have always heard that states are more for smaller components that need to change under certain conditions, like a login screen that changes if the user needs to register.
That said, if the UI of what you are dealing with is not overly complex, and if it will not become overly complex, maybe states are the way to go.
Looking at your code, I don't think you'll save much in terms of lines of code.
What is the Best practice for ceramic industry?
Dear All;
I would like to ask two questions:
1- Which manufacturing category (process or discrete) fits the ceramic industry?
2- What is the best practice for the ceramic industry?
Please note, from the link below:
[https://websmp103.sap-ag.de/~form/sapnet?_FRAME=CONTAINER&_OBJECT=011000358700000409682008E ]
I recognized that the ceramic industry falls under a category called building materials, which in turn is under mill products and mining, but there are no best practices for building materials or even mill products; only fabricated metal and mining best practices are available.
Thanks in advance
Hi,
I understand that you refer to the production of ceramic tiles. The solution for PP was process manufacturing, with these steps: raw materials preparation (glazes and frits), dry pressing (I don't know the extrusion process), glazing, firing (single fire), sorting and packing. In Spain these are usually all-in-one solutions (R/3 or ECC). Perhaps the production of decors involves fast firing and additional processes.
In my opinion, the interesting part is batch determination in SD, which you must do in the sales order, because builders want the order to be homogeneous in tone and caliber, and they may split the order across different deliveries. You must think of the batch in terms of tone (different colours in firing and so on) and caliber.
I hope this helps you
Regards,
Eduardo -
What is the best practice for voicemail migration?
Hello Tech Gurus,
I am looking into a way to migrate a customer's voicemail, which is currently on an NME-CUE module. They want to migrate their voicemail configuration, licenses and related data (to an SRE module), and I would like to know the best practice or guidelines I can refer to.
Thank you very much!
Regards,
Alex.
Hi Alex,
I was looking at the doc, which says:
Cisco supports transfer of CUE licenses, with some restrictions. Transfer is supported for CUE devices that are of the same type, for an RMA or in cases in which a license was wrongly installed. This process is not intended for transferring licenses from one generation to another (for example, from NM-CUE to NME-CUE, or from NME-CUE to SRE devices). Transferring a license is accomplished using a process called rehosting. The rehosting process transfers a license from one UDI to another by revoking the license from the source device and installing in a new device
http://www.cisco.com/c/en/us/td/docs/voice_ip_comm/unity_exp/rel7_1/Licensing/CUELicensing_book/csa_overview_CUE.html#wp1101175
You can still speak to the licensing team, providing the show license udi output from the SRE module along with the old license details from the NME-CUE, for rehosting.
regds,
aman -
What are the best practices to migrate VPN users for Inter forest mgration?
It depends on various factors. There is no generic solution or best-practice recommendation. Which migration tool are you planning to use?
Quest (QMM) has a VPN migration solution/tool.
With ADMT, you can develop your own service-based solution if required. I believe it was mentioned in my blog post.
Santhosh Sivarajan | Houston, TX | www.sivarajan.com
ITIL,MCITP,MCTS,MCSE (W2K3/W2K/NT4),MCSA(W2K3/W2K/MSG),Network+,CCNA
Windows Server 2012 Book - Migrating from 2008 to Windows Server 2012
What is the best practice in securing deployed source files
hi guys,
Just yesterday, I developed a simple image cropper using AJAX and Flash. After compiling the package, I noticed the package/installer delivers the exact same source files as developed to the installation folder.
This didn't concern me much at first, but come to think of it, this question keeps coming to mind:
"What is the best practice in securing deployed source files?"
How do we secure an application's installed source files from being tampered with, especially after installation? E.g. modifying the spraydata.js files can be done easily with an editor.
Hi,
You could compute a SHA or MD5 hash of your source files on first run and save these hashes to the EncryptedLocalStore.
On startup, recompute and verify. (This, of course, fails to address the case where the main app's swf / swc / html itself is decompiled.) -
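The hash-and-verify idea above is not specific to AIR. As a general sketch of the technique (in Java rather than ActionScript, with the file path supplied as an argument and the secure storage of the expected hash left out as an exercise):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class IntegrityCheck {
    // Hex-encoded SHA-256 digest of a file's bytes.
    static String sha256(Path file) throws IOException, NoSuchAlgorithmException {
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(Files.readAllBytes(file));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        if (args.length == 0) {
            System.out.println("usage: IntegrityCheck <file> [expectedSha256Hex]");
            return;
        }
        Path file = Paths.get(args[0]);
        String actual = sha256(file);
        if (args.length == 1) {
            // First run: print the hash so it can be recorded in secure storage.
            System.out.println(actual);
        } else if (!args[1].equals(actual)) {
            // Startup run: the file no longer matches the recorded hash.
            throw new IllegalStateException("source file tampered: " + file);
        }
    }
}
```

As the poster notes, this only detects tampering; if the verifier itself can be decompiled and patched, the check can be removed, so it raises the bar rather than providing real security.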
My final data table has a unique key constraint over two key columns. I insert data into this table from a daily capture table (which also contains the two columns that make up the key in the final data table, but they are not constrained (not unique) in the daily capture table). I don't want to insert rows from daily capture which already exist in the final data table (based on the two key columns). Currently, what I do is select * into a #temp table from the join of the daily capture and final data tables on these two key columns. Then I delete the rows in the daily capture table which match the #temp table. Then I insert the remaining rows from daily capture into the final data table.
Would it be possible to simplify this process by using an INSTEAD OF trigger on the final table and just inserting directly from the daily capture table? How would this look?
What is the best practice for inserting unique (new) rows and ignoring duplicate rows (rows that already exist in both the daily capture and final data tables) in my particular operation?
Rich P
Please follow basic netiquette and post the DDL we need to answer this. Follow industry and ANSI/ISO standards in your data. You should follow ISO-11179 rules for naming data elements. You should follow ISO-8601 rules for displaying temporal data. We need to know the data types, keys and constraints on the table. Avoid dialect in favor of ANSI/ISO Standard SQL. And you need to read and download the PDF from:
https://www.simple-talk.com/books/sql-books/119-sql-code-smells/
>> My final data table contains a two key columns unique key constraint. [unh? one two-column key or two one-column keys? Sure wish you posted DDL] I insert data into this table from a daily capture table (which also contains the two columns that make up the key in the final data table but are not constrained (not unique) in the daily capture table). <<
Then the "capture table" is not a table at all! Remember the first day of your RDBMS class? A table has to have a key. You need to fix this error. What ETL tool do you use?
>> I don't want to insert rows from daily capture which already exist in the final data table (based on the two key columns). <<
MERGE statement; Google it. And do not use temp tables.
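For what it's worth, the MERGE suggestion might look something like the sketch below; the table and column names (FinalData, DailyCapture, Key1, Key2, Val) are placeholders, since no DDL was posted:

```sql
-- Insert only those daily-capture rows whose two-column key is not
-- already present in the final table; duplicates are simply skipped.
MERGE INTO FinalData AS tgt
USING (SELECT DISTINCT Key1, Key2, Val FROM DailyCapture) AS src
    ON  tgt.Key1 = src.Key1
    AND tgt.Key2 = src.Key2
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Key1, Key2, Val)
    VALUES (src.Key1, src.Key2, src.Val);
```

The DISTINCT in the source query guards against the capture table containing internal duplicates, which would otherwise make the MERGE fail.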
--CELKO-- Books in Celko Series for Morgan-Kaufmann Publishing: Analytics and OLAP in SQL / Data and Databases: Concepts in Practice / Measurements and Standards in SQL / SQL for Smarties / SQL Programming Style / SQL Puzzles and Answers / Thinking in Sets / Trees and Hierarchies in SQL -
What is the best practice for full browser video to achieve the highest quality?
I'd like to get your thoughts on the best way to deliver full-browser (scale to the size of the browser window) video. I'm skilled in the creation of the content but still learning to make the most of Flash CS5, and would love to hear what you would suggest.
Most of the tutorials I can find on full-browser/scalable video are for earlier versions of Flash; what is the best practice today? What is the best resolution/format for the video?
If there is an Adobe guide to this, I'm happy to eat humble pie if someone can redirect me to it; I'm using CS5 Production Premium.
I like the full-screen video effect they have on the "Sounds of Pertussis" web site; this is exactly what I'm trying to create, but I'm not sure of the best way to approach it. Any hints/tips you can offer would be great!
Thanks in advance!
Use the little squares over your video to mask the quality. Sounds of Pertussis is not full-screen video, but rather full-stage, which is easier to work with since all the controls and other assets stay on screen. You set up your html file to allow full screen, then bring in your video (NetStream or the FLVPlayback component) and scale it to the full size of your stage (since in this case it's basically the background). I made a quick demo here. (The video is from a cheap SD consumer camera, so pretty poor quality to start.)
In AS3 it would look something like this:

import flash.display.StageAlign;
import flash.display.StageDisplayState;
import flash.display.StageScaleMode;
import flash.events.Event;
import flash.events.MouseEvent;
import flash.media.Video;
import flash.net.NetConnection;
import flash.net.NetStream;

stage.align = StageAlign.TOP_LEFT;
stage.scaleMode = StageScaleMode.NO_SCALE;

// load video
var nc:NetConnection = new NetConnection();
nc.connect(null);
var ns:NetStream = new NetStream(nc);
var vid:Video = new Video(656, 480); // size of video
this.addChildAt(vid, 0);
vid.attachNetStream(ns);
// path to your video file
ns.play("content/GS.f4v");
var netClient:Object = new Object();
ns.client = netClient;

// add listener for resizing of the stage so we can scale our assets
stage.addEventListener(Event.RESIZE, resizeHandler);
stage.dispatchEvent(new Event(Event.RESIZE));

function resizeHandler(e:Event = null):void
{
    // determine current stage size
    var sw:int = stage.stageWidth;
    var sh:int = stage.stageHeight;
    // scale video size depending on stage size
    vid.width = sw;
    vid.height = sh;
    // don't scale video smaller than a certain size
    if (vid.height < 480)
        vid.height = 480;
    if (vid.width < 656)
        vid.width = 656;
    // pick the larger scale property (x or y) and match the other to it so the size stays proportional
    (vid.scaleX > vid.scaleY) ? vid.scaleY = vid.scaleX : vid.scaleX = vid.scaleY;
}

// add event listener for the full screen button
fullScreenStage_mc.buttonMode = true;
fullScreenStage_mc.mouseChildren = false;
fullScreenStage_mc.addEventListener(MouseEvent.CLICK, goFullStage, false, 0, true);

function goFullStage(event:MouseEvent):void
{
    //vid.fullScreenTakeOver = false; // keeps the FLVPlayback component from going full screen if you use it instead
    if (stage.displayState == StageDisplayState.NORMAL)
        stage.displayState = StageDisplayState.FULL_SCREEN;
    else
        stage.displayState = StageDisplayState.NORMAL;
} -
Database Log File becomes very big, What's the best practice to handle it?
The log of my production database is getting very big and the hard disk is almost full. I am pretty new to SAP but familiar with SQL Server; can anybody give me advice on the best practice to handle this issue?
Should I shrink the database?
I know a bigger hard disk is needed for the long term.
Thanks in advance.
Hi Finke,
Usually the log file fills up and grows huge due to not having regular transaction log backups. If your database is in FULL recovery mode, every transaction is logged in the transaction log file, and the log is cleared when you take a log backup. If it is a production system and you don't have regular transaction log backups, the problem is just sitting there waiting to explode when you need a point-in-time restore. Please check your backup/restore strategy.
Follow these steps to get the transaction log file back into normal shape:
1.) Take a transaction log backup.
2.) Shrink the log file: DBCC SHRINKFILE('logfilename', 10240)
The above command shrinks the file to 10 GB (a recommended size for high-transaction systems).
>
Finke Xie wrote:
> Should I Shrink the Database? .
"NEVER SHRINK DATA FILES"; shrink only the log file.
3.) Schedule log backups every 15 minutes.
Thanks
Mush -
What is the best practice for creating primary key on fact table?
what is the best practice for primary key on fact table?
1. Using composite key
2. Create a surrogate key
3. No primary key
In the documentation, I can only find: "From a modeling standpoint, the primary key of the fact table is usually a composite key that is made up of all of its foreign keys."
http://download.oracle.com/docs/cd/E11882_01/server.112/e16579/logical.htm#i1006423
I also found a relevant thread states that primary key on fact table is necessary.
Primary Key on Fact Table.
But if no business rule requires the uniqueness of the records and there is no materialized view, do we still need a primary key? Are there any other bad effects of having no primary key on the fact table? And any benefits from not creating one?
Well, the natural combination of the dimensions connected to the fact would be a natural primary key, and it would be composite.
Having an artificial PK might simplify things a bit.
Having no PK leads to a major mess. A fact should represent a business transaction, or some general event. If you're loading data, you want to be able to identify the records being processed. Also, without a PK, if you forget to create a unique key, access to this fact table will be slow. Plus, having no PK means that if you want to use different tools, like Data Modeller in JBuilder or OWB insert/update functionality, they won't function, since there's no PK. Defining a PK for every table is good practice. Not defining a PK is asking for a load of problems, from performance to functionality and data quality.
Edited by: Cortanamo on 16.12.2010 07:12 -
What are the best practices to create surrounding borders?
Good day everyone,
I was wondering what the best practices are to create a look in my iOS app like the one below. How are they accomplishing the creation of the borders? Is there a tool in Xcode IB to do that?
Thank you in advance
Once again, thanks for your input; however, I am still not clear how you accomplished the rounded corners, as you do not mention that in your reply.
I did some research on my end and I was able to accomplish what I want with a UIView using the code below in an outlet:
redView.layer.cornerRadius = 10;
redView.layer.borderColor = [UIColor greenColor].CGColor;
redView.layer.borderWidth = 5;
However, I cannot do the same for the UITableView or UITableView cell.
Thanks