Reducing polycount of 3D models?

Hey guys! I'm using CS6 (x64, Windows), and I'm extruding vectors by a few points to make 3D models of decorative walls for our modelers. They've repeatedly told me that there are far too many polygons in the models Photoshop generates (mine average between 20,000 and 30,000), and that even 3ds Max's "optimize" tool can't take them down to a really manageable number (generally they only get them down to around 10,000).
Is there a way to reduce the number of polygons in models generated this way in Photoshop? Even the super simple ones that I extrude (with no curves at all) end up containing around 8,000 or 9,000. ANY sort of optimization tips, tricks, or tools would help a lot!
Thanks in advance for any advice

"Don't use Photoshop"? Really? I may be new to the world of 3D, but I'm not new to professionalism. I understand when I'm being talked down to, and if you have to make yourself feel better by putting us newbies in our places well...have fun with that I guess.
Does anyone else have any tips? Someone elsewhere suggested starting with a file in a really low resolution, and that seemed to cut around 1,000 polygons off! Someone else suggested stroking the vectors before extruding, but that didn't seem to do much.
I'd love any other brainstorms that the community might have!
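One route worth trying, if your pipeline allows a round trip through a free tool, is decimating the exported mesh before handing it to the modelers. Below is a minimal sketch using Blender's Python API (bpy); it assumes the extruded wall has been exported from Photoshop (e.g. as OBJ or Collada) and imported into Blender as the active object - the 0.2 ratio is a placeholder to tune, not a value from this thread.

    # Sketch only: decimating an imported wall mesh with Blender's Python API (bpy).
    # Assumes the extruded model was exported from Photoshop and is the active object.
    import bpy

    obj = bpy.context.active_object
    if obj is None or obj.type != 'MESH':
        raise RuntimeError("Select the imported wall mesh first")

    print("Faces before:", len(obj.data.polygons))

    # Add a Decimate modifier and keep roughly 20% of the faces.
    dec = obj.modifiers.new(name="Decimate", type='DECIMATE')
    dec.decimate_type = 'COLLAPSE'   # 'DISSOLVE' (planar) can be cleaner for flat walls
    dec.ratio = 0.2                  # tune per model; lower = fewer polygons

    # Apply the modifier so the reduced mesh is what gets re-exported.
    bpy.ops.object.modifier_apply(modifier=dec.name)

    print("Faces after:", len(obj.data.polygons))

For flat, hard-edged pieces like decorative walls, the planar ('DISSOLVE') mode with a small angle limit usually merges coplanar faces without visibly changing the silhouette.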

Similar Messages

  • A realistic assessment of your experiences of hardware needed for the type of editing I do please.

    Introduction:
    I apologise for the length of this post, but from experience of reading here I'm working on the principle that the more I explain about myself now, the less anyone willing to help me will have to ask later.
    I have lurked around this forum on and off for a few years and read the various threads in the FAQ section, particularly PPBM5 and the "What PC to build" thread and other related topics about what system to build.  I have found them very useful and have in particular enjoyed reading about Harm Millaard's experiences in First Ideas for a new system.  For about 12 months I've been delaying upgrading my PC, but in Mr Millaard's latest updates on his PPBM6 site he talks about new systems and provides a link to Intel's timeline, which suggests they are in no rush to replace the i7-39xx series CPU - which, I believe, has amongst other things 2 cores disabled.  Normally, bitter experience has taught me not to rush out and buy the latest technology but to let others "test" it first and then benefit from reduced prices as that model is replaced.  However, it now seems that last year's technology is going to remain this year's technology, probably through at least the first 2 quarters of next year, and if anything the price of the i7-39xx series is at best staying at its launch price or even rising.  So it's time for me to take the plunge and upgrade.
    My current hardware for editing:
    I started with Premiere 6.5 after I bought it as part of a bundle with a Matrox RTX 10 card - one of the most temperamental pieces of hardware I've had the misfortune to work with.  I later upgraded to Premiere Pro 1.5 and edited with that using a Pentium 4 2.6 (overclocked to 3.2), 3 hard drives (no RAID) and 4GB of memory.  The video footage used was AVI recorded with a Canon MVX 30i and a Panasonic NVGS27, and I've now added a Casio Exilim EX-FC100 (MPEG format) and a Panasonic HDC S90 (AVCHD).
    My PC coped with the editing I did with avi footage but couldn't handle AVCHD format and this convinced me to upgrade to Premiere Pro CS5.5.  At the same time I switched to editing on a Dell XPS M1530 (Centrino duo chip) - I upped the memory to 4GB, put Windows 7 64 bit home edition on and replaced the existing hard drive with a faster one.  In addition I use a SATA Quickport duo attached to my laptop via an eSATA card.  However, either the Quickport, eSATA card or XPS is extremely temperamental - I never see two external hard drives, 50% of the time see 1 external drive or none at all - when that happens I edit around it doing things I can with just the one internal drive - but this problem is not my question.
    The type of editing I do:
    I know people usually say around here not to try editing on laptops and believe me, I understand why, but using this setup I have been able to edit lots of videos  - see here for examples of the type of editing I currently do:
    http://www.youtube.com/user/PathfinderPro
    The equipment test videos place the biggest strain on the hardware when editing.  To do this editing I have to convert my AVCHD footage into its YouTube format before editing, and even after I've done that it can be tediously slow to edit and play back, even with Premiere set to play at 1/4 quality.  Converting the AVCHD footage to the YouTube format I edit in has to be done over many nights.
    Now, I am not a professional.  I typically edit with up to 4 tracks of video plus additional tracks for titles, and my target audience is YouTube - which is why I can get away without editing in my preferred option of native AVCHD format.  However, I'm tired of all the waiting, stuttering, and the many days and hours of converting videos into a format I can use, so I'm looking to upgrade.  My problem is that I'm uncertain what path to take.  The PPBM results are dominated by overclocked chips, and whilst the motherboard make and model is listed, the hard disks, graphics card makes and models, and memory modules are not.  This is not a criticism of the PPBM tables (a big thank you to Bill Gehrke & Harm Millaard for taking the time and effort to pull this much information together), but I am not interested in being in the top 1000 in the world or in overclocking like mad, and having had horror experiences with Matrox products and compatibility and stability issues with other hardware, I'm more interested in compatibility and practicality than speed when deciding what to build.  I've also read the threads about Marvell controllers, dual and quad channel memory support, the pros and cons of SSDs versus standard drives, RAID setups, the heat problems when overclocking the newer Ivy Bridge chips, and general build advice, so I'm not coming here without having done some reading first.
    The type of system I'm thinking of:
    So far, based on what I've read here, I've come to the following conclusions - but I'm open to suggestions:
    - Chip - regrettably, due to the cost and the lack of a likely successor anytime soon, a 39xx (with an appropriate cooler), because I want to edit in native AVCHD, which seems to require the warrior-type chip rather than the "economical" build regardless of what my target audience is, and this suggests:
    - X79 motherboard (which must have an old PCI slot, such as the Asus Sabertooth, and which has room for the cooler I'm considering), as I will be carrying over my old Terratec DMX 6Fire 24/96 soundcard - all my videos have their audio mastered in Audition using this card; the best piece of advice I ever read was that the audience will watch a bad video with good sound editing but not the other way round
    - 4 hard drives plus an additional hard drive for the operating system, using onboard RAID controllers (not sure whether the operating system drive will be a WD Caviar Black or an SSD, and I can't justify the cost of an external RAID controller for either my type of use or the number of hard drives being used)
    - Video card - I can now buy a GTX 580 for less than the 670, so I'm not sure on the card, especially given Harm Millaard's observation that memory bandwidth seems to be as important as CUDA cores
    - Case - I have an Akasa 62 case with room for 5 hard drives; I won't be exceeding that, and if I overclock it will only be by a little, so is it really necessary to replace it with a tower case?  Then again, I would prefer a case with a front eSATA connection, so I may have to change the case regardless
    - Maximum memory 32GB - so is it necessary to upgrade to Windows 7 Professional?
    - Power supply - I'll work that out once I've decided on my components.
    Help please:
    For me, the video source dictates the software chosen, and the hardware and audience (YouTube) dictate the format edited in.  As I don't intend to change my camcorders' formats (AVCHD or MPEG) in the next couple of years, and I'm not interested in having the "fastest" system around, what I'm really interested in learning is:
    what system setups people use now for doing similar editing to me
    what make/models of the component parts in your system work well together
    and if you do have a bottle neck in terms of hardware, where is it and what hardware would you change to  (not a dream model change, just a practical and realistic one)
    I have deliberately not given a budget for the changes I'm intending, because budget should not be the deciding factor in determining what I "need" to upgrade to for the "type of editing I do" - especially bearing in mind I've got by so far (admittedly at a tortoise pace) with what is by today's standards a standard-spec laptop.  Basically, I don't want a Rolls Royce to go shopping at Walmart, but I'm tired of walking there and carrying everything back by hand!
    Thank you very much for any help / experiences people can share.

    Thank you both for your prompt and helpful replies.
    Mr Millaard, regarding your excellent article Planning and Building an NLE system: I have read it a couple of times now, and it was your article that finally convinced me the time to upgrade was now.  Within it you said, for good reason, "Initial choice of CPU: i7-39xx with the intention to overclock to 4.6 - 4.8 GHz", hence my uncertainty about which CPU to use.  I have seen a video you posted here - I think it was based on your cats (which, incidentally, I enjoyed) - so working from the editing done there (though I don't remember whether you mentioned what video format you used), and with others having mentioned many pros for the i7-39xx, I was leaning towards that.  I'm at least financially relieved if the i7-3770 will do, although now, with the possible recommendation by JEShort01 (sorry, not sure of the forum etiquette for the use of names) of an overclocked 2600K, I'm somewhat back to wondering which is more suitable, especially with the successor to the i7-3770 being nearer than that of the i7-39xx.  This still makes me lean towards the i7-39xx.
    Regarding the editing: the match play you can see on the channel is indeed basic 1-camera edits, with multiple titles used to provide the scoreboard.  However, the coaching videos use multiple cameras - sometimes 3 to 4 (another reason for upgrading to CS5.5, for the multi-cam editing support) - and the equipment-testing videos can use 3 or 4 tracks layered on top of each other, each track having opacity settings and multiple motion effects and titles, with occasional keying video effects added.  For example, this video at approx. 2 mins 50 and 5 mins 10 seconds:
    http://www.youtube.com/watch?v=T1E5T7xo57c&list=PL577F7AB5E31FC5E9&index=13&feature=plpp_video
    Monitor-wise, I use a dual-monitor setup: my laptop screen plus an LG M2394D for widescreen, and I sometimes use an old Neovo F-419 for 4:3 editing.  I won't be using more than 2 monitors.  If the 580 drops a bit more I'll probably go for that - although I'll have to make sure its size isn't an issue for the motherboard and case combination.  Interestingly, there is a thread on the forum home page which discusses the 570 vs the 660 Ti, and the opinion was to go with the 660 Ti, which surprised me a bit.
    Windows 7 Professional it is then - I should have known that too.  Apologies for asking a question that had already been asked.
    "Accepted, your correct criticism of the lacking hardware info on the PPBM5 website. That is the overriding reason that for the new site http://ppbm7.com/ we want to use Piriform Speccy .xml results to gather more, more accurate and more detailed hardware info."
    No criticism intended, Mr Millaard - more an observation, and I really look forward to that evolution with PPBM7.  I'm assuming the .xml results will use pre-populated drop-down lists people can select their hardware from - that way you can control and ensure consistent entries - the downside being the work required by you to populate the lists in the first place and maintain them.
    Thanks again for your help, but I'm still a bit unsure about the CPU and video card.

  • IPhone 4 broken can I trade in?

    I have unlimited data through my provider I don't want to lose, but my iPhone 4 isn't working properly!

    As KiltedTim has told you, Apple does not have a trade-in program.
    You may be eligible for a reduced price replacement (same model,
    same color, same GB) under the Out of Warranty program - take
    your iPhone to the nearest Apple store to check.
    What is not working properly? There may be a remedy if you provide
    more details.

  • Material and vendor transfer to GTS sytem

    I am using GTS 8.0. We have a very peculiar issue: newly created vendors and materials do not get transferred to GTS. Please provide the step-by-step configuration settings to transfer material and vendor masters to GTS using RBDMIDOC, e.g. change pointer activation, message type, reduced message type, distribution model, IDoc partner profile.

    In Transaction BD60, make sure that the Function Module /SAPSLL/CREMAS_DISTRIBUTE_R3 is assigned for Message Type /SAPSLL/CREMAS_SLL.  And in Transaction BD52 for that Message Type, make sure that fieldname 'KEY' is assigned for table LFA1, object KRED.
    If those settings are already correct, then try to determine whether the Change Pointer record is not being created, or whether the CP record is not being sent to GTS.  To do that, for a new vendor not received in GTS, check for corresponding entries in tables BDCP / BDCPV.
    And of course, check for log entries in GTS (System Monitoring > Transfer Logs > Business Partners), and also check in SM58 for any RFC failures.
    Hope that helps.
    Regards,
    Dave

  • Why are there no more upgrades for the MacBook Pro 17in?

    Why are there no more upgrades for the MacBook Pro 17in?

    The story I heard was that there wasn't enough volume of sales for Apple to expend the resources to develop the line further.  When the 17" MBP line only accounted for 2-3% of the sales, it wasn't enough to offset the expense to engineer it, stock it, support it, etc.  What they chose to do was to offer the retina display 15" MBP that supports the same resolution (and possibly higher) and at the same time reduce the number of models they have to support.

  • I have request in the report level but the same is missing in the infocube

    Dear Experts,
    I have a request at the report level but the same is missing at the compressed InfoCube level. What could be the cause? Does compressing the InfoCube delete the request? If so, how am I still able to view other requests at the InfoCube manage level?
    Kindly provide enough information.
    Thanks.

    Hi
    Compressing InfoCubes
    Use
    When you load data into the InfoCube, entire requests can be inserted at the same time. Each of these requests has its own request ID, which is included in the fact table in the packet dimension. This makes it possible to pay particular attention to individual requests. One advantage of the request ID concept is that you can subsequently delete complete requests from the InfoCube.
    However, the request ID concept can also cause the same data record (all characteristics agree, with the exception of the request ID) to appear more than once in the fact table. This unnecessarily increases the volume of data, and reduces performance in reporting, as the system has to perform aggregation using the request ID every time you execute a query.
    Using compressing, you can eliminate these disadvantages, and bring data from different requests together into one single request (request ID 0).
    This function is critical, as the compressed data can no longer be deleted from the InfoCube using its request ID. You must be absolutely certain that the data loaded into the InfoCube is correct.
    Features
    You can choose request IDs and release them to be compressed. You can schedule the function immediately or in the background, and can schedule it with a process chain.
    Compressing one request takes approx. 2.5 ms per data record.
    With non-cumulative InfoCubes, compression has an additional effect on query performance. Also, the marker for non-cumulatives in non-cumulative InfoCubes is updated. This means that, on the whole, less data is read for a non-cumulative query, and the reply time is therefore reduced. See also Modeling of Non-Cumulatives with Non-Cumulative Key Figures.
    If you run the compression for a non-cumulative InfoCube, the summarization time (including the time to update the markers) will be about 5 ms per data record.
    If you are using an Oracle database as your BW database, you can also carry out a report using the relevant InfoCube in reporting while the compression is running. With other manufacturers’ databases, you will see a warning if you try to execute a query on an InfoCube while the compression is running. In this case you can execute the query once the compression has finished executing.
    If you want to avoid the InfoCube containing entries whose key figures are zero values (in reverse posting for example) you can run a zero-elimination at the same time as the compression. In this case, the entries where all key figures are equal to 0 are deleted from the fact table.
    Zero-elimination is permitted only for InfoCubes, where key figures with the aggregation behavior ‘SUM’ appear exclusively. In particular, you are not permitted to run zero-elimination with non-cumulative values.
    For non-cumulative InfoCubes, you can ensure that the non-cumulative marker is not updated by setting the indicator No Marker Updating. You have to use this option if you are loading historic non-cumulative value changes into an InfoCube after an initialization has already taken place with the current non-cumulative. Otherwise the results produced in the query will not be correct. For performance reasons, you should compress subsequent delta requests.
    Edited by: Allu on Dec 20, 2007 3:26 PM
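    To make the request ID idea above concrete, here is a small illustration in plain Python (not SAP code; the material/month values are made up) of how rows that agree on every characteristic except the request ID collapse into a single request-ID-0 row, with optional zero elimination:

        # Illustration only: compression collapses fact rows that share the same
        # characteristic key but carry different request IDs, summing key figures.
        from collections import defaultdict

        # (request_id, material, month, quantity)
        fact_rows = [
            (101, "MAT_A", "2007-11", 10),
            (102, "MAT_A", "2007-11", 5),   # same key, different request -> merged
            (102, "MAT_B", "2007-11", 7),
            (103, "MAT_B", "2007-11", -7),  # reverse posting -> nets to zero
        ]

        compressed = defaultdict(int)
        for req_id, material, month, qty in fact_rows:
            compressed[(0, material, month)] += qty   # request ID becomes 0

        # Optional zero elimination: drop rows whose key figures are all zero.
        compressed = {k: v for k, v in compressed.items() if v != 0}

        print(compressed)   # {(0, 'MAT_A', '2007-11'): 15}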

  • Output_link and Page Navigation

    Is it possible to use output_link with the JSF Page Navigation Model? At first sight, the answer is no. The value attribute is mandatory and is rendered into the href attribute. So I have to point to the next page directly, bypassing the Page Navigation definition in faces-config.xml. That is not good, because it breaks the concept of the JSF navigation mechanism. In other words, the page flow created from the faces-config.xml navigation rules will potentially be incomplete. In turn, that reduces the value of modeling (in GUI tools) drastically.
    If I am right, JSF is missing a substitute for <h:command_link action="foo" immediate="false"> that works outside a form, i.e. something that allows Page Navigation to be involved instead of avoided (something like <h:output_link action="foo">).

    I guess redefining where command_link may be nested would be a critical change in the current JSF RI implementation. It would probably be easier to do the following:
    1. Add action attribute to the output_link
    2. Make both value and action attributes optional
    3. In case developer defines only value, the tag works as it works now (href=value)
    4. In case developer defines only action, href = current_page_url + '?actionId='+action
    in this way, the link works similarly to command_link action="foo" immediate="true"
    5. If developer defines both attributes then href=value + '?actionId=' + action
    In this case, the Navigation Model finds the defined page by URL and then makes a transition according to the defined action. BTW, such a feature would make it possible to organize common starting points for website menus, which would be clearly presented in the application's faces-config.
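    To restate the proposed rules 3-5 above in executable form, here is a small sketch in Python of the suggested href-building behaviour; the attribute names and the actionId parameter are the ones proposed in this post, not an existing JSF API:

        # Hypothetical href-building rules for an output_link with an optional
        # 'action' attribute, as proposed above (not JSF RI code).
        from typing import Optional

        def build_href(current_page_url: str,
                       value: Optional[str] = None,
                       action: Optional[str] = None) -> str:
            if value is not None and action is None:
                # Rule 3: value only -> behaves exactly like today's output_link.
                return value
            if value is None and action is not None:
                # Rule 4: action only -> stay on the current page and pass the action,
                # similar to command_link action="foo" immediate="true".
                return f"{current_page_url}?actionId={action}"
            if value is not None and action is not None:
                # Rule 5: both -> go to the given URL, then apply the action there.
                return f"{value}?actionId={action}"
            raise ValueError("Either 'value' or 'action' must be given")

        print(build_href("/menu.jsf", action="foo"))        # /menu.jsf?actionId=foo
        print(build_href("/menu.jsf", value="/other.jsf"))  # /other.jsf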

  • Problem on cube while reporting

    hello SDNs,
    I want to know: when we report on a cube, where does the data come from - the E fact table or the F fact table?
    And if I compress a cube, what happens?
    Where does the data come from then, the E or the F fact table?
    If two requests have been compressed and the third request is now in the F table and I want to report on this, which requests will be included in the reporting?
    thanks in advance
    sathish

    Hi,
    Compressing InfoCubes
    Before compression, reports read the data from the F table.
    After compression, they read the data from the E table, as the data has been moved from the F table to the E table.
    After compression, when a query fetches uncompressed data as well (data from both E and F), the query hits both tables.
    Use
    When you load data into the InfoCube, entire requests can be inserted at the same time. Each of these requests has its own request ID, which is included in the fact table in the packet dimension. This makes it possible to pay particular attention to individual requests. One advantage of the request ID concept is that you can subsequently delete complete requests from the InfoCube.
    However, the request ID concept can also cause the same data record (all characteristics agree, with the exception of the request ID) to appear more than once in the fact table. This unnecessarily increases the volume of data, and reduces performance in reporting, as the system has to perform aggregation using the request ID every time you execute a query.
    Using compressing, you can eliminate these disadvantages, and bring data from different requests together into one single request (request ID 0).
    This function is critical, as the compressed data can no longer be deleted from the InfoCube using its request ID. You must be absolutely certain that the data loaded into the InfoCube is correct.
    Features
    You can choose request IDs and release them to be compressed. You can schedule the function immediately or in the background, and can schedule it with a process chain.
    Compressing one request takes approx. 2.5 ms per data record.
    With non-cumulative InfoCubes, compression has an additional effect on query performance. Also, the marker for non-cumulatives in non-cumulative InfoCubes is updated. This means that, on the whole, less data is read for a non-cumulative query, and the reply time is therefore reduced. See also Modeling of Non-Cumulatives with Non-Cumulative Key Figures.
    If you run the compression for a non-cumulative InfoCube, the summarization time (including the time to update the markers) will be about 5 ms per data record.
    If you are using an Oracle database as your BW database, you can also carry out a report using the relevant InfoCube in reporting while the compression is running. With other manufacturers’ databases, you will see a warning if you try to execute a query on an InfoCube while the compression is running. In this case you can execute the query once the compression has finished executing.
    If you want to avoid the InfoCube containing entries whose key figures are zero values (in reverse posting for example) you can run a zero-elimination at the same time as the compression. In this case, the entries where all key figures are equal to 0 are deleted from the fact table.
    Zero-elimination is permitted only for InfoCubes, where key figures with the aggregation behavior ‘SUM’ appear exclusively. In particular, you are not permitted to run zero-elimination with non-cumulative values.
    For non-cumulative InfoCubes, you can ensure that the non-cumulative marker is not updated by setting the indicator No Marker Updating. You have to use this option if you are loading historic non-cumulative value changes into an InfoCube after an initialization has already taken place with the current non-cumulative. Otherwise the results produced in the query will not be correct. For performance reasons, you should compress subsequent delta requests.
    Compression is done to improve performance. When data is loaded into the InfoCube, it is done request-wise. Each request ID is stored in the fact table in the packet dimension. This unnecessarily increases the volume of data and reduces performance in reporting, as the system has to perform aggregation using the request ID every time you execute a query. When you compress the requests of the cube, the data is moved from the F fact table to the E fact table. Using compression, you can eliminate these disadvantages and bring data from different requests together into one single request (request ID 0), i.e. all the data will be stored at the record level and no request will then be available. This also removes the SIDs, so there is one less join when fetching data.
    The compressed data can no longer be deleted from the InfoCube using its request ID. You must be absolutely certain that the data loaded into the InfoCube is correct before compressing.
    Note 407260 - FAQs: Compression of InfoCubes
    Summary
    Symptom
    This note gives some explanation for the compression of InfoCubes with ORACLE as db-platform.
    Compression on other db-platform might differ from this.
    Other terms
    InfoCubes, Compression, Aggregates, F-table, E-table, partitioning,
    ora-4030, ORACLE, Performance, Komprimierung
    Reason and Prerequisites
    Questions:
    1. What is the extent of compression we should expect from the portion we are loading?
    2. When the compression is stopped, will we have lost any data from the cube?
    3. What is the optimum size a chunk of data to be compressed?
    4. Does compression lock the entire fact table? even if only selected records are being compressed?
    5. Should compression run with the indexes on or off?
    6. What can I do if the performance of the compression is bad or becomes bad? Or what can I do if query performance after compression is bad?
    Solution
    In general:
    First of all you should check whether the P-index on the e-facttable exists. If this index is missing compression will be practically impossible. If this index does not exist, you can recreate this index by activating the cube again. Please check the activation log to see whether the creation was successful.
    There is one exception to this rule: if only one request is chosen for compression and this is the first request to be compressed for that cube, then the P-index is dropped and recreated automatically after the compression. This is done for performance reasons.
    Answers:
    1. The compression ratio is completely determined by the data you are loading. Compression does only mean that data-tuples which have the identical 'logical' key in the facttable (logical key includes all the dimension identities with the exception of the 'technical' package dimension) are combined into a single record.
    So for example if you are loading data on a daily basis but your cube does only contain the month as finest time characteristics you might get a compression ratio of 1/30.
    The other extreme; if every record you are loading is different from the records you have loaded before (e.g. your record contains a sequence number), then the compression ratio will be 1, which means that there is no compression at all. Nevertheless even in this case you should compress the data if you are using partitioning on the E-facttable because only for compressed data partitioning is used. Please see css-note 385163 for more details about partitioning.
    If you are absolutely sure, that there are no duplicates in the records you can consider the optimization which is described in the css-note 0375132.
    2. The data should never become inconsistent by running a compression. Even if you stop the process manually, a consistent state should be reached. But whether the requests (or at least some of them) are compressed or the changes are rolled back depends on the phase the compression was in when it was cancelled.
    The compression of a single request can be divided into 2 main phases.
    a) In the first phase the following actions are executed:
    Insert or update every row of the request, that should be compressed into the E-facttable
    Delete the entry for the corresponding request out of the package dimension of the cube
    Change the 'compr-dual'-flag in the table rsmdatastate
    Finally a COMMIT is executed.
    b) In the second phase the remaining data in the F-facttable is deleted.
    This is either done by a 'DROP PARTITION' or by a 'DELETE'. As this data is not accessible in queries (the entry of package dimension is deleted) it does not matter if this phase is terminated.
    Concluding this:
    If the process is terminated while the compression of a request is in phase (a), the data is rolled back, but if the compression is terminated in phase (b) no rollback is executed. The only problem here is, that the f-facttable might contain unusable data. This data can be deleted with the function module RSCDS_DEL_OLD_REQUESTS. For running this function module you only have to enter the name of the infocube. If you want you can also specify the dimension id of the request you want to delete (if you know this ID); if no ID is specified the module deletes all the entries without a corresponding entry in the package-dimension.
    If you are compressing several requests in a single run and the process breaks during the compression of the request x all smaller requests are committed and so only the request x is handled as described above.
    3. The only size limitation for the compression is, that the complete rollback information of the compression of a single request must fit into the rollback-segments. For every record in the request which should be compressed either an update of the corresponding record in the E-facttable is executed or the record is newly inserted. As for the deletion normally a 'DROP PARTITION' is used the deletion is not critical for the rollback. As both operations are not so expensive (in terms of space) this should not be critical.
    Performance is heavily dependent on the hardware. As a rule of thumb you might expect to compress about 2 million rows per hour if the cube does not contain non-cumulative key figures; if it does contain such key figures we would expect about 1 million rows.
    4. It is not allowed to run two compressions concurrently on the same cube. But, for example, loading into a cube on which a compression is running should be possible, as long as you don't try to compress requests which are still in the phase of loading/updating data into the cube.
    5. Compression is forbidden if a selective deletion is running on this cube, and compression is forbidden while an attribute/hierarchy change run is active.
    6. It is very important that either the 'P' or the primary index '0' on the E-facttable exists during the compression.
    Please verify the existence of this index with transaction DB02. Without one of these indexes the compression will not run!!
    If you are running queries parallel to the compression you have to leave the secondary indexes active.
    If you encounter the error ORA-4030 during the compression you should drop the secondary indexes on the e-facttable. This can be achieved by using transaction SE14. If you are using the tabstrip in the administrator workbench, the secondary indexes on the f-facttable will be dropped too. (If there are requests which are smaller than 10 percent of the f-facttable then the indexes on the f-facttable should be active, because then the reading of the requests can be sped up by using the secondary index on the package dimension.) After that you should start the compression again.
    Deleting the secondary indexes on the E facttable of an infocube that should be compressed may be useful (sometimes even necessary) to prevent resource shortages on the database. Since the secondary indexes are needed for reporting (not for compression), queries may take much longer in the time when the secondary E table indexes are not there.
    If you want to delete the secondary indexes only on the E facttable, you should use the function RSDU_INFOCUBE_INDEXES_DROP (and specify the parameters I_INFOCUBE = ). If you want to rebuild the indexes use the function RSDU_INFOCUBE_INDEXES_REPAIR (same parameter as above).
    To check which indexes are there, you may use transaction RSRV and there select the elementary database check for the indexes of an infocube and its aggregates. That check is more informative than the lights on the performance tabstrip in the infocube maintenance.
    7. As already stated above, it is absolutely necessary that a concatenated index over all dimensions exists. This index normally has the suffix 'P'. Without this index a compression is not possible! If that index does not exist, the compression tries to build it. If that fails (for whatever reason) the compression terminates.
    If you normally do not drop the secondary indexes during compression, then these indexes might degenerate after some compression-runs and therefore you should rebuild the indexes from time to time. Otherwise you might see performance degradation over time.
    As the distribution of data of the E-facttable and the F-facttable is changed by a compression, the query performance can be influenced significantly. Normally compression should lead to a better performance but you have to take care, that the statistics are up to date, so that the optimizer can choose an appropriate access path. This means, that after the first compression of a significant amount of data the E-facttable of the cube should be analyzed, because otherwise the optimizer still assumes, that this table is empty. Because of the same reason you should not analyze the F-facttable if all the requests are compressed because then again the optimizer assumes that the F-facttable is empty. Therefore you should analyze the F-facttable when a normal amount of uncompressed requests is in the cube.
    Header Data
    Release Status: Released for Customer
    Released on: 05-17-2005 09:30:44
    Priority: Recommendations/additional info
    Category: Consulting
    Primary Component: BW-BEX-OT-DBIF-CON Condensor
    Secondary Components: BW-SYS-DB-ORA BW ORACLE
    https://forums.sdn.sap.com/click.jspa?searchID=7281332&messageID=3423284
    https://forums.sdn.sap.com/click.jspa?searchID=7281332&messageID=3214444
    http://help.sap.com/saphelp_nw04/helpdata/en/80/1a6466e07211d2acb80000e829fbfe/frameset.htm
    Thanks,
    JituK
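    As a rough, made-up illustration of the 1/30 ratio mentioned in answer 1 of the note (daily loads into a cube whose finest time characteristic is the calendar month), the grouping logic looks like this:

        # Illustration only: daily loads collapsing to one monthly row per logical key.
        from collections import defaultdict

        daily_rows = [(f"2007-11-{day:02d}", "MAT_A", 1.0) for day in range(1, 31)]  # 30 loads

        monthly = defaultdict(float)
        for date, material, amount in daily_rows:
            month = date[:7]                      # finest time characteristic in the cube
            monthly[(month, material)] += amount  # rows with the same logical key merge

        ratio = len(monthly) / len(daily_rows)
        print(f"{len(daily_rows)} rows -> {len(monthly)} row(s), ratio ~ {ratio:.3f}")  # ~1/30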

  • Some questions about 1616, 100 and 101

    Hello, I am looking for a phone, and I've narrowed it down to these 3 models, since I principally want a phone where:
    *The battery lasts a long time (1 week would be OK)
    *There is no Internet access
    OK, but there are some questions - very specific questions, in fact - that I haven't resolved, and I was wondering if you could help me!
    *I've read that the Nokia 1616 has a voice memo recorder, so you could set a recording as the ringtone.
    Although some people say that's not possible... is it actually possible?
    * Can the Nokia 101 record a call? I guess the others can't because they have no microSD.
    *And last, is it possible on any of these to set up automatic profile changes based on time?
    I mean, for example: default tone -> General, but between 11 AM and 4 PM switch to Silent.
    I think those are 3 really useful "basic" features, but ironically few or no "basic phones" incorporate them...
    Also, if you can think of another phone model that would better meet my expectations (in the same price range), that is fine too.
    Thank you very much, and great forum!

    Some info? XD

  • Compression without partition.

    Hi,
    Would it be useful to compress an InfoCube even if there is no fiscal partition on the cube?
    Thanks.

    Hi,
    Compressing InfoCubes
    Use
    When you load data into the InfoCube, entire requests can be inserted at the same time. Each of these requests has its own request ID, which is included in the fact table in the packet dimension. This makes it possible to pay particular attention to individual requests. One advantage of the request ID concept is that you can subsequently delete complete requests from the InfoCube.
    However, the request ID concept can also cause the same data record (all characteristics agree, with the exception of the request ID) to appear more than once in the fact table. This unnecessarily increases the volume of data, and reduces performance in reporting, as the system has to perform aggregation using the request ID every time you execute a query.
    Using compressing, you can eliminate these disadvantages, and bring data from different requests together into one single request (request ID 0).
    This function is critical, as the compressed data can no longer be deleted from the InfoCube using its request ID. You must be absolutely certain that the data loaded into the InfoCube is correct.
    Features
    You can choose request IDs and release them to be compressed. You can schedule the function immediately or in the background, and can schedule it with a process chain.
    Compressing one request takes approx. 2.5 ms per data record.
    With non-cumulative InfoCubes, compression has an additional effect on query performance. Also, the marker for non-cumulatives in non-cumulative InfoCubes is updated. This means that, on the whole, less data is read for a non-cumulative query, and the reply time is therefore reduced. See also Modeling of Non-Cumulatives with Non-Cumulative Key Figures.
    If you run the compression for a non-cumulative InfoCube, the summarization time (including the time to update the markers) will be about 5 ms per data record.
    If you are using an Oracle database as your BW database, you can also carry out a report using the relevant InfoCube in reporting while the compression is running. With other manufacturers’ databases, you will see a warning if you try to execute a query on an InfoCube while the compression is running. In this case you can execute the query once the compression has finished executing.
    If you want to avoid the InfoCube containing entries whose key figures are zero values (in reverse posting for example) you can run a zero-elimination at the same time as the compression. In this case, the entries where all key figures are equal to 0 are deleted from the fact table.
    Zero-elimination is permitted only for InfoCubes, where key figures with the aggregation behavior ‘SUM’ appear exclusively. In particular, you are not permitted to run zero-elimination with non-cumulative values.
    For non-cumulative InfoCubes, you can ensure that the non-cumulative marker is not updated by setting the indicator No Marker Updating. You have to use this option if you are loading historic non-cumulative value changes into an InfoCube after an initialization has already taken place with the current non-cumulative. Otherwise the results produced in the query will not be correct. For performance reasons, you should compress subsequent delta requests.
    If you compress the cube, all the duplicate records will be summarized.
    Otherwise they will be summarized at query runtime, affecting query performance.
    Compression is done to improve performance. When data is loaded into the InfoCube, it is done request-wise. Each request ID is stored in the fact table in the packet dimension. This unnecessarily increases the volume of data and reduces performance in reporting, as the system has to perform aggregation using the request ID every time you execute a query. When you compress the requests of the cube, the data is moved from the F fact table to the E fact table. Using compression, you can eliminate these disadvantages and bring data from different requests together into one single request (request ID 0), i.e. all the data will be stored at the record level and no request will then be available. This also removes the SIDs, so there is one less join when fetching data.
    The compressed data can no longer be deleted from the InfoCube using its request ID. You must be absolutely certain that the data loaded into the InfoCube is correct before compressing.
    Note 407260 - FAQs: Compression of InfoCubes
    Summary
    Symptom
    This note gives some explanation for the compression of InfoCubes with ORACLE as db-platform.
    Compression on other db-platform might differ from this.
    Other terms
    InfoCubes, Compression, Aggregates, F-table, E-table, partitioning,
    ora-4030, ORACLE, Performance, Komprimierung
    Reason and Prerequisites
    Questions:
    1. What is the extent of compression we should expect from the portion we are loading?
    2. When the compression is stopped, will we have lost any data from the cube?
    3. What is the optimum size a chunk of data to be compressed?
    4. Does compression lock the entire fact table? even if only selected records are being compressed?
    5. Should compression run with the indexes on or off?
    6. What can I do if the performance of the compression is bad or becomes bad? Or what can I do if query performance after compression is bad?
    Solution
    In general:
    First of all you should check whether the P-index on the e-facttable exists. If this index is missing compression will be practically impossible. If this index does not exist, you can recreate this index by activating the cube again. Please check the activation log to see whether the creation was successful.
    There is one exception to this rule: if only one request is chosen for compression and this is the first request to be compressed for that cube, then the P-index is dropped and recreated automatically after the compression. This is done for performance reasons.
    Answers:
    1. The compression ratio is completely determined by the data you are loading. Compression does only mean that data-tuples which have the identical 'logical' key in the facttable (logical key includes all the dimension identities with the exception of the 'technical' package dimension) are combined into a single record.
    So for example if you are loading data on a daily basis but your cube does only contain the month as finest time characteristics you might get a compression ratio of 1/30.
    The other extreme; if every record you are loading is different from the records you have loaded before (e.g. your record contains a sequence number), then the compression ratio will be 1, which means that there is no compression at all. Nevertheless even in this case you should compress the data if you are using partitioning on the E-facttable because only for compressed data partitioning is used. Please see css-note 385163 for more details about partitioning.
    If you are absolutely sure, that there are no duplicates in the records you can consider the optimization which is described in the css-note 0375132.
    2. The data should never become inconsistent by running a compression. Even if you stop the process manually, a consistent state should be reached. But whether the requests (or at least some of them) are compressed or the changes are rolled back depends on the phase the compression was in when it was cancelled.
    The compression of a single request can be divided into 2 main phases.
    a) In the first phase the following actions are executed:
    Insert or update every row of the request, that should be compressed into the E-facttable
    Delete the entry for the corresponding request out of the package dimension of the cube
    Change the 'compr-dual'-flag in the table rsmdatastate
    Finally a COMMIT is executed.
    b) In the second phase the remaining data in the F-facttable is deleted.
    This is either done by a 'DROP PARTITION' or by a 'DELETE'. As this data is not accessible in queries (the entry of package dimension is deleted) it does not matter if this phase is terminated.
    Concluding this:
    If the process is terminated while the compression of a request is in phase (a), the data is rolled back, but if the compression is terminated in phase (b) no rollback is executed. The only problem here is, that the f-facttable might contain unusable data. This data can be deleted with the function module RSCDS_DEL_OLD_REQUESTS. For running this function module you only have to enter the name of the infocube. If you want you can also specify the dimension id of the request you want to delete (if you know this ID); if no ID is specified the module deletes all the entries without a corresponding entry in the package-dimension.
    If you are compressing several requests in a single run and the process breaks during the compression of the request x all smaller requests are committed and so only the request x is handled as described above.
    3. The only size limitation for the compression is, that the complete rollback information of the compression of a single request must fit into the rollback-segments. For every record in the request which should be compressed either an update of the corresponding record in the E-facttable is executed or the record is newly inserted. As for the deletion normally a 'DROP PARTITION' is used the deletion is not critical for the rollback. As both operations are not so expensive (in terms of space) this should not be critical.
    Performance is heavily dependent on the hardware. As a rule of thumb you might expect to compress about 2 million rows per hour if the cube does not contain non-cumulative key figures; if it does contain such key figures we would expect about 1 million rows.
    4. It is not allowed to run two compressions concurrently on the same cube. But, for example, loading into a cube on which a compression is running should be possible, as long as you don't try to compress requests which are still in the phase of loading/updating data into the cube.
    5. Compression is forbidden if a selective deletion is running on this cube, and compression is forbidden while an attribute/hierarchy change run is active.
    6. It is very important that either the 'P' or the primary index '0' on the E-facttable exists during the compression.
    Please verify the existence of this index with transaction DB02. Without one of these indexes the compression will not run!!
    If you are running queries parallel to the compression you have to leave the secondary indexes active.
    If you encounter the error ORA-4030 during the compression you should drop the secondary indexes on the e-facttable. This can be achieved by using transaction SE14. If you are using the tabstrip in the administrator workbench, the secondary indexes on the f-facttable will be dropped too. (If there are requests which are smaller than 10 percent of the f-facttable then the indexes on the f-facttable should be active, because then the reading of the requests can be sped up by using the secondary index on the package dimension.) After that you should start the compression again.
    Deleting the secondary indexes on the E facttable of an infocube that should be compressed may be useful (sometimes even necessary) to prevent resource shortages on the database. Since the secondary indexes are needed for reporting (not for compression), queries may take much longer in the time when the secondary E table indexes are not there.
    If you want to delete the secondary indexes only on the E facttable, you should use the function RSDU_INFOCUBE_INDEXES_DROP (and specify the parameters I_INFOCUBE = ). If you want to rebuild the indexes use the function RSDU_INFOCUBE_INDEXES_REPAIR (same parameter as above).
    To check which indexes are there, you may use transaction RSRV and there select the elementary database check for the indexes of an infocube and its aggregates. That check is more informative than the lights on the performance tabstrip in the infocube maintenance.
    7. As already stated above, it is absolutely necessary that a concatenated index over all dimensions exists. This index normally has the suffix 'P'. Without this index a compression is not possible! If that index does not exist, the compression tries to build it. If that fails (for whatever reason) the compression terminates.
    If you normally do not drop the secondary indexes during compression, then these indexes might degenerate after some compression-runs and therefore you should rebuild the indexes from time to time. Otherwise you might see performance degradation over time.
    As the distribution of data of the E-facttable and the F-facttable is changed by a compression, the query performance can be influenced significantly. Normally compression should lead to a better performance but you have to take care, that the statistics are up to date, so that the optimizer can choose an appropriate access path. This means, that after the first compression of a significant amount of data the E-facttable of the cube should be analyzed, because otherwise the optimizer still assumes, that this table is empty. Because of the same reason you should not analyze the F-facttable if all the requests are compressed because then again the optimizer assumes that the F-facttable is empty. Therefore you should analyze the F-facttable when a normal amount of uncompressed requests is in the cube.
    Header Data
    Release Status: Released for Customer
    Released on: 05-17-2005 09:30:44
    Priority: Recommendations/additional info
    Category: Consulting
    Primary Component: BW-BEX-OT-DBIF-CON Condensor
    Secondary Components: BW-SYS-DB-ORA BW ORACLE
    http://help.sap.com/saphelp_nw04/helpdata/en/ca/aa6437e7a4080ee10000009b38f842/frameset.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/80/1a6466e07211d2acb80000e829fbfe/frameset.htm
    Hope this helps.
    Thanks,
    JituK
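    As a back-of-the-envelope check, the per-record timings quoted earlier (approx. 2.5 ms per record, or 5 ms with non-cumulative key figures) line up with the rows-per-hour rule of thumb given in answer 3 of the note:

        # Rough throughput from the per-record compression timings quoted above.
        def rows_per_hour(ms_per_record: float) -> int:
            return int(3600 * 1000 / ms_per_record)

        print(rows_per_hour(2.5))  # 1,440,000 -> same ballpark as "about 2 million rows per hour"
        print(rows_per_hour(5.0))  #   720,000 -> same ballpark as "about 1 million rows"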

  • How do I know if my MacBook Pro is refurbished?

    I got a MacBook Pro for Christmas (which would make it a late 2011 model), but it has the specs of an early 2011 model. How do I know if I got scammed, or if the person who gave it to me bought it refurbished?

    My MacBook Pro already came with Lion, and according to the serial number, it is the early 2011 model. So I guess I can confirm that it is refurbished.
    That doesn't necessarily mean it was refurbished.  There is often remaining stock at retail stores of previous models (often at reduced prices when new models are released).  As has been mentioned, if it came in a box like the kind you would see in a store (picture of the MacBook on the box, etc), it is a new build.  If it came in a plain white box, then it would be refurbished.  Apple sells refurbished machines at good savings with full warranties.  Most people who get refurbished products from Apple are very happy with them.  About the only way you'll be able to tell, however, is with the packaging.

  • Loss of parts when "reducing model" in Acrobat 3D-Toolkit

    My problem:
    If I want to reduce my file size, it is possible to reduce the model via the button of the same name. But when I finish the procedure and click the OK button, the part of my model which I wanted to reduce is deleted.
    So what can I do?

    Just another experiment: I installed Reader 8 on another machine and tried to print the 3D files. Again, the 3D area prints all black. All files have been saved with ECMA 1st edition so they will be compatible with Reader 7. Using ECMA 3rd edition does not allow the 3D data to be displayed in Reader 7, and since large clients tend to be a little slow on updating, it is vital that all 3D files be as backward compatible as possible. I'm going to try saving some files with ECMA 3rd edition and see how printing works with Reader 8.
    Thanks,
    Bert

  • I have a first-generation iPod touch, model no. MC540ll, and I want to update its iOS to 6.1.5. It requires 2 GB to start; will the available storage be reduced after installation?

    I have a first-generation iPod touch, model no. MC540ll, and I want to update its iOS to 6.1.5. It requires 2 GB to start; will the available storage be reduced after installation?

    A model MC540ll is a 4G iPod and can go to iOS 6.1.5.
    Updating via wifi (Settings > General > Software Update) requires about 2.4 GB of storage available. After the update, all but maybe 50 MB will be returned for use. iOS 6 takes up only a little more than iOS 5, but the excess storage space is needed to perform the update.
    If you update via iTunes you do not need the excess storage space.
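    Translating that into numbers (the 2.4 GB and 50 MB figures are the approximate ones from this reply, not exact Apple requirements), the free-space check is simply:

        # Rough storage arithmetic for the over-the-air update described above.
        required_mb = 2400   # ~2.4 GB must be free to stage the update
        retained_mb = 50     # roughly what the update permanently keeps

        free_before_mb = 2600          # example value for the space currently free
        if free_before_mb < required_mb:
            print("Not enough free space for an over-the-air update; update via iTunes instead")
        else:
            print(f"Free space afterwards: about {free_before_mb - retained_mb} MB")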

  • Adobe 3D Reviewer: No "reduce model" function like in 3D Toolkit?

    I'm using Adobe 3D Reviewer to prepare 3D models. I'm trying to reduce the file size of a gigantic 3D model. But it appears that Adobe 3D Reviewer does not have the "reduce model" function that was available in Acrobat 3D Toolkit to reduce mesh size.
    Any thoughts or tips?
    Thanks!
    K

    You are correct, there are no tools in 3D Reviewer to reduce mesh size except for the compression option available for both PRC and U3D when exporting to PDF.
    To export to PDF from 3D Reviewer and use the compression option, please follow these steps:
    1) open your model in 3D Reviewer
    2) go to File>Export
    3) choose PDF for the file type
    4) click on Options (lower left corner)
    5) click on Options (again)
    6) select PRC Tessellation for the Format
    7) check the box Compress Tessellation
    8) uncheck the other box
    9) click OK
    10) click OK (again)
    11) click Save
    Hope this helps

  • Laptop Screen size is reduced (Model: dv5 HP)

    Suddenly, the screen size of my laptop has been reduced to 3/4 of the horizontal size; 1/3 of the screen is totally blank.
    The resolution in display settings is 1024 x 768. The aspect ratio is full screen (no border). I didn't change anything in Control Panel > Settings. Please advise how to resolve this issue.

    Hi Karpiron,
    What happened when you followed the link to the previous document?
    I would be happy to assist if needed please respond with which Operating System you are running:
    Which Windows Operating System am I running?
    You can do a system restore. When performing a system restore, please note: remove any and all USB devices, and remove memory cards from the card reader slot. Disconnect all non-essential devices. If that does not help, turn on the computer and start pressing F11 repeatedly until the menu opens.
    Please let me know.
    Thanks.
    Please click “Accept as Solution ” if you feel my post solved your issue, it will help others find the solution.
    Click the “Kudos, Thumbs Up" on the bottom to say “Thanks” for helping!

Maybe you are looking for

  • Ipad and mac not syncing with icloud

    I am unable to sync my photos and contacts from my ipad with my Mac through icloud. The ipad and mac both have the latest update.

  • Apache - mod_oc4j in 10.1.3 AS error connections

    We are experiencing a lot of [Thu Apr 17 12:33:23 2008] [warn] [client xxxxxx] oc4j_socket_recvfull timed out [Thu Apr 17 12:33:23 2008] [warn] [client xxxxxx] oc4j_socket_recvfull timed out [Thu Apr 17 12:33:23 2008] [error] [client xxxxxx] [ecid: 1

  • iPad 1 crashes if there are over four PNG files loaded in the same location.

    I added an HTML widget in iBooks Author. It includes four transparent pictures, which are loaded over each other. If there are more than four of them, the iPad 1 just closes the program. Any ideas?

  • Using a timer on mouselistener

    Is javax.swing.Timer compatible with mouselistener as well as action?

  • IPhone car support

    Hi, I've got a BMW X3 3.0d '06 (E83). I'm looking for an all-in-one solution that could accomplish the following goals for my iPhone3G: 1 connect the iPhone as a phone (handsfree); 2 connect the iPhone as an iPod (using BMW controls); 3 recharge the