text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringlengths
9
15
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
Timeline Nov 6, 2009: - 11:57 PM Ticket #242 (Go back to having different scripts version from postgis lib version) reopened by - Paul, Thanks -- I took a quick look at this script and it looks … - 10:31 PM Ticket #24 (Enhancement: containsVertex function) closed by - wontfix: Two year old, no movement, retiring until someone decides to love this … - 5:47 PM Ticket #77 (make check - is there anyway for us to delete postgis_reg db when load ...) closed by - fixed: Database now dropped when postgis function load fails. r4764 - 5:47 PM Changeset [4764] by - Drop database when function loads fail in regression tests. (#77) - 5:31 PM Ticket #282 (Change ~= Operator to be a BBOX Only Test) closed by - fixed: Change committed and regression tests updated to r4763. See #289 for … - 5:31 PM Changeset [4763] by - Make ~= be a bounding box only operator and upgrade ST_Equals() and … - 5:29 PM Ticket #289 (ST_OrderingEquals returning true for mis-ordered multipoints) created by - This query seems wrong on the face: […] Returns true but should … - 4:34 PM Ticket #252 (Text autocasting behavior of geography is breaking geometry functionality) closed by - fixed: I've done this for all the signatures I think might realistically be … - 4:33 PM Changeset [4762] by - Add text wrappers to functions we commonly expect people to call with … - 4:13 PM Ticket #271 (ST_Covers returns wrong result for vertex sharing case) closed by - fixed: Fixed at r4761 - 4:12 PM Changeset [4761] by - Fix for point-on-vertex case of st_covers (#271) - 2:55 PM Changeset [4760] by - Utility to read svn revision numbers from SQL scripts. - 2:49 PM Changeset [4759] by - finalize Xlink support (GML SF-2 fully compliant). Fix typo on … - 2:41 PM Ticket #242 (Go back to having different scripts version from postgis lib version) closed by - wontfix: Okay lets punt this one. It seems my idea of not having to change … - 1:46 PM Ticket #284 (Add geography.sql.in.c as a dependency to postgis.sql) closed by - fixed: Committed at r4758 - 1:45 PM Changeset [4758] by - Make geography.sql part of the standard postgis.sql build. - 12:46 PM Ticket #230 (Put in costs in _ST and possibly other functions) closed by - fixed: Committed to trunk at r4757 - 12:46 PM Changeset [4757] by - Add costs to CPU intensive C functions and update doco to ensure 8.3 … - 10:35 AM Ticket #253 (Gist causes wrong result from ~=) closed by - fixed - 10:35 AM Changeset [4756] by - Restore re-check for PgSQL < 8.4 also on the ~= operator only. - 9:30 AM Changeset [4755] by - Update documentation for those functions affected by RFC3. They are … - 9:19 AM Ticket #288 (ST_AsBinary should return SQL/MM WKB) created by - Moving to SQL/MM, the AsBinary? conversion should now support higher … - 9:18 AM Ticket #287 (ST_AsText should return SQL/MM WKT) created by - Moving to SQL/MM, ST_AsText should no longer ignore the 3rd and 4th … - 9:15 AM Ticket #286 (Change "no SRID" SRID to 0) created by - This breaking change will put is into synch with ISO SQL/MM. - 8:50 AM Ticket #195 (Implement RFC3) closed by - fixed: Sorry, this ticket is closed, I'm happy to work on a fresh one on … - 7:25 AM Changeset [4754] by - A bugfix and removal of stupid shortcut that only caused very … - 2:53 AM Ticket #195 (Implement RFC3) reopened by - Paul -- hmm don't know what to say, except I'm really concerned about … - 2:19 AM Ticket #285 (FAQ When should I use geography over geometry) created by - Someone asked me this and it sounds like a question we will be getting … Nov 5, 2009: - 12:25 PM Ticket #157 (ST_GeometryType output doesn't correctly identify curved geometries) closed by - fixed: Patched in trunk at r4752 and in 1.4 branch at r4753 - 12:25 PM Changeset [4753] by - Fix for #157, ST_GeometryType output doesn't correctly identify curved … - 12:20 PM Changeset [4752] by - Fix for #157, ST_GeometryType output doesn't correctly identify curved … - 11:59 AM Ticket #185 (Enhancement to ST_PointOnSurface to support CIRCULARSTRINGS and MULTICURVES) closed by - wontfix: This seems worthy of closing. We don't know what the behavior is and … - 11:29 AM Ticket #283 (regress_lrs regression test fails) closed by - fixed: Fixed at r4751 - 11:29 AM Changeset [4751] by - Fix for new LRS regression (#283) - 11:05 AM Ticket #284 (Add geography.sql.in.c as a dependency to postgis.sql) created by - The geography types should be part of the default install. - 11:05 AM Ticket #195 (Implement RFC3) closed by - fixed: Committed to trunk at r4750, watch out, function signature changes! - 11:04 AM Changeset [4750] by - Implement RFC3 (#195) - 3:48 AM Ticket #283 (regress_lrs regression test fails) created by - Running regression tests against PostGIS from current trunk (r4749) … Nov 4, 2009: - 8:55 PM Changeset [4749] by - Some initializations and a null pointer avoidance test (#273) - 4:58 PM Changeset [4748] by - Add ST_Intersection() and ST_Intersects() for geography. - 4:46 PM Ticket #100 (postgis_restore.pl createdb option wrong usage) closed by - fixed: Closed out in 1.4 branch at r4747. People who need special connection … - 4:45 PM Changeset [4747] by - "Fix" for #100. - 4:44 PM Ticket #228 (postgis_restore.pl createdb parameters are used also for createlang -> ...) closed by - fixed: Closed out at r4746. - 4:43 PM Changeset [4746] by - Remove createdb_opt lines from psql and createlang calls. (#228) - 4:37 PM Ticket #113 (ST_Locate_Along_Measure -- just returns null for unsupported instead ...) closed by - fixed: Fix applied to trunk at r4745. - 4:37 PM Changeset [4745] by - Make non-M attempts to run LRS functions error out instead of return … - 3:51 PM Changeset [4744] by - Fix for #273? Some unitialized variables may have been causing … - 3:14 PM Ticket #264 (CUnit geography suite failures) closed by - fixed: I believe I have fixed this last night. - 3:04 PM Ticket #279 (hausdorff regression test fails) closed by - fixed: Fixed in trunk at r4743. - 3:03 PM Changeset [4743] by - Fix hausdorf crasher (#279) - 2:45 PM Ticket #267 (Use of GUC to determine default (sphere/spheroid) calculation for geography) closed by - wontfix: I think with function signatures we are fine as things are, I'm … - 1:19 PM Changeset [4742] by - revert wrong commit (r4741) on wktparse.lex file - 1:10 PM Changeset [4741] by - Allow a double to not have digit after dot (related to #175). Update … - 12:37 PM Ticket #175 ('SRID=26915;POINT(482020.9789850 4984378.)' fails to parse) closed by - fixed - 12:36 PM Changeset [4740] by - Fix for #175. - 12:35 PM Changeset [4739] by - Fix for #175, numbers with a terminal decimal won't parse. - 12:22 PM Changeset [4738] by - 8.4 fix for recheck issue around ~= (#253) - 12:19 PM Ticket #225 (postgis_uninstall script will destroy data) closed by - wontfix: I don't. I think uninstall means uninstall. - 11:03 AM Ticket #223 (Breaking change ST_Extent returns a box3d_extent object) closed by - worksforme: I'm not seeing this, and I am seeing the appropriate cast in the … - 10:59 AM Changeset [4737] by - amend ST_Length to include use_spheroid proto and amend examples to … - 10:50 AM Ticket #282 (Change ~= Operator to be a BBOX Only Test) created by - Right now the assumed behavior of ~= is an index-assisted exact … - 10:47 AM Changeset [4736] by - Allow ~= operator to recheck, per #253. - 3:57 AM Changeset [4735] by - type correction in ST_BuildArea output. Add additional proto to … - 1:53 AM Changeset [4734] by - Give priority to gml namespace attribute if any. Apply a fix on ring … Nov 3, 2009: - 7:27 PM Changeset [4733] by - Change ST_Area(geog) to defaul to spheroid calculation. - 4:13 PM Changeset [4732] by - Remove unit test failure cases in 32-bit architectures. Now have to … - 2:26 PM Changeset [4731] by - Initial support of Xlink. Add related units tests. Few cleaning - 2:24 PM Changeset [4730] by - Add xpath headers support for libxml2 - 1:24 PM Changeset [4729] by - File headers and property setting. - 1:16 PM Changeset [4728] by - Add in handlers to avoid sheroid area cases we currently cannot handle. - 1:13 PM Changeset [4727] by - Slight change in ST_Area wording. - 7:32 AM Changeset [4726] by - amend distance proto and example -- now we default to spheroid - 5:36 AM Changeset [4725] by - Add namespace support. Add pointProperty and pointRep support. Fix pos … - 2:03 AM Changeset [4724] by - get rid of extra para tag - 1:47 AM Changeset [4723] by - more typo fixing - 1:33 AM Changeset [4722] by - fix typo Nov 2, 2009: - 9:19 PM Changeset [4721] by - Document ST_Buffer for geography and caveats - 6:58 PM Changeset [4720] by - Re-enable other geodetic unit tests and remove Java code block. - 4:36 PM Changeset [4719] by - First cut of ST_Area(geography) on spheroid. Currently not default, … - 9:13 AM Changeset [4718] by - first copy of the branch - 4:05 AM Changeset [4717] by - minor corrections to ST_distance_sphere/spheroid descriptions - 3:36 AM Ticket #280 (in_gml regression test fails) closed by - worksforme: Yes, you are right. This problem is caused by the Hausdorff test. … Nov 1, 2009: - 2:31 PM Changeset [4716] by - amend doc for st_distance_sphere, st_distance_spheroid to reflect … - 9:58 AM Changeset [4715] by - Here comes the new subgeometry handling. It's a little dirty commit … - 9:55 AM Changeset [4714] by - Renaming of function st_closestp_a2b to ST_ClosestPoint, and adding a … Oct 31, 2009: - 4:10 PM Ticket #184 (ST_AsGeoJSON Precision doesn't match WKT) closed by - worksforme: I'm resolving as werks4me, unless I hear a use case that requires a fix. Oct 30, 2009: - 10:05 PM Changeset [4713] by - Make distance_spher(oid) functions a little more type safe. - 9:53 PM Changeset [4712] by - Update distance_sphere and distance_spheroid to back onto new geodetic … - 5:35 PM Ticket #229 (shp2pgsql ability to transform from one spatial ref to another) reopened by - Well ogr2ogr doesn't support geography yet and I still prefer the dbf … - 5:24 PM Ticket #229 (shp2pgsql ability to transform from one spatial ref to another) closed by - wontfix: I think we'll always have more important things to do. For people who … - 5:11 PM Ticket #265 (Geometry to Geography cast enhancement auto transform) closed by - fixed: Restriction to unknown or 4326 committed at r4711. - 5:10 PM Changeset [4711] by - Tighten up geometry->geography case (#265) - 5:01 PM Changeset [4710] by - Add ST_Length() implementation on spheroid and rationalize the … - 4:22 PM UsersWikiPostgreSQLPostGIS edited by - (diff) - 4:22 PM UsersWikiPostgreSQLPostGIS edited by - (diff) - 1:45 PM Changeset [4709] by - Add in spheroid calculations for ST_Distance and ST_DWithin. - 12:35 PM UsersWikiPostgreSQLPostGIS edited by - (diff) - 12:32 PM UsersWikiPostgreSQLPostGIS edited by - (diff) - 12:11 PM UsersWikiPostgreSQLPostGIS edited by - (diff) - 12:08 PM Ticket #281 (Postgis 1.4 is incompatible with PostgreSQL 8.1) closed by - fixed: Yes we dropped that version late in the game. Corrected … - 12:06 PM UsersWikiPostgreSQLPostGIS edited by - (diff) - 12:00 PM Changeset [4708] by - Add link to new compatibility matrix - 11:59 AM Changeset [4707] by - correct postgresql minimum version - 11:25 AM UsersWikiPostgreSQLPostGIS edited by - (diff) - 11:23 AM UsersWikiMain edited by - (diff) - 10:37 AM UsersWikiPostgreSQLPostGIS edited by - (diff) - 10:35 AM UsersWikiPostgreSQLPostGIS edited by - (diff) - 10:35 AM UsersWikiPostgreSQLPostGIS edited by - (diff) - 10:34 AM UsersWikiPostgreSQLPostGIS edited by - (diff) - 10:31 AM UsersWikiPostgreSQLPostGIS edited by - (diff) - 10:25 AM UsersWikiPostgreSQLPostGIS edited by - (diff) - 10:24 AM UsersWikiPostgreSQLPostGIS edited by - (diff) - 10:23 AM UsersWikiPostgreSQLPostGIS edited by - (diff) - 10:23 AM UsersWikiPostgreSQLPostGIS edited by - (diff) - 10:21 AM DevWikiMain edited by - (diff) - 10:17 AM UsersWikiPostgreSQLPostGIS edited by - (diff) - 10:16 AM UsersWikiPostgreSQLPostGIS edited by - (diff) - 10:13 AM UsersWikiPostgreSQLPostGIS created by - - 10:12 AM UsersWikiMain edited by - (diff) - 8:09 AM Ticket #281 (Postgis 1.4 is incompatible with PostgreSQL 8.1) created by - In contradiction with what's written here : … - 5:59 AM Ticket #280 (in_gml regression test fails) created by - Running regression tests against PostGIS from current trunk (r4706) … - 5:47 AM Ticket #279 (hausdorff regression test fails) created by - Running regression tests against PostGIS from current trunk (r4706) … - 3:53 AM Changeset [4706] by - [wktraster] Fixed bug with mismatched width vs height in rt_raster_new … Oct 29, 2009: - 10:18 PM DevWikiGardenTest edited by - (diff) - 9:58 PM DevWikiGardenTest edited by - (diff) - 9:05 PM DevWikiGardenTest edited by - (diff) - 9:03 PM DevWikiGardenTest edited by - (diff) - 1:31 PM Changeset [4705] by - Change dimension to srsDimension (GML 3.1.1) - 1:22 PM Ticket #276 (ST_AsGML - GML not compliant) closed by - fixed: srsDimension fixed in 1.4 branch (r4703) and trunk (r4704) - 1:21 PM Changeset [4704] by - Change attribute dimension into srsDimension (GML 3.1.1), cf #276 - 1:20 PM Changeset [4703] by - Change attribute dimension into srsDimension (GML 3.1.1), cf #276 - 1:08 PM Ticket #276 (ST_AsGML - GML not compliant) reopened by - In fact the attribute name in gml:pos or gmlposList is not dimension … - 12:53 PM Changeset [4702] by - Minor changes for numerical stability. Remove logging. - 12:41 PM Changeset [4701] by - Increase precision of minor axis constant. - 12:24 PM Changeset [4700] by - Spheroid distance calculation between points added. - 11:42 AM Changeset [4699] by - Add mixed GML srs support. Add ability to deal with lat/lon issue in … - 11:40 AM Changeset [4698] by - Expose transform_point, make_project and GetProj4StringSPI. Creation … - 7:21 AM Changeset [4697] by - Update unit test result, related to error message change (r4662 in … - 7:09 AM Ticket #276 (ST_AsGML - GML not compliant) closed by - fixed: Fixed in 1.4 branch as r4695, and in trunk as r4696 - 7:08 AM Changeset [4696] by - Add attribute dimension in gml:pos and gml:posList. Fix … - 7:07 AM Changeset [4695] by - Add attribute dimension in gml:pos and gml:posList. Fix … Oct 28, 2009: - 4:05 PM Changeset [4694] by - Note why the penalty function was changed. - 4:02 PM Changeset [4693] by - Remove overly clever penalty calculation and improve index structure a lot! - 11:38 AM Changeset [4692] by - Fill in actual error condition - 11:20 AM Changeset [4691] by - Fix error in picksplit routine, perhaps will fix balance problem. - 5:13 AM Changeset [4690] by - slight attribution update - 5:05 AM Ticket #278 (Update credits section of documentation) closed by - fixed: Okay I have committed a first draft at r4687, feel free to change it … - 4:56 AM Changeset [4689] by - minor update to release notes (copying content from branch 1.3 not in … - 4:55 AM Changeset [4688] by - correct links to svn and bug tracker. minor change to release notes - 4:47 AM Changeset [4687] by - switch pretty tag back to credits -- already linked in reference.xml - 4:40 AM Changeset [4686] by - update credits to include breakout of PSC and bump up people with … - 4:10 AM Ticket #278 (Update credits section of documentation) created by - They are a bit out of date and don't reflect our new PSC structure. - 4:08 AM Ticket #275 (documentation still points to Google bug-tracker) closed by - fixed: fixed at r4683, r4684, r4685 (for both 1.4 branch and 1.5 trunk) … - 4:06 AM Changeset [4685] by - copy 1.3.6 notes from trunk. Also correct links to svn and postgis … - 4:05 AM Changeset [4684] by - copy release notes text from branch 1.4 which is strangely more up to date. - 3:58 AM Changeset [4683] by - correct links to postgis bug tracker and subversion repository. Also … Oct 27, 2009: - 2:50 PM Ticket #277 (ST_As* crashes on huge number) closed by - fixed: Fix commited in 1.4 as r4681 and in trunk as r4682 Define a … - 2:39 PM Changeset [4682] by - Fix huge number overflow in export functions, cf #277 - 2:37 PM Changeset [4681] by - Fix huge number overflow in export functions, cf #277 - 12:24 PM Ticket #277 (ST_As* crashes on huge number) created by - Following query crash PostgreSQL server […] Impacted: ST_AsGML, … - 12:18 PM Ticket #276 (ST_AsGML - GML not compliant) created by - Following problem are: - Attribute dimension is not present in … - 12:42 AM Ticket #275 (documentation still points to Google bug-tracker) created by - All documentation still points to Google bug-tracker in chapter … Oct 26, 2009: - 5:38 PM UsersWikiExamplesInterpolateWithOffset edited by - (diff) - 5:26 PM UsersWikiGenerateHexagonalGrid edited by - Cleaning up my ancient code. (diff) - 5:21 PM UsersWikiMain edited by - (diff) - 5:20 PM UsersWikiExamplesInterpolateWithOffset created by - New example - 8:08 AM Ticket #274 (Explain Interior, Boundary and Exterior) created by - For completeness, it would be good to give short description of these … Oct 24, 2009: - 9:37 AM Changeset [4680] by - Add multi data coordinates support. Add unit test case data_1 - 9:35 AM Changeset [4679] by - Add ptarray_merge function Oct 23, 2009: - 8:14 PM UsersWikiplpgsqlfunctionsDistance edited by - (diff) - 5:58 PM Ticket #203 (shp2pgsql does not correctly encode control characters) closed by - fixed: Applied to 1.3 branch at r4678. - 5:58 PM Changeset [4678] by - Patch to handle tab and CR better, from dfuhriman. (#203) - 5:36 PM Ticket #270 (estimated_extent on view (after db restore)) closed by - wontfix: Please post questions to the postgis-users list. When/if you locate a … - 4:16 PM Changeset [4677] by - Update personal information. - 9:01 AM Changeset [4676] by - typo in example - 8:51 AM Changeset [4675] by - put in availability note for ST_GeomFromGML, link back from ST_AsGML, … - 6:54 AM Changeset [4674] by - Just some cleanup - 6:40 AM Changeset [4673] by - Adding a new function with suspect name. ST_ClosestP_in_A2B - 6:26 AM Changeset [4672] by - Preliminary documentation for ST_GeomFromGML and logic to support gml … Oct 22, 2009: - 12:06 PM Ticket #273 (ST_GeomFromGML crashes in Windows compiled in MingW) created by - I managed to get the libxml2 to compile on one of my boxes, but this … - 11:17 AM UsersWikiWinCompile edited by - (diff) - 7:08 AM Changeset [4671] by - Use ptarray_isclosed3d to check if 3D rings are closed also on Z. … - 7:06 AM Changeset [4670] by - Add ptarray_isclosed3d function Oct 21, 2009: Oct 20, 2009: - 8:30 AM Changeset [4669] by - fix typo in libxml deactivated notice - 6:07 AM Changeset [4668] by - Add HAVE_LIBXML2 - 5:54 AM Changeset [4667] by - Add initial version of GeomFromGML function, and units tests cases. - 5:51 AM Changeset [4666] by - Add libxml2 support (needed by GeomFromGML) Oct 19, 2009: - 5:53 AM Changeset [4665] by - update to include ST_Length for geography Oct 18, 2009: - 10:05 PM Changeset [4664] by - Add _ST_BestSRID(Geography) utility function to support … - 2:15 PM Changeset [4663] by - Add in support for magic srid numbers that will always be available … Oct 17, 2009: - 9:19 PM Ticket #266 (ST_Length for geography) closed by - fixed: Committed at r4662 - 9:19 PM Changeset [4662] by - ST_Length(geography) per #266 Oct 16, 2009: - 11:42 PM Ticket #272 (ST_LineCrossingDirection should be negatively symmetric) created by - I propose that to minimize on confusion It should hold true that: … - 11:38 PM Ticket #241 (ST_LineCrossingDirection Server Crash (Segfault)) closed by - fixed: I'm closing this ticket out. I think the server memory issue is … - 4:30 PM Changeset [4661] by - Muck with index logging code. - 9:33 AM Changeset [4660] by - Fix the geography <column> && <column> selectivity code. Now the … - 9:31 AM Changeset [4659] by - Commit a first-hack attempt at a script to test the geography join … - 9:23 AM Changeset [4658] by - Change "Mixed Geometry Types" message into a warning rather than an … - 8:26 AM Ticket #271 (ST_Covers returns wrong result for vertex sharing case) created by - […] Should return true but returns false. - 7:14 AM WKTRaster edited by - (diff) - 6:33 AM Changeset [4657] by - revise to test && against table and also put in some floating points … - 6:03 AM Ticket #269 (Geography->Geometry cast should be explicit) closed by - fixed: I tried it I like it. committed at r4656 - 6:01 AM Changeset [4656] by - #269 get rid of geography -> geometry implicit to make it an explicit cast - 2:37 AM Changeset [4655] by - Tell what the default is for -N in help output and README file Oct 15, 2009: - 10:50 AM Changeset [4654] by - Update the TYPMOD_SET_* macros in the same way as for the FLAGS_SET_* … - 10:45 AM Changeset [4653] by - Add (slightly hacked) version of geography selectivity test script to … - 10:44 AM Changeset [4652] by - Fix test_estimation.pl script so it doesn't require oids - no-one uses … - 8:35 AM Changeset [4651] by - Alter the FLAGS_SET_* macros so that they actually update the … - 8:31 AM Ticket #270 (estimated_extent on view (after db restore)) created by - Hi, on my primary install I have made a view to filter rows from a … - 7:48 AM Ticket #268 (Simple intersection query between geography column and POLYGON ...) closed by - fixed: Righto, it was a missing initialisation bug which is why it worked for … - 7:48 AM Changeset [4650] by - Fix for column intersection geography queries sometimes returning … - 7:27 AM Ticket #269 (Geography->Geometry cast should be explicit) reopened by - Ah okay that's what you are complaining about. I'm in full agreement … - 6:58 AM Ticket #269 (Geography->Geometry cast should be explicit) closed by - invalid: Mark, I consider this bug invalid and not a bug. I don't personally … - 6:51 AM Ticket #269 (Geography->Geometry cast should be explicit) created by - Changed subject: We really shouldn't be allowing people to use … - 6:46 AM Ticket #268 (Simple intersection query between geography column and POLYGON ...) created by - The following basic intersection query, in my view, should just work: … - 4:42 AM Ticket #267 (Use of GUC to determine default (sphere/spheroid) calculation for geography) created by - I think we had discussed the use of defining GUCs at least for some … - 4:37 AM Ticket #266 (ST_Length for geography) created by - Paul, Just in case its not in your todo and if its not too much … Oct 14, 2009: - 10:56 AM UsersWikiPostgisOnUbuntu edited by - (diff) - 9:57 AM Changeset [4649] by - Re-enable ANALYZE hook, now that it doesn't crash upon loading Paul's … - 9:22 AM Changeset [4648] by - Don't use the default (integer) version of abs() during floating point … Oct 13, 2009: - 12:50 PM Changeset [4647] by - Much better fix for NaN area problem. - 12:39 PM Changeset [4646] by - HAck fix for NaN areas. - 3:40 AM Changeset [4645] by - [wktraster] Improved version of genraster.py utility Oct 12, 2009: - 6:27 PM Ticket #265 (Geometry to Geography cast enhancement auto transform) created by - I'm finding myself doing a lot of this: SELECT … - 6:06 PM UsersWikiplpgsqlfunctionsDistance edited by - (diff) - 6:06 PM UsersWikiplpgsqlfunctions edited by - (diff) - 6:02 PM UsersWikiplpgsqlfunctionsDistance edited by - (diff) - 7:34 AM Changeset [4644] by - [wktraster] Manual version tick - dumb subversion plays games. - 7:10 AM Changeset [4643] by - [wktraster] Tick revision number reported (temporarily) by … - 7:06 AM Changeset [4642] by - Testing commit from new machine. Tick pgsql version in Makefile. Oct 11, 2009: - 7:53 PM Changeset [4641] by - Test commit. Nothing to see here. - 9:08 AM Ticket #264 (CUnit geography suite failures) created by - I'm seeing more of a pattern between GCC than OS as to how this fails. … Oct 10, 2009: - 7:03 PM Ticket #263 (Casting from geography to geometry Huh?) closed by - fixed: Great sleuthing. Fixed at r4640 - 7:03 PM Changeset [4640] by - Don't copy bboxes from lwgeom to gserialized when working with … - 9:59 AM Changeset [4639] by - update to include ST_Covers geography - 9:54 AM Ticket #263 (Casting from geography to geometry Huh?) created by - I can't figure out what is wrong here but this suggests something is … - 9:43 AM Ticket #262 (Possible bug in Geography ST_Covers) created by - I would expect the answer to my UTM to agree with Geography answer. … - 8:43 AM Changeset [4638] by - update ST_Area with geography examples Oct 9, 2009: - 8:18 PM Ticket #257 (Ability to cast from geography to geometry) closed by - fixed: Committed at r4637 - 8:18 PM Changeset [4637] by - Add geometry(geography) case per #257 - 5:08 PM Changeset [4636] by - Fix ST_Area(geography) calculation to be more... correct. - 12:23 PM Changeset [4635] by - Add implementation for ST_Covers(geography, geography) in … - 11:07 AM Changeset [4634] by - Fix incorrect use of flags macros - 9:51 AM Ticket #260 (Geography inconsistent distance behavior from geometry and geography ...) closed by - fixed: Good god, is there anything you don't test :) fixed at r4633 - 9:51 AM Changeset [4633] by - One more fix for #260. - 9:07 AM Ticket #261 (Distance calc still giving spurious mismatched dimensionality errors) closed by - fixed: Probably not the only place this bug lives, but fixed in this case, at … - 9:07 AM Changeset [4632] by - Fix for #261 (spurious dimension difference errors) - 3:39 AM Changeset [4631] by - Put in proto for ST_Area(geography). Still need to put in example but … Oct 8, 2009: - 9:16 PM Changeset [4630] by - Add ST_PointOutside() function for testing purposes. - 12:40 PM Changeset [4629] by - Make geographic point initialization slightly more efficient (avoid … - 11:59 AM Changeset [4628] by - Make error messages slightly less opaque - 11:41 AM Changeset [4627] by - Comment out analyze argument in geometry type creation -- it is … - 10:10 AM Changeset [4626] by - Change radius figure to common average. - 10:04 AM Changeset [4625] by - Reformat SQL lines with tabs - 7:16 AM DevWikiGardenTest edited by - (diff) - 7:06 AM Ticket #261 (Distance calc still giving spurious mismatched dimensionality errors) created by - This may be the same issue as #260, but these two geometries have the … - 5:16 AM Ticket #179 (ST_MakeLine and ST_MakeLine_Garry crash server with null arrays) closed by - fixed: Seems okay on my 8.4 (1.4, 1.5) - 4:53 AM DevWikiGardenTest edited by - (diff) - 4:43 AM Changeset [4624] by - revise readme to include link to instructions for garden test - 4:40 AM Changeset [4623] by - Revise to have function list past in as arg to xsltproc - 4:38 AM DevWikiGardenTest edited by - (diff) - 4:07 AM DevWikiGardenTest edited by - (diff) - 3:29 AM Changeset [4622] by - Commit first attempt at working geography index selectivity - the … Oct 7, 2009: - 10:35 PM Changeset [4621] by - ST_Area(geography) implementation and SQL bindings. - 2:02 PM WKTRaster/SpecificationWorking01 edited by - (diff) - 2:00 PM WKTRaster/SpecificationWorking02 edited by - (diff) - 1:59 PM WKTRaster/SpecificationWorking03 edited by - (diff) - 1:57 PM WKTRaster edited by - (diff) - 7:26 AM Changeset [4620] by - Make the calculation of gboxes a little simpler in the db level code. - 5:17 AM Changeset [4619] by - Fix #179: ST_MakeLine and ST_MakeLine_Garry crash server with null … - 5:16 AM Changeset [4618] by - Fix #179: ST_MakeLine and ST_MakeLine_Garry crash server with null … - 5:03 AM Ticket #260 (Geography inconsistent distance behavior from geometry and geography ...) created by - Example with new geography - both the ST_Dwithin and ST_Distance fail … - 4:52 AM Changeset [4617] by - Add table with multiple nulls to garden of geometries. Evidentally -- … - 4:26 AM Ticket #210 (segmentation faults in lwgeom_geos.c:pgis_union_geometry_array) closed by - fixed: Okay this one seems to work fine on my 1.4.1svn install too. … Note: See TracTimeline for information about the timeline view.
http://trac.osgeo.org/postgis/timeline?from=2009-11-06T16%3A13%3A07-0800&precision=second
CC-MAIN-2016-36
refinedweb
4,577
51.11
explanation to your code - Java Beginners explanation to your code sir, I am very happy that you have...)?And i have imported many classes.but in your code you imported a very few classes.sir i need your brief explanation on this code and also your guidance.this I need your help - Java Beginners I need your help What is the purpose of garbage collection in Java, and when is it used? Hi check out this url : I need your help - Java Beginners the file name is ApplicationDelivery.java Your program must follow proper Java...I need your help For this one I need to create delivery class for a delivery service .however, this class should contain (delivery number which This Question was UnComplete - Java Beginners your need.Create a sourceFile and goahead. /** * Program to take an single... to delete the temporary file). **/ public void deleteFile(String fileName java - Java Beginners (); } } /** * Deletes the object file. */ public void deleteFile() { File Controlling your program statements (break, continue, return) in Java programming language. Selection The if statement: To start with controlling statements in Java, lets have.... That is the if statement in Java is a test of any boolean expression. The statement following java beginners - Java Beginners java beginners the patteren u received is not the actual patteren... to sent the different patterns so that your will get the right one...(.) and send your pattern. Thanks java beginners - Java Beginners java beginners let me know the java code for the followign... args[]) { System.out.println("try these........"); System.out.println("your... these........"); System.out.println("your series is....."); for(i=1;i<=3;i Java basics - Java Beginners :// for more code and examples on Java...literals in java program Why we use literals in java program? ...; literals are represented directly in your code without requiring computation. As shown Java programming - Java Beginners to : Thanks...Java programming hi sir, my question are as follows... hope u can solve my java programming problem.. Write a program to develop a class to hold java programming problem - Java Beginners .. programming problem Hello..could you please tell me how can I java - file delete - Java Beginners java - file delete I will try to delete file from particular folder. My code is executed correctly in other project and main function. In same... class DeleteFile { public static void main(String[] args) { String small java project - Java Beginners small java project i've just started using java at work and i need to get my self up to speed with it, can you give me a small java for beginners project to work on. your concern will be highly appreciated Java Basic Tutorial for Beginners Java Basic Tutorial for Beginners - Learn how to setup-development environment and write Java programs This tutorial will get you started with the Java...) More tutorials for beginners Getting Started with Java Learn Java projects for beginners In this tutorial you will be briefed about the Java projects for beginners... will explore your java ideas and your thoughts. It is advised that students... about the nature of your projects being beginners. You should also add Creating your own package in java and subpackage click on the following links Create Your Own Package Create java-io - Java Beginners Thanks...java-io Hi Deepak; down core java io using class in myn... your code do some changes in your code : int r,c; int arr[][]; intialize r java - Java Beginners visit these following pages for your answers : I hope this will help you...java Q 1- write a program in java to generate Armstrong numbers Java Syntax - Java Beginners /java/beginners/array_list_demo.shtml Thanks...Java Syntax Hi! I need a bit of help on this... Can anyone tell... Lists unless you provide your own method java.util Implementations of List Java Objects - Java Beginners Java Objects Hi I have the following question, What method in Java is used to destroy your objects. Thanks in advance Ask Programming Questions and Discuss your Problems Ask Programming Questions and Discuss your Problems Dear Users, Analyzing your plenty of problems and your love for Java and Java and Java related fields, Roseindia Technologies java - Java Beginners ; Advanced->Environment Variables and create your JAVA_HOME, CLASSPATH etc. there. The value of JAVA_HOME variable is the root folder location of your JAVA... variable is the "bin" folder location in your JAVA Installation. Eg: C:\Program Files java programming - Java Beginners java programming asking for the java code for solving mathematical equation with two unknown .thnx ahead.. Hi Friend, Please clarify your question. Which mathematical equations you want to solve? Thanks java program - Java Beginners java program 1.java program to Develop a multi-threaded GUI application of your choice. 2.program to Develop a template for linked-list class along with its methods in Java java - Java Beginners java Develop a multi-threaded GUI application of your choice. Hi Friend, Please visit the following link: Hope that it will be helpful Java - Java Beginners OutOfMemoryError error is your Java program, you should consider increasing the JVM space...Java 1.How do you declare the starting point of a Java application? 2. What happened if your program terminates with an OutOfMemoryError a java code that takes input regular expression and a string and produces as output all lines of the string that contains a substring denoted by regular expression Hi Friend, Please clarify your problem java - Java Beginners to your employer, peers and customers that you are proficient in Java...java What is SCJP and write its benefits. How it is useful for JAVA programmers. Hi Friend, Sun Certified Java Programmer java - Java Beginners to your employer, peers and customers that you are proficient in Java...java What is SCJP and write its benefits. How it is useful for JAVA programmers. Hi Friend, Sun Certified Java Programmer java Frames - Java Beginners java Frames Hi friend, Thanks for your reply. Can i use frames in socket program. Hi anita, I am sending you a link...:// Thanks JAVA statement - Java Beginners JAVA statement The import statement is always the first noncomment statement in a Java program file.Is it true? Hi Friend, No,it is not true.If your class belongs to a package, the package statement should Java Query - Java Beginners Java Query Q.What is "Identifier Expected Error" in Java and when... the code inside the class body. 3)If you will not call your method properly... will not declare your class, constructor and method properly. 5)If your class name Java Tutorial for Beginners This Java tutorial for beginners is very useful for a person new to Java. You.... First we will ensure that the Java is installed and configured on your computer..., first install and configure Java on your computer. Lets write an application Java Project - Java Beginners Java Project Dear Sir, Right now i am working in java image..., shape . So how to split one image like Texture, color, shape , kindly give your..., Please visit the following links: java - Java Beginners java your website is best look i am intension of this java... the following links: http insertionSort - Java Beginners insertionSort 1 Problem Task Your task is to develop part...)); } } For more information on Java Array visit to : Thanks Java Tutorial for Beginners in Java Programming Java Tutorial for Beginners - Getting started tutorial for beginners in Java programming language These tutorials for beginners in Java will help you.... Let's get started with the Java Tutorial for Beginners Here are the basic java - Java Beginners java how to manipulate thumb impression using java.for that what technology i should use? Give your details correctly. Cant able to understand your question Core java - Java Beginners multiple inheritance from java is removed.. with any example.. Thank you in advance for your valuable answer Hi, I am sending you...:// Thanks Hi java hi, i'm chandrakanth.k. i dont know about java. but i'm...:// project - Java Beginners java project hi this amol i wish to do a project on banking software in java can u help how can i help u .. and what the kind... adam certified java programmer certified websphere ya sure!! what java encoding - Java Beginners :// Your comments are welcome regarding...java encoding how to encode String variable to ASCII,FromBase64String,UTF8 Hi It's very easy to convert String Variable into ASCII java methods - Java Beginners java methods Hello, what is difference between a.length() and a.length; kindly can anybody explain. thanks for your time Hi Friend, length() is used to find the length of string whereas length is used Error - Java Beginners Java Error Hello Sir ,How i Can solve Following Error Can not find Symbol - in.readLine(); Here Error Display at in Hi Friend, It seems that you haven't defined the object 'in'.Anyways, Send your code so that we java programming - Java Beginners java programming heloo expert, thanks for your coding i recently asked about the question on abt how to program a atm system may i noe in which platform can i run those codes ?? netbeans ?? any suggestions core java - Java Beginners core java when write java program in editplus.then save&compile the file. 1-text file 2-class file 3-bak file how can get this files plz tell me Hi Friend, Please clarify your problem. Thanks java - Java Beginners java E:\sushant\Core Java\threading>javac -Xlint...; This warning comes when methods of old Java Versions are invoked. Just ignores this warning and run the program. Your program will successfully runs.  JAVA PROJECT - Java Beginners JAVA PROJECT Dear Sir, I going to do project in java " IT in HR, recruitment, Leave management, Appriasal, Payroll" Kindly guide me about... etc. I am doing project first time. waiting for your reply, Prashant Package in Java - Java Beginners myPackage and under that you can see your java file andd class file of packageExp... that folder of related java files. ex, if you write a jaava class like Java Compilation - Java Beginners Java Compilation How do you design a Java application that inputs a single letter and prints out the corresponding digit on a telephone? It should... for your Help Java Stack - Java Beginners Java Stack Can you give me a code using Java String STACK using the parenthesis symbol ( ) the user will be the one to input parenthesis... for your help sample output Enter a string: ()() VALID Enter a string Java Project - Java Beginners Java Project Hey, thanks for everything your doing to make us better. Am trying to work on a project in java am in my first year doing software Engineering and the project is m end of year project. Am really believing JAVA - Java Beginners JAVA Dear Sir, Kindly arrange to send urgently the answers to the following in JAVA rega1.JAVA rding True or False, please. 1. JAVA is pure object oriented Language .. True/False 2. Private number of your Class - Java Beginners java what does the following mean? 1.this 2.is-a 3.has-a Hi Friend, Please clarify your problem. Thanks java - Java Beginners java /* Write a program that converts a number entered in Roman numerals to decimal. Your program should consists * of a class, say Roman... * L 50 * X 10 * V 5 * I 1 * * d) test your program using java - Java Beginners java what is the difference between classpath and path  ... files While Classpath is Enviroment Variable that tells JVM or Java Tools where to find your classes. In PATH Environment variable you need to specify only Link List proble, - Java Beginners /java/beginners/linked-list-demo.shtml Hope that it will be helpful for you... Node to your list and Delete to your list...... my brain is bleeding can you Java Compilation - Java Beginners Java Compilation Dear Sir, Thanks for giving the code for the question which i posted. I went through the program "Write a Java program to read... that each DNA sequence is just a String). Your program should sort the sequences java - Java Beginners : C:\mywork> path c:\Program Files\Java\jdk1.5.0_09\bin;%path% 4. Compile your class(es): C:\mywork> javac *.java 5. Create... Command Prompt. 2. Navigate to the folder that holds your class files java - Java Beginners error is could not create a java virtual machine.. The error UnsupportedClassVersionError means you are trying to run Java code compiled with a new version of Java with an old version of the runtime environment. Version 49.0 java array - Java Beginners java array 1.) Consider the method headings: void funcOne(int...]; int num; Write Java statements that do the following: a. Call... and Alist, respectively. THANK YOU GUYS FOR SHARING YOUR IDEA ABOUT THIS.THANK java problem - Java Beginners java problem Suppose list is an array of five components of the type int.What is stored in list after the following Java code executes? for (i...] - 3; } THANK YOU GUYS FOR SHARING YOUR IDEA ABOUT THIS.THANK YOU SO MUCH!ILL Java - Java Beginners Please let me know the way Thanks in advance Awaiting your reply Hi Friend your first answer is absolutely right.i am...:// java program - Java Beginners java program plzzzzz help me on this java programming question? hello people.. can u plzzzzzzzzzzzzzzzzzzz help me out with this java programm. its... 1. Enter your move: D a ----------------------- Board Positions + A o java program - Java Beginners java program what is the program the a simple program in Java that allows a user to enter 3 integers representing the lengths of the sides... the following code to solve your problem. class Triangle { public static Java Array - Java Beginners Java Array Can someone help me to convert this to java? I have an array called Projects{"School", "House", "Bar"} I want all the possible... -> ["School", "House", "Bar"] Thanks in advance for your time java program - Java Beginners java program i have two classes like schema and table. schema class... name, catelogue,columns, primarykeys, foreignkeys. so i need to write 2 java... retrieves data from database(mysql). Hi friend, Please specify your java - Java Beginners java hello, Please help me with code for the following well,i am supposed to create a java program that recieves information from other programs... you. and your previous code really helped me alot thanks alot.   java - Java Beginners it is the ability to "reflect" on the structure of your program. For more information, visit the following link: Thanks java code - Java Beginners java code Write a program that performs flashcard testing of simple mathematical problems. Allow the user to pick the category. Repetitively... and have the following concepts in your program: ? Array of objects. ? Class core java - Java Beginners ; Hi friend, As per your requirement code for equilateral traiangle...:// Thanks java - Java Beginners it is the ability to "reflect" on the structure of your program. For more information, visit the following link: OOP - Java Beginners ); } For more information on Java visit to : Thanks...) { one =a; two=b; } } Hi friend, We check your code having java - Java Beginners java All programming languages have different pattern of data declaration, data types and operators it supports. Hi Friend, Please clarify your problem. Thanks Java Not running - Java Beginners Java Not running Hi, I tried to create a grade(type array...) { System.out.print("Your Final Grade is A"); } else if ((average <= 60) && (average < 70)) { System.out.print("Your Final Grade java - Java Beginners java Reading a file content in java(with concept & source code) Reading contents of any URL in java (with concept & source code) Hi...) { System.out.println("Sorry! Your given file is too large."); System.exit(0 java - Java Beginners java i have to make a programm in java to multiply any number with 100 without using any math operator.I hav got this answer. but i hav to make... to solve your problem. Here is a sample code which you can use for your help Java - Java Beginners codes (sorting and partitioning) to Java. It should be able to execute any set of data. Note:- You should provide the test data and the results of your program. (d) Modify your program to show the performance of the quicksort algorithm when java Code - Java Beginners java Code Write a Java Program that find out the age of person...{ public static void main(String[]args){ System.out.println("Enter your Date...) - cal1.get(Calendar.YEAR) + factor; System.out.println("Your age java - Java Beginners java java.lang.NoClassDefFoundError runtime exception check whatever class u used are exist or not in your current project path Thrown if the Java Virtual Machine or a ClassLoader instance tries to load java code - Java Beginners java code Sir Ineed one code for Matrix "Take a 2D array and display all elements in matrix form using only one loop " request this question i asked last one week back but Your people send me one answer Java - Java Beginners Java Dear Sir, I am working image searching in java project , in that i have to split one image like texture , color , shape then after... to what i am giving. Kindly give your suggestions and i need source coding also java - Java Beginners java write a java programe to get the multiplication table from 1 - 10.. Hi Friend, Here we are sending you a code which prints the table for any number you want to insert. Please check the code to solve your Java Program - Java Beginners Java Program Hi I have this program I cant figure out. Write a program called DayGui.java that creates a GUI having the following properties... Mnemonic B Add individual event handlers to the your program so that when java multithread - Java Beginners java multithread Hi, Thanks for your multithreading java code. It helped me a lot. But i have one issue. My code is look like this. public class classname extends Thread { public classname() { super java answerURGENT! - Java Beginners java answerURGENT! consider folowing method headings: public static... these parameters in a call to the method test? e.Write a Java statement that prints..., and 'z'. f.Write a Java statement that prints the value returned by method two Java examples for beginners of examples. Java examples for beginners: Comparing Two Numbers The tutorial provide...In this tutorial, you will be briefed about the word of Java examples, which help to understand functioning of different Java classes and way. It also java - Java Beginners in Java. PROGRAM TO IMPLEMENT LINKED LIST Implementation File... s1; node newnode=new node(); System.out.println("Input your data for your...; newnode.nxt=head.nxt; head.nxt=newnode; System.out.print("Your node java - Java Beginners java how to wtite a program that evaluates the series for some integer n input by the user where n! is a factorial of n Hi Friend, Please clarify your problem. Do you want to print the factorial of a number
http://www.roseindia.net/tutorialhelp/comment/41665
CC-MAIN-2014-35
refinedweb
3,142
56.86
A new flutter package project. For help getting started with Flutter, view our online documentation. For help on editing package code, view the documentation. Add this to your package's pubspec.yaml file: dependencies: easy_camera: ^0.0.2 You can install packages from the command line: with Flutter: $ flutter packages get Alternatively, your editor might support flutter packages get. Check the docs for your editor to learn more. Now in your Dart code, you can use: import 'package:easy_camera/easy_camera. Fix lib/easy_camera.dart. (-0.50 points) Analysis of lib/easy_camera.dart reported 1 hint: line 20 col 24: The exception variable 'e' isn't used, so the 'catch' clause can be removed. easy_camera.dart. Package is pre-v0.1 release. (-10 points) While there is nothing inherently wrong with versions of 0.0.*, it usually means that the author is still experimenting with the general direction of the API.
https://pub.dartlang.org/packages/easy_camera
CC-MAIN-2019-04
refinedweb
150
61.02
iPcCraftController Struct Reference This is a property class used to control movement of an hovercraft. More... #include <propclass/craft.h> Detailed Description This is a property class used to control movement of an hovercraft. Allows changing yaw and pitch, as well as thruster and afterburner, and supresses angular velocity of object. This property class supports the following actions (add prefix 'cel.craft.action.' if you want to access this action through a message): - SetSliding: parameters 'enabled' (bool: optional). If 'enabled' is not given then default is true. - SetBraking: parameters 'enabled' (bool: optional). If 'enabled' is not given then default is true. - SetThruster: parameters 'enabled' (bool: optional). If 'enabled' is not given then default is true. - SetAfterBurner: parameters 'enabled' (bool: optional). If 'enabled' is not given then default is true. This property class supports the following properties: - turnmax (float, read/write): maximum turning. - turnacc (float, read/write): turning rate. - pitchmax (float, read/write): maximum pitch. - pitchacc (float, read/write): pitch rate. - roll (float, read/write): rolling ratio. - thrust (float, read/write): thruster force. - topspeed (float, read/write): thruster top speed. - atopspeed (float, read/write): afterburner top speed. - brakingspeed (float, read/write): braking speed. - decelrate (float, read/write): deceleration rate. - rvelratio (float, read/write): redirected velocity ratio. Definition at line 56 of file craft.h. Member Function Documentation Turn off afterburner. Turn on afterburner. Turn off brakes. Turn on brakes. Report whether thruster is on (true) or turned off (false). Set the objects up and down turning acceleration. Set the objects left and right turning acceleration. Set the top speed when afterburner is on. Above this speed the afterburner will be disabled. Set the braking force. It is used to slow down the craft when brakes are on. Set the deceleration rate. It is used to slow down the craft when thruster is off. Set the objects maximum up and down turning velocity. Set the objects maximum left and right turning velocity. @@@ Document me ! Set the roll factor. Roll is how much a craft rolls when turning left and right. Set the thrust force of the craft. Set the Top Speed of the thruster. Above this speed the thruster will be disabled. Turn off sliding. Turn on sliding. When sliding, the craft velocity is independent of its orientation. Start the object turning down. Start the object turning left. Start the object turning right. Start the object turning up. Stop the object turning down. Stop the object turning left. Stop the object turning right. Stop the object turning up. Turn off thruster. Turn on thruster. The documentation for this struct was generated from the following file: Generated for CEL: Crystal Entity Layer 2.0 by doxygen 1.6.1
http://crystalspace3d.org/cel/docs/online/api-2.0/structiPcCraftController.html
CC-MAIN-2014-10
refinedweb
446
64.17
I have the following script: - Code: Select all ####open_file import codecs input_file=codecs.open("corpus3_tst","r",encoding="utf-8") lines=input_file.readlines() for line in lines: line=line.rstrip() print line # define method def replace_all(text, dic): for i, j in dic.iteritems(): text = text.replace(i, j) return text # text for replacement my_text = line # dictionary with key:values. # replace values reps = {'dog':'ANIMAL', 'cat':'ANIMAL', 'pigeon':'ANIMAL'} print reps # bind the returned text of the method # to a variable and print it txt = replace_all(my_text, reps) print txt My input text looks like this: - Code: Select all dog walk 1 cat walk 2 piegon bark 3 However, the script only prints out the last line. -- and is not replacing anything... This is the result, I am getting. - Code: Select all piegon bark 3 Any insight to how I can fix this? Thank you
http://www.python-forum.org/viewtopic.php?f=6&t=8277
CC-MAIN-2016-26
refinedweb
144
65.93
IRC log of w3cdev on 2009-11-05 Timestamps are in UTC. 21:49:16 [RRSAgent] RRSAgent has joined #w3cdev 21:49:16 [RRSAgent] logging to 21:49:24 [IanJ] rrsagent, set logs public 21:50:18 [TabAtkins] TabAtkins has joined #w3cdev 21:50:35 [caribou] caribou has joined #w3cdev 21:50:40 [dom] dom has joined #w3cdev 21:51:14 [dbaron] dbaron has joined #w3cdev 21:52:33 [gerald] gerald has joined #w3cdev 21:53:10 [smfr] smfr has joined #w3cdev 21:53:22 [Tobias] Tobias has joined #w3cdev 21:53:27 [mib_igkf72] mib_igkf72 has joined #w3cdev 21:53:42 [darobin] darobin has joined #w3cdev 21:54:02 [AnnB] AnnB has joined #w3cdev 21:54:12 [darobin] is this mostly for scribing or can we also make fools of ourselves? 21:54:23 [jun] jun has joined #w3cdev 21:54:53 [timeless] ScribeNick: timeless 21:54:55 [TabAtkins] darobin: Do we need to choose one? 21:54:55 [timeless] Scribe: timeless 21:55:00 [fantasai] fantasai has joined #w3cdev 21:55:28 [timeless] RRSAgent: make minutes 21:55:28 [RRSAgent] I have made the request to generate timeless 21:56:01 [darobin] TabAtkins: well, so long as we don't talk about unicorns 21:56:21 [soonho] soonho has joined #w3cdev 21:57:00 [nathany] nathany has joined #w3cdev 21:57:38 [nathany] nathany has joined #w3cdev 21:57:52 [fantasai] RRSAgent: make minutes 21:57:52 [RRSAgent] I have made the request to generate fantasai 21:57:59 [Vagner-br] Vagner-br has joined #w3cdev 21:59:14 [nathany] nathany has joined #w3cdev 21:59:22 [annevk] annevk has joined #w3cdev 21:59:59 [Arron] Arron has joined #w3cdev 22:02:36 [IanJ] -> 22:03:28 [dom] "it's 203" — as in "203 Non-Authoritative Information" ? :) 22:03:39 [timeless] IanJ: Welcome everyone 22:03:43 [timeless] ... My name is Ian Jacobs 22:03:51 [timeless] ... welcome to the first developer meeting we've ever had 22:03:56 [timeless] ... we have a great lineup 22:04:01 [timeless] ... and i will get out of the web 22:04:05 [timeless] s/web/way/ 22:04:15 [timeless] ... your names are up here, so just get up on time 22:04:27 [timeless] ... We have w3c groups and non w3c groups represented 22:04:35 [timeless] ... timeless has accepted to scribe 22:04:48 [timeless] ... the proceedings will all be public 22:04:58 [timeless] Arun: My name's Arun for people who haven't met me personally 22:05:03 [timeless] ... I work for Mozilla on Firefox 22:05:07 [timeless] ... on evangelism 22:05:16 [krisk] krisk has joined #w3cdev 22:05:16 [maxf] maxf has joined #w3cdev 22:05:16 [timeless] ... A lot of what we do is reach out to developers 22:05:21 [timeless] ... to see what we should be doing 22:05:28 [timeless] ... How many people are web developers? 22:05:34 [timeless] ... that's the lion's share 22:05:44 [timeless] ... How many people are in the business of developing web applications? 22:05:48 [timeless] ... interesting a smaller number of hands 22:06:07 [timeless] ... Both as part of my work at mozilla and as someone who works on standards 22:06:16 [myakura] myakura has joined #w3cdev 22:06:24 [timeless] ... I'm an author of a spec in W3C ... File API (?) 22:06:36 [smfr] yes, File APIO 22:06:36 [timeless] ... I also work on a spec outside W3C WebGL 22:06:37 [smfr] er, API 22:06:46 [timeless] ... We work in Web Apps WG 22:06:58 [timeless] ... You should be able to access databases on a client just as you can on a server 22:07:11 [timeless] ... Today you guys have come to the hallowed precincts of a sausage factory 22:07:19 [timeless] ... you're actively following a list-serve 22:07:26 [timeless] ... a lot of developers out there don't follow 22:07:36 [timeless] ... to them, they may be happy to take an API and run with it 22:07:49 [timeless] ... We'd like to get them more involved in standards 22:07:54 [marie] marie has joined #w3cdev 22:08:06 [timeless] ... For examples, two browser vendors released browsers with a SQL API 22:08:20 [timeless] ... but two vendors: Mozilla and Microsoft indicated they don't want to do this 22:08:33 [timeless] ... We got feedback from developers indicating they didn't want a SQL API 22:08:42 [timeless] ... ... In html5 and web apps 22:08:49 [timeless] ... How many people recognize this movie? 22:08:56 [timeless] [Back to the future Movie picture] 22:09:04 [timeless] ["Where we're going, we don't need roads"] 22:09:23 [timeless] ... I wanted to condense how standards work into one day 22:09:35 [timeless] ... In fact, there's a story on CNet referring to a date in 2012 22:09:51 [timeless] ... So I thought about last night how to condense this into a day 22:09:59 [timeless] [Slide: Morning, Afternoon, Evening] 22:10:04 [timeless] ... Basic rules of thumb on Time 22:10:10 [timeless] ... Morning is the period that already passed 22:10:15 [timeless] ... Morning is a little while ago 22:10:27 [timeless] ... Features that are already available in browsers today 22:10:48 [timeless] ... The afternoon is things you can build on right now, but you may not be able to do that in a cross platform manner 22:11:09 [timeless] ... How many people do things for some platforms and worry about other browsers (esp IE6) 22:11:17 [timeless] ... The evening is stuff that has yet to come 22:11:22 [timeless] ... it holds in it the promise of a good night 22:11:31 [timeless] ... it holds in it the promise of things that are still being fleshed out 22:11:41 [timeless] ... it's a bit more than that, but it's not something that you can rely on in your bag of tricks 22:11:51 [timeless] ... The morning includes things like LocalStorage 22:12:04 [timeless] ... It's supported in IE8, Opera, Safari, Firefox 3.X (?) 22:12:11 [timeless] ... As for XMLHttpRequest 22:12:12 [AtticusLake] AtticusLake has joined #w3cdev 22:12:23 [timeless] ... it's implemented in most browsers (?) 22:12:33 [timeless] .. and XDomainRequest (in IE8) 22:12:48 [timeless] ... and they're implemented with the same security approach (?) 22:13:10 [timeless] ... And you can grab me and we can look at snippets of code where I can show you how you can do this in a cross browser manner 22:13:20 [timeless] ... postMessage is also available in IE8, Firefox, ... 22:13:24 [benjick] benjick has joined #w3cdev 22:13:31 [timeless] ... CSS2.1 support, reasonably good, even in IE 22:13:39 [timeless] ... you can look to it for better support than before 22:13:41 [krisk] IE8 22:13:50 [timeless] s/in IE/in IE8/ 22:13:56 [timeless] ... That's the morning 22:14:03 [timeless] ... The afternoon gets a little bit more interesting 22:14:13 [timeless] ... it introduces more things about the platform 22:14:20 [timeless] ... things aren't as mature 22:14:32 [timeless] ... supported in Firefox (25% of the market) 22:14:37 [timeless] ... Safari, and Chrome 22:14:45 [timeless] [Slide: Afternoon] 22:14:49 [timeless] ... HTML5 Canvas 22:14:52 [timeless] ... HTML5 Video 22:14:57 [timeless] ... HTML5 Drag and Drop 22:15:00 [Tobias] Is this slide online somewhere? 22:15:02 [timeless] .... CSS WebFonts (...) 22:15:11 [timeless] ... and geolocation 22:15:18 [timeless] [ Demo ] 22:15:29 [timeless] ... This demo brings together a lot of pieces of the platform 22:15:36 [timeless] ... it sorta does, but the color isn't so great 22:15:43 [timeless] ... I've got a colleague of mine 22:15:46 [Kangchan] Kangchan has joined #w3cdev 22:15:57 [timeless] ... doing a fanning gesture between two iPhones 22:16:04 [mauro] mauro has joined #w3cdev 22:16:06 [azunix] azunix has joined #w3cdev 22:16:13 [timeless] ... if I click here, I've got a video supplanted between the two iPhones 22:16:32 [dom] Hi Developepers! 22:16:34 [timeless] ... this only works if video is part of the browser, I can't do this with Flash 22:16:40 [timeless] s/pep/p/ 22:16:49 [IanJ] demo dynamically injecting content into a canvas (within a video element) 22:16:58 [timeless] ... I can also embed a video inside the animation 22:17:06 [timeless] ... This works when video is a first class citizen of the web 22:17:10 [timeless] ... and this comes from html5 22:17:12 [IanJ] Arun: You get this flexibility when video is a first-class citizen of hte web 22:17:14 [timeless] ... here's another demo 22:17:34 [JonathanJ] JonathanJ has joined #w3cdev 22:17:44 [JonathanJ] rrsagent, draft minutes 22:17:44 [RRSAgent] I have made the request to generate JonathanJ 22:17:49 [timeless] ... The color isn't great 22:17:51 [timeless] ... [about the demo] 22:18:00 [timeless] ... the pixels of the video are dumped into a canvas 22:18:10 [timeless] ... i'm going to label extracted bits 22:18:19 [sof] sof has joined #w3cdev 22:18:20 [timeless] [names which people might not recognize] 22:18:39 [tantek] tantek has joined #w3cdev 22:18:53 [timeless] [ fwiw, this usually works better, but for some reason one of the pins on the cable seems to be doing strange things ] 22:19:49 [timeless] [arun uses the demo to establish facial recognition] 22:20:12 [timeless] ... this demo uses localStorage 22:20:17 [timeless] ... because i switch between two urls 22:20:24 [JonathanJ] 22:20:49 [timeless] ... I have a lot of demos, please come visit me later 22:21:03 [dsr] dsr has joined #w3cdev 22:21:03 [timeless] ... this is in the context of "the afternoon" 22:21:10 [timeless] ... I wanted to show you this demo 22:21:19 [timeless] [IanJ: t-minus 10 minutes] 22:21:24 [dom] [very impressive demon for video] 22:21:29 [timeless] ... I'm going to open font book, a handy dandy application on my mac 22:21:37 [dom] s/demon/demo/ 22:21:44 [timeless] ... what i'm going to do now 22:21:51 [timeless] ... is show a bunch of technologies working together 22:21:55 [timeless] ... HTML5 Drag and Drop 22:22:00 [timeless] ... CSS Font Face 22:22:04 [timeless] ... HTML localStorage 22:22:12 [timeless] ... X (?) 22:22:20 [timeless] ... I'm going to drag a font onto the page 22:22:26 [timeless] ... and dropped it onto the page 22:22:38 [timeless] ... and the page restyled itself using the font 22:22:46 [timeless] [drags a font onto the page] 22:23:02 [timeless] [arun misses the drop target] 22:23:02 [JonathanJ] 22:23:14 [timeless] ... I'm going to drop in garamond 22:23:23 [timeless] ... now you can see the page has taken on a different look 22:23:28 [timeless] ... this is directly using localStorage 22:23:33 [timeless] ... to store the stuff that I dragged 22:23:44 [timeless] ... it's using the drag and drop api 22:23:54 [timeless] ... and it's using font-face to set the font 22:24:04 [timeless] ... and it's using contentEditable 22:24:11 [timeless] s/X (?)/contentEditable/ 22:24:20 [timeless] ... if i got to a page flickr.com/maps 22:24:26 [timeless] ... I can share my location 22:24:39 [timeless] [ Arun shares our location ] 22:24:52 [timeless] ... this is the GeoLocation API introduced in Firefox 3.5 22:25:10 [timeless] .... and I can drag my mouse over locations and see pictures from there 22:25:21 [timeless] ... this is flikr using pieces of an API that recently became a spec 22:25:26 [r12a-nb] r12a-nb has joined #w3cdev 22:25:28 [timeless] ... and now... "Evening" 22:25:37 [timeless] ... We're looking at pushing the hardware 22:25:43 [brutzman] brutzman has joined #w3cdev 22:25:45 [timeless] ... we're discussing storage 22:25:52 [timeless] ... and orientation events (angles) 22:25:58 [timeless] ... multitouch 22:26:14 [timeless] [Arun holds up an n900 proto] 22:26:22 [timeless] ... Firefox Mobile will ship for this device 22:26:34 [timeless] [Arun demos playing a game by tilting his macbook pro] 22:26:47 [timeless] ... this is a pretty popular go-kart game 22:27:12 [timeless] [ demos an expanding red panda but tilting his laptop] 22:27:19 [timeless] ... this is stuff we'd like to do in the evening 22:27:24 [timeless] ... pending discussion with other folks 22:27:36 [timeless] ... this is stuff in the evening, the promise of a tomorrow, or a tomorrow morning 22:27:49 [timeless] ... and of course there's 3d graphics 22:27:52 [timeless] ... 3d graphics are extensions of the html5 canvas element 22:28:00 [timeless] ... and exposes a new way to do hardware accelerated 3d graphics 22:28:13 [Kangchan] Kangchan has joined #w3cdev 22:28:13 [timeless] ... these are the things i'm talking about from the promise of an evening 22:28:26 [timeless] Van Ryper (Krillion): 22:28:40 [timeless] ... I've heard a lot about the web3d consortium 22:28:49 [timeless] Arun: the deliverable of web3d (x3d) 22:28:58 [timeless] ... is an interchange format that represents 3d graphics 22:29:06 [timeless] ... it's the ability for javascript to parse such graphics 22:29:15 [timeless] ... and use webGL to expose those graphics 22:29:21 [timeless] Robin: and someone's done that 22:29:27 [timeless] Arun: X3D OM 22:29:42 [timeless] ... The promise of today is that javascript's performance have improved so much 22:29:58 [timeless] IanJ: so... Don Bredsman a chance to speak 22:30:11 [timeless] DonB: We'll be showing this tomorrow morning at X-oclock 22:30:25 [smfr] 9-oclock 22:30:32 [caribou] s/Bredsman/Brutzman 22:30:34 [darobin] X3DOM: 22:30:37 [timeless] Tom Strobers (user): 22:30:44 [timeless] ... Java - JavaScript ? 22:30:52 [krisk] ...in the HTML5 WG Meeting 22:30:52 [timeless] Arun: that's an interesting question 22:31:01 [timeless] ... and i'll speak in a continum 22:31:11 [timeless] ... java has historically been used as a technology that can be used anywhere 22:31:16 [timeless] ... and so in fact can javascript 22:31:22 [timeless] ... javascript as it now runs in browsers 22:31:38 [timeless] ... and browsers run on mobile devices 22:31:46 [sof] sof has left #w3cdev 22:31:51 [timeless] Fantasai: JavaScript and Java has no relation 22:31:58 [timeless] ... JavaScript runs natively in the browser 22:32:11 [timeless] ... whereas java runs separetly 22:32:28 [timeless] s/separetly/separately/ 22:32:38 [timeless] Arun: JavaScript is the defacto language of the web 22:32:53 [timeless] IanJ: I'll see if we can talk about moving things into programming languages and out of declarative languages 22:32:57 [timeless] ... I want to keep things moving 22:33:06 [timeless] ... we have a lot of interesting speakers 22:33:10 [timeless] ... we have 3 bottles of wine 22:33:26 [timeless] ... You can use a business card, or just a piece of paper 22:33:34 [timeless] ... thank you Arun 22:33:41 [timeless] ... fantasai come up 22:33:53 [timeless] fantasai: I'm an invited expert of the CSS WG 22:34:05 [timeless] ... I've brought along a number of people from the CSS WG with exciting demos 22:34:24 [timeless] ... first speaker is David Baron 22:34:37 [timeless] ... he writes specs, makes interesting comments, ... 22:34:40 [timeless] [ laughter ] 22:34:55 [timeless] dbaron: ... 22:35:08 [timeless] ... there've been a lot of demos of css stuff floating around lately 22:35:20 [timeless] ... i've wanted to demo a few features that are not the ones that get the most press 22:35:29 [timeless] ... the stuff that people demo are these new visual effects 22:35:39 [timeless] ... shadows, columns, rounded corners, transforms 22:35:43 [timeless] ... one of them is border image 22:35:55 [timeless] ... the ability to take an image and then logically that image gets split up into 9 pieces 22:36:07 [timeless] ... and then you can use those slices to form the border of something else 22:36:14 [timeless] [demo] 22:36:24 [timeless] ... and this will now resize as i resize the window 22:36:31 [timeless] ... another feature that's been in specs for almost 10 years 22:36:38 [timeless] ... but that hasn't been implemented until recently 22:36:41 [timeless] ... is font-size-adjust 22:36:50 [timeless] ... it lets you get better font behavior 22:37:00 [AtticusLake] AtticusLake has left #w3cdev 22:37:03 [timeless] ... one of the problems is that font size is whatever the font designer wants it to mean 22:37:08 [IanJ] rrsagent, make minutes 22:37:08 [RRSAgent] I have made the request to generate IanJ 22:37:10 [timeless] ... what font-size-adjust lets you do 22:37:25 [timeless] ... is that instead of letting font size do what it does 22:37:51 [timeless] ... font-size-adjust lets you operate on the x height 22:38:01 [timeless] [ demo of font-size-adjust ] 22:38:12 [timeless] ... another feature that's now pretty widely implemented 22:38:16 [timeless] ... Mozilla, WebKit, and Opera 22:38:20 [timeless] ... are CSS Media Queries 22:38:29 [timeless] ... which let you change the style of the web page 22:38:37 [timeless] ... based on characteristics of the thing that's displaying it to you 22:38:51 [timeless] ... so you can change the page based on, e.g. the width of the window 22:39:11 [timeless] ... e.g. you can specify something that only operates for windows >22em's wide 22:39:22 [timeless] ... the final thing, is a feature that I think is only implemented in Mozilla 22:39:43 [timeless] ... many designers have struggled with css using intrinsic widths in basic ways 22:39:52 [timeless] ... there's the longest word of the line 22:39:57 [timeless] ... and (??) 22:40:07 [timeless] ... so you can say the width is the min-content-width 22:40:13 [timeless] ... or the width is the mac-content-width 22:40:19 [timeless] ... or that it fits the container 22:40:24 [timeless] ... which is the same algorithm used for tables 22:40:34 [timeless] fantasai: we'll have a couple of questions after each talk 22:40:40 [timeless] dbaron: questions now... 22:40:47 [timeless] VanR: 22:40:54 [timeless] ... how pervasive are things 22:41:11 [timeless] dbaron: most of these that i've demo'd are supported in Firefox, Opera and Safari, but not IE 22:41:24 [IanJ] quirksmode.com suggested 22:41:26 [timeless] VanR: ... meta question, is there a way to see supported list 22:41:33 [timeless] Tab Atkins: 22:41:42 [timeless] ... quirksmode ... and ... 22:41:47 [mauro] mauro has left #w3cdev 22:41:50 [cappert] cappert has joined #w3cdev 22:41:52 [timeless] unknown-speaker: 22:41:58 [timeless] s/speaker/questioner/ 22:42:03 [timeless] ... status of a test suite? 22:42:04 [IanJ] s/unknown-speaker/Ian: 22:42:13 [timeless] fantasai: we're working on it 22:42:19 [timeless] ... next speaker is Tab Atkins 22:42:22 [timeless] TabA: 22:42:33 [timeless] ... i just got brought in based on my work on the gradient spec for CSS3 22:42:44 [caribou] s/unknown-questioner/Ian/ 22:42:46 [timeless] ... I'm going to take this opportunity to go over how spec work is done 22:43:04 [timeless] ... because page designers often wonder how to get things added 22:43:06 [timeless] ... steps: 22:43:16 [timeless] ... look at the problem and figure out what the actual problem is 22:43:17 [dbaron] my slides were at 22:43:29 [timeless] ... and then there's a mailing list 22:43:35 [timeless] ... it's a public list 22:43:43 [timeless] ... I have an example 22:43:43 [Tobias] thanks dbaron 22:43:45 [timeless] ... gradients 22:43:57 [timeless] ... Safari introduced experimental support for css gradients in 2008 22:44:04 [timeless] ... I don't know if these will work in Chrome 22:44:12 [timeless] ... that's ok... I have other things that will work 22:44:27 [timeless] ... We kick things around on the mailing list 22:44:35 [timeless] ... later Mozilla created something similar 22:44:38 [smfr] TabAtkins was showing: 22:44:45 [timeless] ... they said that they didn't like the way it was done 22:44:58 [benjick] IE have had CSS gradients for ever! 22:44:59 [timeless] ... each vendor uses its own prefix (-webkit-, -moz-) 22:45:01 [smfr] now showing: 22:45:07 [timeless] ... not the bad old browser wars 22:45:28 [timeless] ... gradients can be done in CSS, performantly 22:45:36 [timeless] ... without the network bandwidth 22:45:46 [timeless] ... the problem with gradients, was the syntax 22:45:52 [timeless] ... we kicked it around on the mailinglist 22:46:00 [timeless] ... it's all public, you can read it on the mailinglist 22:46:07 [timeless] ... what it ended up with was a proposal by me 22:46:14 [Arron] www-style archive : 22:46:14 [timeless] ... i just proposed it on the list 22:46:22 [timeless] ... it grew out of discussions with people 22:46:30 [smfr] Current proposal:- 22:46:30 [timeless] ... talking about what people were trying to do 22:46:41 [timeless] [shows the detailed description of the proposal ?] 22:46:50 [timeless] ... these are based on the firefox implementation 22:46:55 [timeless] ... this is a Minefield build of Firefox 22:46:58 [timeless] ... it's now in the nightlies 22:47:07 [timeless] dbaron: it will be in Firefox 3.6 22:47:12 [timeless] ... as of a few hours ago 22:47:23 [timeless] TabA: so 3.6 will have the new syntax 22:47:31 [timeless] ... so you can do things like you want to do 22:47:49 [timeless] [ demos creating _beautiful_ gradients ] 22:47:55 [timeless] ... this is directly using the syntax 22:48:03 [smfr] TabAtkins is testing with 22:48:05 [timeless] ... i'm just using js to set the background 22:48:23 [timeless] Robin: ... do you have demos? 22:48:30 [timeless] TabA: it's an open problem 22:48:37 [timeless] ... it shows the evolution of an idea 22:48:41 [darobin] s/do you have demos/do you have demos where it's animated/ 22:48:42 [timeless] ... from someone identifying a problem 22:48:45 [timeless] ... to implementations 22:48:56 [timeless] ... to implementations(?) on the style list 22:49:10 [timeless] ... to proposals(?) 22:49:32 [timeless] ... so if you have a problem 22:49:36 [timeless] ... you tell us about i 22:49:40 [timeless] s/i/it/ 22:49:50 [timeless] ... we kick it around 22:49:54 [mcf] mcf has joined #w3cdev 22:49:56 [timeless] ... or we decide to put it off until later 22:50:02 [timeless] ... browser developers are not always page authors 22:50:14 [timeless] fantasai: coming to a browser near you 22:50:18 [timeless] ... next up, Simon Fraser 22:50:26 [timeless] ... giving a demo of transforms and transitions 22:50:36 [timeless] ... these are new drafts that are coming up 22:50:48 [timeless] ... Simon works for Apple on WebKit, and used to work for Mozilla 22:50:55 [timeless] Simon= smfr 22:51:00 [timeless] smfr: so... 22:51:05 [TabAtkins] darobin, I failed! 22:51:07 [timeless] ... with transitions and transforms 22:51:23 [timeless] ... this is some content that we put together using assets from a band called willco (?) 22:51:32 [timeless] ... as I hover over things on the left 22:51:40 [timeless] ... you see transitions 22:51:47 [timeless] [ shows the basic bits ] 22:51:56 [timeless] ... we've got a standard color 22:52:05 [timeless] ... and the nice red color? 22:52:10 [timeless] ... using completion in textmate 22:52:17 [timeless] ... let's put a transition right here 22:52:22 [timeless] ... over one second 22:52:29 [timeless] ... so now when i go back to the page 22:52:40 [timeless] ... you can see the transition 22:52:52 [timeless] ... transitions take a comma separated list 22:53:12 [timeless] ... another thing that dbaron mentioned was transforms 22:53:19 [timeless] ... so let's put a hover on the transform 22:53:30 [timeless] ... a precanned rotate of say X degrees 22:53:39 [timeless] ... and now let's make this nice and smooth 22:53:45 [timeless] ... so let's say ... .5 22:53:48 [timeless] s/5/5s/ 22:53:56 [timeless] ... you can use milliseconds too 22:54:05 [timeless] ... so let's go back to our original page 22:54:08 [timeless] ... but this slideshow 22:54:18 [timeless] ... you can have crossfades 22:54:24 [timeless] ... we can use a translate 22:54:29 [timeless] ... and vertical scales 22:54:37 [timeless] ... that's a keyframe animation 22:54:42 [timeless] ... it's a little bit more complex 22:54:51 [timeless] ... they're not as advanced as transitions 22:54:56 [timeless] ... we can do a spin 22:55:04 [timeless] ... we're also proposing a 3d transform 22:55:18 [timeless] ... we're rotating around the vertical axis with some perspective 22:55:29 [timeless] IanJ: so are all the images there? 22:55:42 [timeless] smfr: yes, it's all there, it's just css classes tweaking 22:55:52 [timeless] Dan (HP): 22:56:03 [timeless] ... we've got all these transforms that we can use on a page 22:56:06 [timeless] ... we've also got canvas 22:56:12 [timeless] ... why should you use one or the other 22:56:27 [timeless] smfr: with canvas, you draw and then don't know what's there 22:56:41 [AtticusLake] AtticusLake has joined #w3cdev 22:56:42 [timeless] ... with transforms, you aren't making the content more opaque 22:56:45 [timeless] ... you still have links 22:56:56 [timeless] ... we've also got examples of applying 3d to slices of a page 22:57:07 [timeless] ... and revealing things from the page 22:57:22 [timeless] ... and we can hover over here and see hit testing still works 22:57:36 [AtticusLake] AtticusLake has left #w3cdev 22:57:49 [timeless] ... all done with css transforms 22:57:53 [timeless] ... we've done this inside apple 22:58:00 [timeless] ... this demo was done by charles ying (?) 22:58:03 [timeless] ... outside apple 22:58:12 [timeless] ... it uses flikr to fetch images as a wall 22:58:21 [timeless] ... hold keys down, move backwards and forwards 22:58:43 [timeless] [ wiring glitch, we dimmed another room ] 22:58:52 [timeless] ... thanks 22:58:56 [timeless] fantasai: so that's our three speakers 22:59:01 [timeless] ... we also have people from Microsoft here 22:59:10 [timeless] ... we've got a bunch of members from the CSS WG 22:59:18 [timeless] Bernard Ling (?): 22:59:26 [timeless] ... when does it appear in ie6? [just kidding] 22:59:41 [timeless] dbaron: Mozilla has 2d transforms in FF3.5 22:59:49 [timeless] ... transitions will be in FF3.7 23:00:03 [timeless] smfr: 2d transforms should be identical in behavior 23:00:19 [timeless] dbaron: 3d would be after 3.7 23:00:28 [dbaron] if we did it 23:00:29 [timeless] IanJ: how to tell css wg your idea's 23:00:35 [timeless] fantasai: [email protected] 23:00:37 [ademoissac] ademoissac has joined #w3cdev 23:00:52 [timeless] xx-? : 23:01:04 [timeless] ... is there any work being done on opacity across browsers 23:01:13 [timeless] ms-1: 23:01:16 [dbaron] 23:01:21 [timeless] ... currently it's available through filters 23:01:31 [timeless] ... it's a proprietary property 23:01:39 [timeless] ... we can sit down and i can try to help you with it 23:01:43 [timeless] ... moving forward in the future 23:01:50 [timeless] ... we can see about looking to expand 23:02:04 [timeless] fantasai: css3 color is a CR 23:02:09 [timeless] ... there's an opacity property 23:02:18 [timeless] ... I believe WebKit, Gecko and Opera 23:02:27 [timeless] IanJ: ok... 23:02:38 [timeless] RRSAgent: make minutes 23:02:38 [RRSAgent] I have made the request to generate timeless 23:03:13 [fantasai] Thanks everyone, your talks were amazing! :) 23:03:23 [dom] very impressive, indeed 23:03:35 [timeless] Next Speaker: Philippe Le Hégaret (W3C) 23:03:46 [timeless] IanJ: Philippe has worked on building a test suite 23:03:56 [timeless] Philippe: let's talk about something embarassing 23:03:58 [timeless] ... testing at w3c 23:04:08 [timeless] ... i'm responsible for [long list of wg's] 23:04:15 [AtticusLake] AtticusLake has joined #w3cdev 23:04:19 [timeless] ... one of my plans is how do we test all that 23:04:28 [timeless] ... talking about testing at w3c 23:04:30 [AtticusLake] AtticusLake has left #w3cdev 23:04:34 [smfr] smfr has joined #w3cdev 23:04:39 [timeless] ... we already have plenty of test suites at w3c 23:04:51 [timeless] ... css1, ... dom event 2, css2.1, ... 23:05:04 [timeless] ... why do we have those test suites? 23:05:07 [dom] s/dom/DOM/ 23:05:15 [timeless] ... one of those reasons is that in 1999 in the DOM working group 23:05:30 [timeless] ... we came up with this phase called Candidate Recommendation (CR) 23:05:44 [timeless] ... "we think we're done", but now we want to prove that we're actually done 23:05:50 [timeless] ... this came out of the DOM WG 23:06:02 [timeless] ... to come out of the phase 23:06:14 [timeless] ... the WG _should_ come out with two implementations of each feature 23:06:20 [karl] we need to brush up the matrix 23:06:26 [timeless] ... it's a negotiation with the [TBL role] 23:06:49 [timeless] ... what working groups tend to do 23:07:00 [timeless] ... is just demonstrate that each feature has been implemented 23:07:07 [timeless] ... do they actually do this? 23:07:13 [timeless] ... no, they don't have enough resources 23:07:20 [timeless] ... and no one really wants to write tests 23:07:29 [timeless] ... but do we get interoperability on the web? 23:07:34 [timeless] ... and i would argue no 23:07:37 [dom] karl, I actually updated it (somewhat) a few weeks ago 23:07:48 [timeless] ... how can we make the web a better place? 23:07:51 [timeless] ... w3c has limited resources 23:07:53 [timeless] ... yes we have microsoft 23:07:58 [timeless] ... but we have limited amount of time 23:08:07 [timeless] ... limited amount of budget, for product teams as well 23:08:08 [plinss] plinss has joined #w3cdev 23:08:14 [timeless] ... so what we really want is the community to help us 23:08:18 [timeless] ... tell us what works 23:08:22 [timeless] ... you run into problems all the time 23:08:25 [timeless] ... tell us about it 23:08:31 [timeless] ... can you please submit a test about it? 23:08:41 [timeless] ... what i'd like to see is the community help us 23:08:46 [timeless] ... let's make it a bit harder 23:09:06 [timeless] [ slide: svg, mathml, video, ...] 23:09:44 [timeless] ... I can manipulate the DOM tree 23:09:46 [karl] dom, excellent 23:09:54 [timeless] s/dom, excellent// 23:10:14 [timeless] ... if i want to play a video, i just click a button which is just a thing with css style on it 23:10:19 [timeless] ... and it will work 23:10:24 [timeless] ... but who is going to test all this? 23:10:44 [timeless] ... while we have produced some test suites 23:11:00 [timeless] ... we haven't produced combinations of specs 23:11:04 [timeless] ... css+svg+ ... 23:11:11 [timeless] ... so how do we test that? 23:11:19 [timeless] ... first we need to test the parsers 23:11:47 [timeless] ... we need to guarantee that the document you're righting will generate one single DOM 23:11:52 [fantasai] s/righting/writing/ 23:12:04 [timeless] ... how do we test dynamic scripting 23:12:09 [timeless] ... if i want to test a css animation 23:12:17 [timeless] ... how do i test it if it's 3 seconds 23:12:26 [timeless] ... i don't want to test just the first frame and the last frame 23:12:35 [timeless] ... we need to understand that there are limitations 23:12:42 [timeless] ... it's impossible to test everything 23:12:48 [timeless] ... and we have to acknowledge that 23:12:53 [timeless] ... but at the same time 23:12:58 [timeless] ... we need to do something 23:13:02 [timeless] ... the most common thing 23:13:09 [timeless] ... is a test that requires a human 23:13:20 [timeless] ... a "self describing test" 23:13:31 [timeless] ... [ pass with green, fail with red ] 23:13:38 [timeless] ... we can also test plain text output 23:13:42 [timeless] ... we can compare screen shots 23:13:48 [timeless] ... if you have for example in svg 23:13:54 [timeless] ... we know exactly what the output should be 23:14:03 [timeless] ... if you have a rectangle, we know what it should be 23:14:12 [timeless] ... we can take a screen capture 23:14:14 [mcf] mcf has joined #w3cdev 23:14:18 [timeless] ... with fonts, it's different 23:14:22 [timeless] ... what dbaron did 23:14:31 [timeless] ... is that instead of trying to write tests 23:14:43 [timeless] ... is how about we write two pages that should have the same rendering 23:14:48 [timeless] ... using different features 23:14:52 [timeless] ... that's called reftests 23:15:06 [timeless] ... the advantage is that it can be cross platform/browser 23:15:10 [fantasai] s/write tests/ write tests to match a static image/ 23:15:17 [timeless] ... with webkit, you can 23:15:29 [fantasai] ScribeNick: fantasai 23:15:30 [timeless] ... do a dump of a dom tree 23:15:41 [IanJ] rrsagent, make minutes 23:15:41 [RRSAgent] I have made the request to generate IanJ 23:15:43 [fantasai] ... and there are probably other ways that I'm not aware of. 23:15:44 [dom] -> Webkit DumpRenderTree, an example of layout tree comparison 23:15:54 [fantasai] ... one of the things I've been tryin got push inside the consortium is to have a browser testing framework 23:16:08 [fantasai] ... that other groups can use. They can choose a method to test their specification. 23:16:14 [fantasai] ... we want to make this as automatic as possible. 23:16:18 [fantasai] ... we need to produce a lot of tests. 23:16:32 [fantasai] ... e.g. Microsoft submitted 7000 tests that were all self-describing tests 23:16:36 [fantasai] ... that is not scalable 23:16:43 [fantasai] ... it takes a long time to go through those tests 23:16:46 [cappert] cappert has joined #w3cdev 23:17:00 [fantasai] ... because of our limited resources, we need to produce a mechanism to help our working groups 23:17:12 [fantasai] ... if they are reviewing tests, they are not writing specs 23:17:20 [fantasai] ... they should just be able to focus on controversial tests 23:17:33 [fantasai] ... if others can submit a test, then we can look if there's a problem 23:17:43 [fantasai] ... we can also see if its a bug in the browser, and they have to change their impl 23:17:59 [fantasai] ... We also have to be careful here, because if the tests are wrong we get interop on the wrong behavior! 23:18:26 [fantasai] ... We need to have testing for all these technologys, not just one of them, or each separately 23:18:30 [fantasai] ... but all of them togethers 23:18:44 [fantasai] ... with HTML5 normatively integrating with SVG and MathML, we need to test them together, not just each on the side. 23:18:47 [midnaz] midnaz has joined #w3cdev 23:18:51 [fantasai] ... We need to be able to test HTML inside SVG 23:19:00 [smfr] and SVG inside HTML inside SVG 23:19:01 [fantasai] ... As I said there are multiple ways to test a browser, and we should allow more than one 23:19:19 [fantasai] ... The browser implementors are not going to rewrite all their tests for us 23:19:35 [fantasai] ... but agree on some common formats so that we can all share tests 23:19:51 [fantasai] ... We also need to have a life after the Recommendation stage 23:20:02 [fantasai] ... the specs still exist after Rec, and we need to continue testing them 23:20:13 [fantasai] ... I don't want W3C to run that test suite. We don't have the resources. 23:20:24 [fantasai] ... We can't buy 100 servers and run tests on every possible version of everybrowser 23:20:37 [timeless] s/everybrowser/every browser/ 23:20:38 [LeslieB] LeslieB has joined #w3cdev 23:20:41 [fantasai] ... So we want to allow others to run the tests. To run screen shots on their own computer 23:20:57 [fantasai] ... THere are some difficulties. E.g. I don't know how to take a screen shot by script on all the platforms 23:21:01 [timeless] s/THere/There/ 23:21:05 [fantasai] ... What happens then? 23:21:11 [fantasai] ... We can make the test results useful to you. 23:21:24 [fantasai] ... Show reports of what works, and what doesnt. Let's make the test suites useful for the community as well. 23:21:32 [timeless] s/doesnt/doesn't/ 23:21:33 [fantasai] ... And we should improve our validators at W3C. 23:21:43 [fantasai] ... Maybe make it use test results. 23:21:58 [fantasai] ... e.g. it notices You are using this feature, note that it doesn't work on FF3.6! 23:22:13 [fantasai] ... We're not a lone, there are others who are trying to do the same thing. 23:22:21 [timeless] s/a lone/alone/ 23:22:39 [fantasai] ... test swarm for example is an effort from jQuery author, because he was running into the same problem 23:22:54 [fantasai] ... he cannot run every OS /browser combination himself 23:23:12 [fantasai] ... browserscope is interesting too. It allows you to compare screenshots across platforms 23:23:36 [fantasai] ... It uses a web server locally to determine when to take the screen shot 23:23:49 [fantasai] ... We need to produce these tools incrementally 23:23:58 [fantasai] ... and try to get them to work on all borwsers 23:24:07 [timeless] s/borwsers/browsers/ 23:24:10 [fantasai] ... I think the message that I like you to get out of this is that we need help. 23:24:29 [fantasai] ... I can get some help from browser vendors, but ultimately we need help from the community because you are the ones suffer every day. 23:24:41 [fantasai] ... and until you tell us what is wrong, we are not able to help you 23:24:43 [timeless] s/suffer/suffering/ 23:24:44 [caribou] For the record, Help W3C Validators program at 23:24:55 [fantasai] Ian: Questions for Philippe? 23:24:55 [darobin] interesting article on mobile testing: 23:25:04 [Liam] Liam has joined #w3cdev 23:25:04 [fantasai] Dianna Adair: 23:25:22 [timeless] RRSAgent: make minutes 23:25:22 [RRSAgent] I have made the request to generate timeless 23:25:22 [fantasai] Dianna Adair: Could there be any hooks in the syntax so that you can pass arguments to the syntax automaticall, through some sort of test generation program 23:25:31 [fantasai] Dianna Adair: Are there valid simulators for the major browsers? 23:25:43 [timeless] s/automaticall/automatically/ 23:25:51 [fantasai] Dianna: So that you can push the tests agains the simulated suite of browsers 23:25:58 [timeless] s/agains/against/ 23:26:11 [fantasai] plh: FOr the first question, yes, because we are starting from scratch 23:26:19 [timeless] s/FOr/For/ 23:26:21 [fantasai] plh: For the other we can get screenshots of the major browsers 23:26:26 [henriquev] henriquev has joined #w3cdev 23:26:41 [fantasai] plh: browsertest.org was done by an engineer in Switzerland 23:26:54 [timbl] timbl has joined #w3cdev 23:26:56 [IanJ] rrsagent, make minutes 23:26:56 [RRSAgent] I have made the request to generate IanJ 23:26:59 [fantasai] plh: At the beginning of Sept. a few folks including me and a few Moz developers got together and started writing code to do that 23:27:04 [dom] -> BrowserTests.org 23:27:15 [fantasai] plh: We made a prototype that works on the 3 major platforms 23:27:23 [fantasai] plh: He is improving his browser test framework 23:27:34 [fantasai] plh: At W3C we have a way to do human testing, I showed a demo of the mobile web browser 23:27:40 [fantasai] plh: It requires a human to click pass fail pass fail 23:27:49 [timeless] s/browsertest.org/browsertests.org/ 23:27:54 [fantasai] Dianna: One way I've seen that works, is to set up a location and have some sort of "bugfest" 23:28:22 [fantasai] Dianna: You have people all over the world trying to test things simultaneously 23:28:24 [fantasai] plh: ... 23:28:36 [fantasai] plh: My goal is not to point fingers at browsers and tell them they're doing bad stuff 23:28:39 [fantasai] plh: I want to serve the community 23:28:56 [fantasai] IanJ: Have you set up a mailing list or public place for people to come help out? 23:28:59 [fantasai] plh: Not yet 23:29:05 [fantasai] IanJ: ACTION Phillipe? :) 23:29:13 [fantasai] plh: We need to create a group within W3C itself 23:29:24 [fantasai] plh: I know for example Mozilla and Microsoft are interested in helping 23:29:37 [fantasai] plh: We need to organize and provide a venue for the community to come together 23:29:53 [fantasai] Dianna: I propose that Universities are a great source intelligence and creativity and might be able to help 23:30:06 [fantasai] Chris (MSFT): There is a test suite alias in the HTMLWG 23:30:26 [fantasai] plh: Yes, we also want cross-tech testing 23:30:37 [fantasai] Kevin Marks: Do you know the test suite for .. ? 23:30:42 [Arron] s/Chris/Kris 23:30:56 [fantasai] plh: I only mentioned testing framework. There are plenty of efforts out there 23:31:01 [r12a-nb] s/.. ?/called doctype at Google/ 23:31:05 [fantasai] plh: One thing I did in August was to collect some of that 23:31:15 [fantasai] plh: We are not alone, there are a lot of others trying to solve the same problem 23:31:25 [fantasai] IanJ: Ok, we have 4 more speakers after the break 23:31:32 [fantasai] IanJ: I'll hand over to Tim for now 23:31:46 [fantasai] TimBL: Thanks for coming 23:31:56 [karl] there was 23:32:02 [fantasai] Tim: It's importat that everyone designing specs is in contact with lots and lots of people using their specs 23:32:10 [karl] and now 23:32:13 [fantasai] Tim: Good to have feedback, and feedback on how to get feedback. 23:32:33 [IanJ] rrsagent, make minutes 23:32:33 [RRSAgent] I have made the request to generate IanJ 23:42:00 [LeslieB] LeslieB has joined #w3cdev 23:43:49 [ademoissac] ademoissac has joined #w3cdev 23:47:36 [caribou] RRSagent, this meeting spans midnight 23:52:49 [nathany] nathany has joined #w3cdev 23:56:26 [LeslieB] LeslieB has joined #w3cdev 00:02:45 [smfr] smfr has joined #w3cdev 00:03:59 [Arron] Arron has joined #w3cdev 00:04:08 [timeless] ScribeNick: timeless 00:04:11 [timeless] Scribe: timeless 00:04:16 [marie] marie has joined #w3cdev 00:04:17 [timeless] IanJ: Next speaker, Brendan Eich, representing ECMA 00:04:27 [timeless] ... and ECMA harmony 00:04:37 [timeless] brendan: I'm here from Mozilla 00:04:46 [timeless] ... I'm here to talk about ECMA Harmony 00:04:57 [timeless] ... which is a ... which we reached last summer 00:05:06 [timeless] ... before that, we had problems 00:05:14 [timeless] ... the figure of XX ... 00:05:31 [timeless] ... identified as gandalf ... 00:06:00 [timeless] ... there were people like Doug and sometimes me 00:06:05 [timeless] ... advocating for JS Hobbits 00:06:13 [timeless] ... small enough and based on principals from Scheme itself 00:06:14 [darobin] darobin has joined #w3cdev 00:06:26 [timeless] ... it had virtues which were only discovered years later on the web 00:06:28 [pjsg] pjsg has joined #w3cdev 00:06:33 [timeless] ... it was the dumb kid brother to Java 00:06:43 [timeless] ... JavaScript was supposed to be the duct tape language 00:06:57 [timeless] ... you were supposed to write it around the real language .. Java 00:07:07 [timeless] ... I think people will agree that Java is basically dead on the client 00:07:13 [timeless] ... there were problems with Java 00:07:23 [timeless] ... the main issues was that JavaScript was a dynamic language 00:07:49 [timeless] ... is a dynamic language, and will continue to be a dynamic language 00:07:51 [timeless] ... the fear with ECMAScript 4 (?) 00:07:56 [timeless] ... was that it would become a static language 00:08:04 [timeless] ... the fear, as with Visual Basic 7 00:08:13 [timeless] ... was that you take a dynamic language 00:08:19 [timeless] ... and you convert it into a large proof system 00:08:26 [mib_bmpvrc] mib_bmpvrc has joined #w3cdev 00:08:30 [timeless] ... and that's not how languages are built 00:08:47 [timeless] ... if ES4 would have been that, i'd be that guy with Gandalf 00:08:57 [timeless] ... there was a point in 2006 where the committee seemed united 00:09:11 [timeless] ... the MS representative was going to put some old version into JScript.net 00:09:16 [timeless] ... and we were all united 00:09:26 [timeless] [ Slide: The Fellowship of JS ] 00:09:36 [timeless] ... the fellowship was broken 00:09:41 [timeless] [ Slide: Conflict (2007) ] 00:09:53 [timeless] ... some of it was based on the real prospect that i was somehow working toward 00:10:06 [timeless] ... of pulling the drive language of flash, actionscript, into the web 00:10:22 [timeless] ... and again, microsoft was working on pulling a version into JScript.net 00:10:34 [timeless] ... based on waldemar horwat 00:10:41 [timeless] ... ECMA requires consensus 00:10:45 [timeless] ... and we didn't have that 00:10:49 [timeless] ... at the time this happened in march 00:11:01 [tantek] tantek has joined #w3cdev 00:11:04 [timeless] ... it was clear to me that this wasn't going to work, someone was going to win, and someone was going to lose 00:11:10 [timeless] ... but this was going to be ok 00:11:27 [timeless] ... because it would involve improvements to the language for the web (?) 00:11:40 [timeless] ... ecma was stagnating 00:11:48 [timeless] ... 4th edition was mothballed in 2003 00:12:12 [timeless] ... netscape was dying - partially because of its own failings, and partly because of microsoft (see antitrust) 00:12:18 [timeless] ... msie was sleeping 00:12:40 [timeless] ... in 200x (?) ... there was a chance of things improving 00:12:46 [timeless] ... in April 2007, there were things like 00:12:55 [timeless] ... Microsoft's Silverlight offering 00:13:17 [timeless] ... a JScript Chess demo was converted into C# 00:13:22 [timeless] ... it was 100s of times faster 00:13:36 [timeless] [ Slide: The Two Towers ] 00:13:39 [timeless] * ES4 00:13:57 [timeless] * Waldemar Horwat's work at Netscape, 1999-2003 00:14:03 [timeless] * JScript.NET, 2000 00:14:12 [timeless] * ActionScript 3, 2005-2006 00:14:14 [timeless] ---- 00:14:19 [timeless] * ES3.1 00:14:23 [timeless] * Dougt's recommendations 00:14:34 [timeless] s/Dougt/Doug/ 00:14:43 [timeless] * Document JScript deviations 00:14:49 [timeless] brendan: ... 00:15:01 [timeless] ... there were a lot of bugs in IE's implementation of JavaScript 00:15:17 [timeless] ... and MS was heavily involved in the standard writing for ECMAscript 2/3 00:15:27 [timeless] ... and there were serious bugs in the MS implementation 00:15:41 [timeless] * "No new syntax" 00:15:48 [timeless] brendan: ... 00:16:07 [timeless] ... if you never add things, you can't break things 00:16:15 [KevinMarks] KevinMarks has joined #w3cdev 00:16:25 [timeless] ... if you aren't careful, and you add global objects/methods 00:16:28 [timeless] ... you can break the web 00:16:30 [timeless] 00:16:42 [timeless] ... no new syntax doesn't save you 00:16:50 [timeless] ... time was passing, we were trying to get es4 out 00:16:58 [timeless] [ Slide: Synthesis (2008) ] 00:17:02 [timeless] brendan: .... 00:17:10 [dbaron] dbaron has joined #w3cdev 00:17:18 [timeless] ... Allen proposed meta object API 00:17:25 [timeless] ... on the ES3.1 side 00:17:47 [timeless] ... Lars Hansen on the ES4 side, "Packages must go" 00:17:51 [timeless] ... in April 2008 00:17:53 [timeless] ... Namespaces must go too (June-July) 00:17:58 [timeless] ... unfortunately, we lost Adobe 00:18:20 [timeless] ... because they felt they lost the bits they had derived from the standard 00:18:28 [timeless] ... but that's a risk when working on standards 00:18:36 [timeless] ... when we reached harmony in July in Oslo 00:18:41 [jun] jun has joined #w3cdev 00:18:49 [timeless] ... the language again is inspired by Scheme with influences from Self 00:18:56 [timeless] ... one of the foundations of Scheme is lexical scope 00:19:04 [timeless] ... javascript has some broken bits of scope 00:19:18 [timeless] ... Doug's teaching and attitude 00:19:30 [timeless] ... in es4 we're looking toward a strict mode 00:19:42 [timeless] ... we have a hope for "use strict mode" for es5 00:19:50 [timeless] ... similar to perl5 00:20:04 [timeless] ... we're trying to avoid "use stricter" for future versions 00:20:13 [timeless] ... that's my quick recap of how we reached harmony 00:20:22 [timeless] .... ES3.1 was renamed ES5 (March 2009) 00:20:36 [timeless] ... We decided not to trouble ECMA with fractional standard version numbering 00:20:39 [timeless] ... ES4 died 00:20:59 [timeless] ... we're not sure if Harmony will really be 5 00:21:09 [timeless] ... we might need to do some quick versions for standardization reasons 00:21:17 [timeless] ... the committee is not the gatekeeper 00:21:25 [timeless] ... a chokepoint for all innovation 00:21:38 [timeless] [ Slide: Namespaces ] 00:21:43 [timeless] brendan: ... 00:21:52 [timeless] ... who here knows about Namespaces in Flash? 00:21:57 [timeless] [ hands raised ] 00:22:07 [timeless] ... there's ECMAScript For XML (e4x) 00:22:15 [timeless] ... it has a lot of problems as a spec IMO 00:22:28 [timeless] ... it's a spec whose pseudo code was extracted from java code 00:22:43 [timeless] ... so you have bugs from the code, translation, etc. 00:22:52 [timeless] ... it almost broke the object model 00:22:58 [timeless] ... it had integrated query 00:23:05 [timeless] ... it had namespace objects 00:23:25 [timeless] ... you had to use ::'s to qualify stuff 00:23:37 [timeless] ... sometimes people complain about namespaces in XML documents 00:23:48 [timeless] ... es4 was much worse 00:24:00 [timeless] ... it was very powerful 00:24:20 [timeless] ... because you could use lexical scope to change how code behaves 00:24:24 [timeless] [ Slide: Packages ] 00:24:30 [timeless] ... packages are built on namespaces 00:24:42 [timeless] ... even today in actionscript, there are some funny things about them 00:24:58 [timeless] ... there's a temptation to think that using long chains of dotted things 00:25:17 [timeless] ... there's a temptation to think that the dotted things can win 00:25:25 [timeless] ... but because the language is dynamic 00:25:43 [timeless] ... the winner can be the normal object with similar property paths 00:25:53 [timeless] ... I think this problem still exists in actionscript 00:26:01 [timeless] ... and then there are problems with <script> tags 00:26:07 [timeless] [Slide: NAmespace Problems] 00:26:22 [timeless] ... here's a problem with namespaces 00:26:36 [timeless] s/NAmespace/Namespace/ 00:27:02 [timeless] ... ambiguities can happen when scripts come along and defined a namespace later 00:27:14 [timeless] ... I'm explaining why Namespaces died in ES4 00:27:37 [TabAtkins] TabAtkins has joined #w3cdev 00:27:49 [timeless] ... Question: Why am I talking about why Namespaces when you already said it's dead 00:27:54 [timeless] ... Answer: it died because it had technical problems 00:28:00 [timeless] ... that we couldn't figure out how to solve 00:28:16 [timeless] ... the alternative was to require whole-porgram analysis 00:28:25 [timeless] [ Slide: ES5 Meta-programming ] 00:28:33 [timeless] ... the 3.1 contribution 00:28:45 [timeless] ... Create properties with getters and setters 00:28:57 [timeless] ... we have this in mozilla under a different name 00:29:01 [timeless] ... it's finally standardized 00:29:22 [timeless] ... instead of __defineGetter__/__defineSetter__/__lookupGetter__/__lookupSetter__ 00:29:39 [timeless] ... We implemented this about 10 years ago in Mozilla 00:29:49 [timeless] ... but MS/Opera didn't implement it 00:29:56 [timeless] ... when live maps launched 00:30:06 [timeless] ... it treated the dom as the ie dom 00:30:13 [dom] s/dom/DOM/ 00:30:14 [dom] s/dom/DOM/ 00:30:25 [timeless] ... and then it looked to see if the host wasn't IE 00:30:39 [timeless] ... it decided it wasn't IE, then it must be Gecko 00:30:51 [timeless] ... so it used __defineSetter__/__defineGetter__ 00:30:55 [timeless] ... to support it 00:31:05 [timeless] ... this caused a firedrill in Opera/Safari 00:31:14 [timeless] ... to implement this missing feature (within a week!) 00:31:17 [timeless] ... to support live maps 00:31:25 [timeless] [ Slide: ES5 Meta-programming ] 00:32:03 [timeless] ... you can define things that don't appear in for-in loops 00:32:15 [timeless] ... because the ajax community learned about how pollution breaks iteration 00:32:25 [timeless] ... with this facility, you can not break those things 00:32:35 [timeless] ... with this, lack of syntactic salt 00:32:38 [timeless] ... you can create objects 00:32:45 [timeless] [ Slide: Hardening Objects ] 00:32:55 [timeless] ... you can make an object that delegates to another object 00:33:01 [timeless] ... without using a constructor pattern 00:33:13 [timeless] ... you can prevent extensions, and prevent reconfig 00:33:17 [timeless] ... and prevent all writing 00:33:51 [timeless] ... this enabled a lot of what we had in ES4 classes 00:33:51 [timeless] [ Slide: Harmony Requirements ] 00:34:01 [timeless] ... as we worked on harmony, we realized we should state our requirements 00:34:05 [timeless] ... in some way 00:34:15 [timeless] ... we don't want to do anything that requires innovation in committee 00:34:20 [timeless] ... or abstract jumps 00:34:29 [timeless] ... we want to keep the language pleasant for casual developers 00:34:36 [timeless] ... so you could start small and grow 00:34:41 [timeless] ... javascript is in some ways a virus 00:34:48 [timeless] ... it has grown into an application programming language 00:35:01 [timeless] ... we want to keep these features 00:35:06 [timeless] [ Slide: Harmony Goals ] 00:35:17 [timeless] ... * Be a better language for writing 00:35:29 [timeless] ... [] complex applications 00:35:38 [timeless] ... [] libraries (possibly including the DOM!) 00:35:43 [timeless] ... [] code generators 00:35:53 [timeless] ... * Switch to a testable specification 00:36:03 [timeless] ... * Improve interoperation, adopt de facto standards 00:36:15 [timeless] ... * Keep versioning as simple and linear as possible 00:36:21 [timeless] ... * A statically verifiable ... 00:36:29 [timeless] [ Slide: Harmony Means ] 00:36:35 [timeless] ... * Minimize additional semantic state 00:36:44 [timeless] ... * Provide syntactic conveniences for: 00:36:49 [timeless] ... [] good abstraction patterns 00:36:56 [timeless] ... [] hide integrity patterns 00:37:09 [timeless] ... [] define desugaring to kernel semantics 00:37:25 [timeless] ... * Remove (via opt-in versioning or pragmas) confusing or troublesome constructs 00:37:27 [brutzman] brutzman has joined #w3cdev 00:37:32 [timeless] ... * Build Harmony on ES5 strict mode 00:37:49 [timeless] ... * Support virtualizability for host objects 00:37:51 [timeless] [ Slide: Harmony Proposals ] 00:38:03 [timeless] brendan: ... 00:38:11 [timeless] ... prognosis, it should be sorted out in 2-3 years 00:38:21 [timeless] ... things that don't make it go toward es6 00:38:33 [timeless] ... you can self host your way to a stronger language 00:38:59 [timeless] ... ECMA standards group TC39 is still strong 00:39:05 [timeless] IanJ: thank you brendan 00:39:14 [timeless] ... that was our transition talk 00:39:20 [timeless] ... into Internet Ecosystem 00:39:24 [timeless] ... next speaker ... 00:39:29 [timeless] Mark Davis: ... 00:39:48 [timeless] ... I'm talking about international domain names (IDNA ?) 00:39:52 [timeless] ... IDNA 2003 00:40:06 [timeless] ... There was a system developed in 2003 that allowed people to have international characters in domain names 00:40:13 [timeless] ... I don't know if people saw the news this week 00:40:26 [timeless] ... What happened this week is that the top level domains can have non ascii characters 00:40:30 [Ray] Ray has joined #w3cdev 00:40:35 [timeless] [ Slide: IDNA 2003 ] 00:40:40 [tlr] tlr has joined #w3cdev 00:41:01 [timeless] ... you can't do certain things... 00:41:12 [timeless] ... IDNA 2003 is tied to Unicode 2.3 (?) 00:41:21 [timeless] ... If you look at the uppercase characters 00:41:30 [timeless] ... they're mapped to lowercase characters before they reach dns 00:41:48 [timeless] ... O with two dots is converted to a lowercase version before it gets sent out 00:42:01 [timeless] ... it gets converted with an ascii binding with something called punicode 00:42:08 [timeless] s/puni/puny/ 00:42:36 [timeless] [ Slide: IDNA 2008 ] 00:42:55 [timeless] ... about 3 years ago there was an effort to revise it 00:43:05 [timeless] ... it updates to the latest version of unicode 00:43:13 [timeless] ... and makes the system unicode version independent 00:43:19 [timeless] ... but it invalidates certain urls 00:43:45 [timeless] ... uppercase letters are invalid 00:44:04 [timeless] ... it removes a class of symbols and punctuation characters 00:44:23 [timeless] ... and it makes certain classes of characters not equivalent to other expansions 00:44:37 [timeless] ... IDNA 2008 is not yet approved 00:44:44 [timeless] [ Slide: ISSUES ] 00:44:52 [timeless] ... this causes problems for browser vendors 00:45:10 [timeless] ... which need to retain compatibility with pages using IDNA2003 00:45:16 [timeless] ... need to match user expectations 00:45:23 [timeless] ... it causes problems for search engine vendors 00:45:29 [timeless] ... need to match old and new browsers 00:45:41 [timeless] ... need to match old and new expectations 00:45:51 [timeless] [ Slide: UTS46 - Compatibility "Bridge" ] 00:46:07 [timeless] ... It enables everything that was allowed in 2003 with the same behavior 00:46:19 [timeless] ... it allows the new stuff allowed from 2008 00:46:29 [timeless] ... it has different things for lookup/display 00:46:47 [timeless] display: ß, lookup: ss 00:46:55 [timeless] [ Slide: Compatibility for Transition ] 00:47:01 [timeless] ... aimed at client SW, not registries 00:47:08 [timeless] ... alows client SW to handle both 2003 and 2008 00:47:19 [timeless] ... consensus from browsers and search 00:47:31 [timeless] ... I'll send the slides to IanJ 00:47:35 [timeless] IanJ: thank you 00:47:38 [timeless] [applause] 00:47:50 [timeless] IanJ: thank you for your good work at unicode 00:47:56 [timeless] ... you mentioned hot controversies 00:48:04 [timeless] Mark Davis: ... 00:48:13 [timeless] ... there are controversies 00:48:22 [timeless] ... I'll introduce Eric Vanderpool (?) 00:48:31 [timeless] ... Michelle Suighard 00:48:39 [timeless] ... one of the coauthors of the IRI spec 00:48:51 [timeless] ... a key issue is the compat difference between the 2003 and 2008 spec 00:48:59 [timeless] ... we've been trying to walk a delicate line 00:49:08 [timeless] ... while not trying to stomp on the IETF toes 00:49:12 [timeless] ... because it's their spec 00:49:25 [timeless] Diane: 00:49:28 [timeless] Diane: ... 00:49:50 [timeless] ... If I own one, name sparkasse-gießen.de 00:50:05 [timeless] ... can you squat on sparkasse-giessen.de 00:50:41 [timeless] Mark Davis: ... 00:50:47 [timeless] ... you can't reserve 'sparkasse-gießen.de' 00:51:00 [timeless] ... it's like case 00:51:28 [timeless] Doug: ... 00:52:07 [timeless] ... about the heart case (I❤NY.blogspot.com) 00:52:30 [timeless] Mark Davis: That will resolve to an all lowercase version 00:52:46 [timeless] ... If you were using a browser that implemented IDNA 2008 strictly 00:52:51 [timeless] ... it will fail 00:53:07 [timeless] questioner: ... 00:53:11 [timeless] ... so the issue about uppercase 00:53:23 [timeless] ... does that mean that you can't type? 00:53:28 [timeless] Mark Davis: no... 00:53:36 [timeless] ... it's limited to IDN cases 00:53:40 [dom] (♥NY.blogspot.com/ resolves to in my FF3.5) 00:53:44 [timeless] Doug: what's the goal in not making it work? 00:53:55 [timeless] Mark Davis: that's part of the controversy 00:54:06 [smfr] Safari goes to♥ny.blogspot.com/ 00:54:09 [timeless] ... it was bad to show something that was other than what was being resolved to 00:54:44 [timeless] Robin: so what's with the heart? 00:54:59 [timeless] Mark Davis: well symbols and punctuation look to close to other things 00:55:07 [timeless] ... We dropped ~3000 such characters 00:55:34 [timeless] ... We dropped ~4000 characters relating to ÖBB.at 00:55:49 [timeless] ... For a lot of us, this didn't really solve the problem 00:56:01 [timeless] Doug: so it doesn't limit you to non mix ranged characters? 00:56:07 [timeless] Mark Davis: ... 00:56:19 [timeless] ... there are a number of guidelines in Unicode 36 00:56:35 [timeless] ... the problem is that there are a number of cases where it's needed and customary 00:56:41 [timeless] ... such as mixing japanese and latin 00:56:52 [timeless] ... and distinguishing legitimate from others 00:56:59 [timeless] ... and over time, browsers are solving the problems 00:57:17 [timeless] Ronan: ... 00:57:37 [timeless] ... regexps that pigeon parses urls ... will that break? 00:58:01 [timeless] Mark Davis: ... you need to replace the dot with the character class 00:58:16 [timeless] Ronan: are you familiar with the ipv6 handling in urlbars 00:58:24 [timeless] ... and how long it took before it was implemented? 00:58:37 [timeless] Mark Davis: most stuff users do should work 00:58:49 [timeless] ... but sure the servers will break and have probably been broken since 2003 00:59:10 [timeless] xx-3: ... 00:59:14 [timeless] ... this is at which level? 00:59:32 [timeless] Mark Davis: this is all handled at the browser level 00:59:38 [IanJ] s/xx-3/Tom 00:59:44 [timeless] ... punycode ... was adam costello's pun 00:59:48 [IanJ] s/Ronan/Rohit 01:00:05 [IanJ] rrsagent, make minutes 01:00:05 [RRSAgent] I have made the request to generate IanJ 01:00:05 [timeless] ... as far as dns is concerned, it's all xn--... 01:00:17 [timeless] ... the routing/dns system doesn't see this 01:00:30 [timeless] ... the browsers basically get to agree with this standard 01:00:32 [timeless] Richard: ... 01:00:32 [IanJ] [Richard Ishida] 01:00:37 [timeless] ... what if i have a heart 01:00:56 [timeless] ... what you've been describing is something which does a transformation of these strings 01:01:06 [timeless] ... if i understand this correctly 01:01:11 [timeless] ... you will continue to use this 01:01:34 [shepazu] shepazu has joined #w3cdev 01:01:34 [timeless] ... we've been working with all the browser representatives and search engine companies to handle this 01:01:44 [timeless] xx-4: from hp 01:01:56 [timeless] ... you've alluded to phishing attacks 01:02:02 [timeless] ... what's the status 01:02:15 [timeless] Mark Davis: ... 01:02:22 [timeless] ... everyone has some approach for dealing with this 01:02:25 [timeless] ... but it isn't consistent 01:02:38 [timeless] ... I think it's a bad idea to standardize too early 01:02:51 [timeless] ... there are a lot of holes before we come up with a cohesive solution 01:02:57 [timeless] IanJ: thank you 01:03:00 [timeless] [applause] 01:03:12 [timeless] IanJ: Next Leslie ... @ ISOC 01:03:29 [timeless] Leslie: ... talk/pres/discuss/... 01:03:39 [timeless] [ Slide: The Internet - Evolution and Opportunity ] 01:03:45 [timeless] [ Slide: The Internet Society ] 01:03:53 [timeless] ... Founded in 1992 01:03:59 [timeless] ... 100 members, 90 chapters ... 01:04:12 [timeless] ... promoting/sustatining internet as a platform for innovation 01:04:53 [timeless] [ Slide: Internet Evolution ] 01:05:07 [timeless] s/sustatining/sustaining/ 01:05:21 [timeless] ... Incremental changes 01:05:26 [timeless] ... seven layers 01:05:30 [timeless] ... independent building blocks 01:05:38 [rohit] rohit has joined #w3cdev 01:05:41 [timeless] ... flexible + responsive 01:05:52 [timeless] ... impossible to nail up a global deployment plan 01:06:02 [timeless] [ Slide: An external pressure ... IP addresses ] 01:06:08 [timeless] ... Running out of IPv4 addresses 01:06:25 [timeless] ... last allocation from IANA predicted for Oct 2011 01:06:38 [rohit] apologies for interrupting the scribe, but I wanted to share a link for the dumb-app-guy question I asked earlier: 01:06:41 [timeless] ... last allocation to ISP anywhere, predicted for Feb 2013 01:06:43 [Arron] Arron has joined #w3cdev 01:06:56 [timeless] ... Lots of IPv6 addresses 01:07:04 [rohit] -- an example of sw currently broken and the very first request from the devs was for a regex :) 01:07:12 [timeless] ... it's not going to be an ipv6 internet before the last ipv4 address is handed out 01:07:17 [rohit] viz "valid_url() marks correct IDN domains as invalid" 01:07:18 [timeless] ... more NATs 01:07:49 [timeless] [ Slide: Implications above the IP Addressing Layer ] 01:07:51 [timeless] ... IPAffinity breaks! 01:08:10 [timeless] ... a recent roundtable of industry leaders we held included reps from Google, Yahoo, Akamai, Netflix, and Comcast 01:08:29 [timeless] ... discussed impending impact on geolocation, geoproximity, management of distribution of copyrighted materials 01:08:48 [timeless] .... 01:08:57 [timeless] ... Multiple open streams breaks! 01:09:07 [timeless] ... sharing addresses => fewer ports => ajax apps have troubles 01:09:15 [timeless] ... poor performance of web pages, e.g. Google maps 01:09:56 [timeless] ... users see google maps tiling in slowly 01:10:07 [timeless] ... users don't blame the network, they blame the server 01:10:21 [timeless] [ Slide: Responses to the IP addressing situation] 01:10:36 [timeless] ... major isps and content providers are including ipv6 in their current deployment plans 01:10:44 [timeless] ... wireless boradband LTE has IPv6 baked in 01:11:00 [timeless] [ Slide: Opportunities in the IP Addressing Situation ] 01:11:13 [timeless] ... make sure your web servers are ipv6 capable 01:11:19 [timeless] ... don't write ipversion specific apps 01:11:38 [timeless] ... with ipv6 you can imagine a world where everything is uniquely addressable 01:11:49 [timeless] ... example of problems ... 01:11:55 [timeless] ... Opera tries to outsmart OS 01:12:10 [timeless] ... if it finds ipv6 address it will use it 01:12:20 [timeless] ... whereas vista might know to fail over to an ipv4 tunnel 01:12:32 [timeless] ... but it can't because opera didn't let it decide 01:13:15 [timeless] [ Slide: Another external pressure - Unwanted Traffic ] 01:13:26 [timeless] [ Slide: Responses to Unwanted Traffic ] 01:13:36 [timeless] 01:13:45 [timeless] ... no final conclusion 01:13:52 [timeless] [ Slide: Alternatives? ] 01:14:05 [timeless] ... top down imposition of security doesn't fit the Internet 01:14:15 [timeless] ... the internet is a "network of networks" 01:14:39 [TabAtkins_] TabAtkins_ has joined #w3cdev 01:14:44 [timeless] [ Slide: Security Tools Must Address Total Threat Model ] 01:15:11 [timeless] [ Slide: Different Security Mechanisms Are Needed for Different Threats ] 01:15:34 [timeless] [ Slide: Too Much Security Technology Impedes Usage, without Reducing Bad Behavior ] 01:15:37 [timeless] [ laughter ] 01:15:44 [timeless] [ Slide: One building block: DNSSEC ] 01:15:51 [timeless] ... secure results from domain name servers 01:16:07 [timeless] ... so you can be sure whatever you get back from dns is what the dns server intended to send you 01:16:14 [timeless] ... tamper proof packaging on dns responses 01:16:22 [timeless] ... this doesn't prevent phishing 01:16:47 [timeless] ... it doesn't encrypt the data in the response 01:16:55 [timeless] [ Slide: DNSSEC opportunities ] 01:17:06 [timeless] ... with DNSSEC you have a better platform for confidence in dns 01:17:14 [timeless] ... dnssec is deploying rather slowly 01:17:32 [timeless] ... i've referred to these in other contexts as broccoli techonologies 01:17:49 [timeless] ... you should eat it, but it's better with a cheese sauce 01:17:51 [timeless] ... but there's no cheese sauce 01:17:54 [timeless] [applause] 01:18:06 [timeless] Phillip (Cisco): ... 01:18:20 [timeless] ... when do you think ISPs will deliver IPv6 connectivity? 01:18:39 [timeless] Leslie: ... 01:18:51 [timeless] ... some soon in Europe, and a few maybe in the US 01:19:06 [timeless] ... I think it's fair to say of the service providers thinking about it 01:19:18 [timeless] ... they will have it deployed before *they* run out 01:19:37 [timeless] ... this is slightly better than before when it was like "yeah, we have it in our research lab" 01:19:44 [timeless] Tom: you mentioned an ISP 01:19:50 [timeless] ... what specific one? 01:20:03 [timeless] Leslie: of the list I have here, Comcast is the access one 01:20:12 [timeless] Mark: Have you seen a hoarding of ipv4 addresses? 01:20:28 [timeless] Leslie: I think some have retirement plans by auctioning them off 01:21:04 [IanJ] rrsagent, make minutes 01:21:04 [RRSAgent] I have made the request to generate IanJ 01:21:06 [timeless] ... In principal the regional providers have fairly strict releases of addresses 01:21:12 [timeless] Leslie: the problem is that we're going to run out 01:21:21 [timeless] Doug: ... 01:21:25 [timeless] ... I asked about a tutorial 01:21:30 [timeless] ... I still think you need a tutorial 01:21:34 [timeless] ... with a tweetable domain name 01:21:38 [timeless] Leslie: yeah, that'd be great 01:21:39 [timeless] [laughter] 01:21:53 [timeless] ... part of the challenge is that everyone's problem is different 01:22:02 [timeless] ... at some point, we'll figure out the commonalities 01:22:15 [timeless] ... we're trying to get some of the ones who have done it in a room with some who haven't 01:22:19 [timeless] ... so they can share knowledge 01:22:24 [timeless] dbaron: David Baron, Mozilla 01:22:31 [timeless] ... in what would you like to do with DNSSEC 01:22:40 [timeless] ... none of this is speaking for mozilla 01:22:49 [timeless] ... there are certain things i'd like to be able to do with dnssec 01:22:58 [timeless] ... among them is putting public keys in dns 01:22:59 [shepazu] put i18n urls in Acid4 :) 01:23:01 [timeless] ... to avoid using a CA 01:23:16 [timeless] ... another is putting email autoconfig information into dns 01:23:22 [timeless] ... another is to put a default https flag 01:23:37 [timeless] ... to say "foo.com" should go to " " instead of " " 01:23:43 [timeless] Leslie: thanks for thinking about the questions 01:23:49 [timeless] ... in terms of where to go with them 01:23:55 [timeless] ... some of them are pursued within IETF 01:24:10 [timeless] ... particularly, some levels of email, e.g. domain keys 01:24:34 [timeless] ... to say "this is the server allowed to send email for this domain" 01:24:43 [timeless] ... so the IETF is the right place to go for a lot of this 01:24:47 [timeless] Kevin: ... 01:24:56 [timeless] ... what would i like to do if i had lots of addressable points 01:25:02 [timeless] ... i'd like to setup servers on my own machines 01:25:05 [timeless] ... without proxies 01:25:13 [timeless] Leslie: yes 01:25:25 [timeless] ... it'd be good if people would stand up and say that loudly 01:25:34 [timeless] Kevin: we've seen this problem with real time flow 01:25:39 [timeless] Diana: ... 01:25:46 [timeless] ... what would i do if i had lots of addresses 01:25:51 [timeless] ... what if i was a washing machine 01:26:01 [timeless] ... what if i was an animal owner 01:26:06 [timeless] ... and i put a chip in each one 01:26:10 [timeless] ... or a hospital owner 01:26:16 [timeless] ... and i wanted chips in patients 01:26:25 [timeless] ... i think ... 01:26:43 [timeless] ... I think we'll run out of ipv6 within 10 years 01:26:55 [timeless] Tom: who is the definitive place for ipv6 01:26:59 [timeless] Leslie: all of them 01:27:05 [timeless] IanJ: thanks Leslie 01:27:09 [timeless] [applause] 01:27:24 [timeless] ScribeNick: IanJ 01:27:33 [timeless] RRSAgent: make minutes 01:27:33 [RRSAgent] I have made the request to generate timeless 01:28:41 [IanJ] Speaker: Kevin Marks on OpenID, etc. 01:29:01 [TabAtkins_] TabAtkins_ has joined #w3cdev 01:29:29 [IanJ] ...open social web standards 01:29:30 [dom] [note that JessyInk provides similar effects as Prezi in SVG] 01:29:57 [IanJ] KevinMarks: How I got to this point. 01:30:04 [IanJ] ...the problem is Babel 01:30:18 [IanJ] ...see the "Map of Online communities and related points of interest" 01:30:48 [IanJ] (one example: ) 01:31:18 [IanJ] KevinMarks: Histogram your users...people use 12345 and 90210 when they are lying to you :) 01:31:23 [timeless] 01:31:34 [IanJ] KevinMarks: You have to give people a reason to @@ 01:31:46 [IanJ] KevinMarks: Open social builds on existing infrastructure 01:31:59 [tantek] tantek has joined #w3cdev 01:32:10 [IanJ] KevinMarks: Defining an API for your favorite programming langauge...as long as it's javascript. 01:32:20 [timeless] s/langauge/language/ 01:32:28 [IanJ] Open social v0.89 enabled new client and programming mobdels by adding server to server protocols. 01:32:32 [IanJ] s/89/8 01:32:34 [caribou] [original pic at ] 01:32:38 [timeless] s/mobdels/models/ 01:32:53 [IanJ] KevinMarks: Over 1 billion users of open social stuff. 01:33:06 [IanJ] KevinMarks: developing REST APIs. 01:33:20 [IanJ] s/users/accounts (?) 01:33:39 [IanJ] KevinMarks: Challenge is to identify "me"...people accustomed to identify selves via HTTP URIs 01:33:45 [dom] s/accounts (?)/users 01:34:00 [IanJ] KevinMarks: WebFinger(email) -> URI 01:34:14 [IanJ] KevinMarks: Next think you want to know is "my friends" 01:34:29 [IanJ] KevinMarks: Portable contacts....bcard + some useful fields used by most of the social networks. 01:34:56 [smfr] vcard 01:34:59 [IanJ] s/bcard/vcatrd 01:35:03 [IanJ] s/vcatrd/vcard 01:35:12 [IanJ] KevinMarks: what we do....(photos, etc.) 01:35:24 [IanJ] KevinMarks: the model underneath that is "feeds" but those were designed for blogs. 01:35:37 [IanJ] KevinMarks: Activity Streams codify "actions' (that were not part of feed design) 01:36:01 [IanJ] KevinMarks: Notion of "flow" ...atom pub (posting; and equivalent JSON APIs)...and newer: pubsubhubbub 01:36:08 [IanJ] ...a way to get around the feed polling problem. 01:36:20 [IanJ] ...you don't check the feed ever N cycles...you get a post when the feed changes. 01:36:43 [IanJ] ...Salmon builds on those ideas....codifies "going back up the stream and down again" 01:36:56 [IanJ] KevinMarks: A big chunk of the challenge is to get delegated login. 01:37:00 [IanJ] ...didn't get you that much... 01:37:09 [IanJ] ...not much improvement to actual user experience. 01:37:23 [IanJ] ...but now we have more to help solve form-filling problem. 01:37:48 [IanJ] ...you can make a business case now for using the APIs rather than creating yet-another-UI 01:38:08 [IanJ] ...we are starting to see this implemented 01:38:26 [IanJ] ...you can delegate your logins to the site...will go to site and get not just user identify, but richer identity as well. 01:38:43 [IanJ] ...not quite convergence, but we are trying to pull them together (from different site approaches) 01:38:52 [IanJ] ...OAuth is a way of issuing tokens. 01:39:17 [IanJ] ...you do an HTTP request; knows who you logged in as and your creds; gives you back things you have right to. 01:39:32 [IanJ] ...replaces cookies; state management doing in RESTful fashion 01:39:43 [IanJ] ...google and yahoo offer this for all their services; twitter likely to as well 01:40:07 [IanJ] ...empirical standards (as we experienced with microformats) 01:40:15 [IanJ] ...focused on agreement rather than completeness. 01:41:04 [IanJ] ..."t-shirt not a suit" 01:41:16 [IanJ] ..."good enough standard" 01:41:32 [IanJ] ...example of portable contacts. 01:41:41 [IanJ] ...we looked at social networks and what they have in common. 01:42:15 [IanJ] ...activity stream stuff...we have enough social network sites...what actions are they taking that is common enough to build a vocabulary 01:42:36 [IanJ] [end of overview of the space] 01:42:53 [IanJ] ...ad hoc realm. 01:43:14 [IanJ] IJ: Have you taken some to IETF? 01:43:16 [IanJ] KevinMarks: Yes. 01:43:28 [IanJ] KevinMarks: We've set up foundations...but then created OWF. 01:43:57 [IanJ] KevinMarks: as a foundation factory...to do all the legal stuff that you have to do...so you could use this in other places...model was the apache foundation...but to do for specs what apache does for code. 01:44:53 [IanJ] ...I've worked in video standards before...didn't seem in these cases to have the same patent thicket. 01:45:18 [IanJ] Dan Appelquist (Vodafone): How would you compare this approach to integrating social networks to one based on XMPP? 01:45:27 [IanJ] KevinMarks: Bits and pieces around that. 01:45:34 [IanJ] ...there's some overlap and some you can bridge through. 01:45:48 [IanJ] ...a lot of this came from open social experience...and part was moving through their comfort zone. 01:46:41 [IanJ] ...there's nothing stopping you from sending this over XMPP (as transport) 01:46:42 [IanJ] DanA: I ask because I have heard a view expressed -- isn't all of this retrofitting onto existing web sites something that could be done with a different approach? 01:46:54 [IanJ] KevinMarks: pubsubhubbub stuff closest to xmpp... 01:47:17 [IanJ] ..there's some similarity, but a lot was about web developers writing web stuff....but that is changing... 01:47:22 [IanJ] ...I think a lot comes down to tastes. 01:47:49 [IanJ] rrsagent, make minutes 01:47:49 [RRSAgent] I have made the request to generate IanJ 01:48:12 [IanJ] KevinMarks: You can build bridges...there are also cultural tastes among programmers. 01:48:30 [IanJ] ...for some, dynamic programming languages not scary, for others it may be. 01:48:52 [IanJ] DanA: In the social web incubator group meeting we held this week, we spent a lot of time talking about user stories 01:49:09 [IanJ] ....I'm a user on one social network; I want to create a friend connection to someone on another social network. 01:49:18 [IanJ] ...how would you do that? 01:49:33 [IanJ] KevinMarks: When we defined open social, it was with one site in mind. 01:49:50 [IanJ] ...but we are now at the point where it's becoming more important....xfn in microformats. 01:49:57 [IanJ] ...that works like crawling foaf works. 01:50:26 [IanJ] ...many sites have mixes of public and private...you can't just use a crawler over public data. 01:50:30 [tantek] crawling XFN works like that today, using HTML <a href> today 01:50:33 [IanJ] ...you need to be able to provide access control. 01:50:44 [tantek] OAuth provides the access control for private data 01:50:47 [IanJ] ..there are still some issues sorting out assertions from multiple parties. 01:51:01 [IanJ] ...there may be some bindings I can make that you may not want to become public. 01:51:30 [IanJ] ...we've punted on some of the stickiness...we addressed some issues first (such as "no more forms asking for personal data") 01:51:41 [IanJ] ...the delegation part becomes important. 01:51:53 [IanJ] ...about 2000 twitter apps now. 01:52:02 [IanJ] ...because you can delegate into it the list of people you are interested in. 01:52:28 [IanJ] ...we are trying to correlate patterns in various apps and get commonality. 01:52:48 [IanJ] timbl: When you want to aggregate cal info there are two ways (1) go to a site or (2) run something on your laptop that goes to fetch info. 01:53:02 [IanJ] ...if you run on your laptop you don't have delegated authentication. No site knows everything about you. 01:53:20 [IanJ] ...you don't give one site access to stuff, where another site might be confused about access boundaries. 01:53:41 [IanJ] ...how do you see competition between cals on desktops and sites going in the future? 01:53:48 [IanJ] KevinMarks: I would hope we could use the same protocols for both. 01:54:08 [IanJ] ...I can't get a remote site to call me back on my laptop..I have to open the connection first. 01:54:16 [IanJ] ...I have to do those things over a "hanging GET" from the browser. 01:54:27 [IanJ] ...rather than opening ports to listen to things 01:54:35 [IanJ] ...that militates towards going to the site. 01:54:52 [IanJ] timbl: if you are a native desktop app, you can open a port. 01:54:56 [IanJ] KevinMarks: It's a NAT problem. 01:55:02 [IanJ] (e.g., from a cafe) 01:55:15 [IanJ] KevinMarks: That's driving people to web services that feed info through. 01:55:27 [IanJ] ...services can use sms, email, other protocols. 01:56:09 [IanJ] KevinMarks: Once we can put up servers again (with ipv6), that will help. 01:56:21 [IanJ] timbl: I think a lot has been architected differently because of NAT. 01:56:45 [IanJ] KevinMarks: Bittorrent is arguably a layer that tries to game TCP. 01:57:14 [IanJ] rohit: a couple of the big open id scares (some since resolved) hover around this issue. 01:57:46 [IanJ] rrsagent, make minutes 01:57:46 [RRSAgent] I have made the request to generate IanJ 01:58:20 [IanJ] KevinMarks: A big chunk of this is constraining delegation to what is "should be." 01:58:35 [IanJ] ...may not be better, but is better than name/password and associated. 02:00:05 [IanJ] timeless++ 02:00:13 [IanJ] Topic: John Schneider on EXI 02:00:47 [marie] timeless++ 02:00:50 [marie] timeless++ 02:00:59 [IanJ] John: Efficient XML Interchange 02:01:16 [IanJ] John: Sometimes you need just the right problem to kick you to the next level of evolution. 02:01:22 [IanJ] ...web is always evolving to new places. 02:01:34 [IanJ] ...part of what EXI is meant to do is take the Web/XML to new places. 02:01:46 [KevinMarks] my prezi is at if you can pardon my Flash 02:01:46 [IanJ] ...XML has been wildly successful: communities, vendors, open source 02:01:52 [IanJ] thanks! 02:02:45 [IanJ] ...we want to make it easier to use xml in environments with bandwidth/space limitations. 02:03:12 [IanJ] ...people wanted to be able to tap into communities and tools...30 or so binary xml technologies that popped up. 02:03:22 [IanJ] ...diversity is good for evolution but not particularly good for interop. 02:04:07 [IanJ] ...created EXI WG 02:04:12 [IanJ] ...at first, nobody believed. 02:04:30 [IanJ] ...we brought experts together...9 months later and 147 pages of analysis, found one! 02:04:39 [IanJ] [EXI Benchmarks] 02:04:57 [IanJ] ...lots of other specs published at w3c...will give interop across a broad set of use cases. 02:05:10 [IanJ] ...a lot of the people behind this were the people previously competing...fracturing of marketplace going away. 02:05:16 [IanJ] ...we are looking at one good solution. 02:05:31 [IanJ] ...source is info theory and formal language theory 02:05:43 [IanJ] ...the results are great: 02:05:49 [IanJ] - bandwidth utilization 02:05:52 [IanJ] - faster processor speeds 02:05:58 [IanJ] - greater battery life 02:06:11 [IanJ] ...simultaneously optimizes a lot of things 02:06:16 [IanJ] rrsagent, make minutes 02:06:16 [RRSAgent] I have made the request to generate IanJ 02:06:36 [IanJ] ....we wanted to see how compares to compression....lots of test cases...better in every case and faster 02:06:48 [IanJ] ...if you compare to packed binary formats, it consistently beats those as well 02:06:55 [IanJ] very efficient way to share data in general. 02:07:12 [IanJ] [demo time] 02:07:29 [IanJ] real world example to send 1M data to an aircraft 02:07:35 [IanJ] With EXI was 1 second. 02:07:40 [IanJ] Without EXI 2:23 02:08:01 [IanJ] ...there is some processing time on the other end. 02:08:10 [IanJ] ...but it's not compression...you process it faster on the other end, too 02:11:12 [caribou] caribou has left #w3cdev 02:12:18 [LeslieB] LeslieB has joined #w3cdev 02:20:16 [IanJ] IanJ has joined #w3cdev 02:33:35 [Arron] Arron has joined #w3cdev 02:48:45 [timbl] timbl has joined #w3cdev 04:14:07 [rohit] rohit has joined #w3cdev 04:17:28 [midnaz] midnaz has left #w3cdev 04:25:28 [LeslieB] LeslieB has joined #w3cdev 04:34:10 [Arron] Arron has joined #w3cdev 06:11:32 [jun] jun has joined #w3cdev 06:17:26 [Kangchan] Kangchan has joined #w3cdev 06:55:40 [LeslieB] LeslieB has joined #w3cdev 07:48:23 [tantek] tantek has joined #w3cdev 07:53:35 [IanJ] IanJ has joined #w3cdev 08:13:39 [maxf] maxf has joined #w3cdev 09:35:57 [dsr] dsr has joined #w3cdev
http://www.w3.org/2009/11/05-w3cdev-irc
CC-MAIN-2015-18
refinedweb
14,911
75.81
Created on 2014-04-14 17:31 by ballingt, last changed 2015-07-24 04:00 by python-dev. This issue is now closed. """ inspect.getsourcelines incorrectly guesses what lines correspond to the function foo see getblock in inspect.py once it finds a lambda, def or class it finishes it then stops so get getsourcelines returns only the first two noop decorator lines of bar, while normal behavior is to return all decorators as it does for foo """ import inspect from pprint import pprint def noop(arg): def inner(func): return func return inner @noop(1) @noop(2) def foo(): return 1 @noop(1) @noop(lambda: None) @noop(1) def bar(): return 1 pprint(inspect.getsourcelines(foo)) pprint(inspect.getsourcelines(bar)) "The code object's co_lnotab is how inspect should be getting the sourcelines of the code, instead of looking for the first codeblock." I'm looking at this now, thanks to Yhg1s for the above. This patch adds tests demonstrating broken behavior inspect.getsource and inspect.getsourcelines of decorators containing lambda functions, and modifies inspect.getsourcelines to behave correctly. We use co_lnotab to extract line numbers on all objects with a code object. inspect.getsourcelines can also take a class, which cannot use co_lnotab as there is no associated code object. @ballingt and I paired on this patch. Some open questions about inspect.getsource not created or addressed by this patch: - Is this a bug that should be patched in previous versions as well? - the docs for say it can take a traceback. What is the correct behavior here? There aren't any tests at the moment. We suggest the line of code that caused the traceback, i.e. the line at tb.tb_lineno - We added tests of decorated classes. The source of decorated classes does not include the decorators, which is different than the usual behavior of decorated functions. What is the correct behavior here? - inspect.getblock and inspect.BlockFinder use the term "block" in a way that is inconsistent with its typical use in the interpreter (that is, in ceval.c). Should this be renamed? If so, to what? ("chunk"?) v2 of the patch incorporating the comments at Apart from one nit, the patch is looking good. Also, could you please sign the contributor agreement, as described here: "- We added tests of decorated classes. The source of decorated classes does not include the decorators, which is different than the usual behavior of decorated functions. What is the correct behavior here?" There is an open issue for this,. It has a patch which uses inspect.unwrap in order to unwrap the decorated functions. Claudiu: I'll take a look at your patch, thanks! v3 of patch, including misc/news update, docstring for function, and removing class decorator tests, since it sounds like those are better handled in. v4 of patch, with tests updated for changed lines in inspect fodder file Patch reformatted to be non-git style, NEWS item removed Use dis.findlinestarts() to find lines of function instead of grubbing with co_lnotab manually, making dis module dependency non-optional. It sounds like at least a somewhat functional dis module is a pragmatic requirement for a Python implementation to support introspection, so +1 for reverting to the mandatory dependency on dis rather than duplicating its logic. New changeset ac86e5b2d45b by Antoine Pitrou in branch 'default': Issue #21217: inspect.getsourcelines() now tries to compute the start and I've committed the latest patch. Thank you, Thomas! Thanks Antoine! Could you add Allison Kaptur to NEWS and ACKS? This was an update to her original patch, and we paired on the whole thing. New changeset 582e8e71f635 by Benjamin Peterson in branch 'default': add Allison Kaptur (#21217) I strongly suspect that ac86e5b2d45b is the cause of the regression reported in #24485. def outer(): def inner(): inner1 from inspect import getsource print(getsource(outer)) omits the body of inner. Ditto if outer is a method. All is okay if outer is a method and the source of the class is requested. Could the authors, Allison and Thomas, take a look and try to fix _line_number_helper to pass the added test (and possibly incorporate it into findsource)? Since there is a new issue already, this one can be left closed and further work posted to #24485. Here's an update on #24485 regression. Looks like getsource() is now using code objects instead of tokenizer to determine blocks first/last lines. The problem with this particular case is that "inner" function's code object is completely independent from "outer"'s. So, for an outer() function below: def outer(): def inner(): never_reached1 never_reached2 the code object contains the following opcodes: 71 0 LOAD_CONST 1 (<code object inner ...>) 3 LOAD_CONST 2 ('outer1.<locals>.inner') 6 MAKE_FUNCTION 0 9 STORE_FAST 0 (inner) 12 LOAD_CONST 0 (None) 15 RETURN_VALUE The correct solution is to use co_lnotab along with co_firstlineno to iterate through opcodes recursively accounting for MAKE_FUNCTION's code objects. *However*, I don't think we can solve this for classes, i.e. def outer_with_class(): class Foo: b = 1 a = 2 there is no way (as far as I know) to get information about the Foo class start/end lineno. I think that the only way we can solve this is to revert the patch for this issue. > I think that the only way we can solve this is to revert the patch for this issue. I agree with this. It seems like doing this analysis at the bytecode level is the wrong approach. Perhaps the syntactical analysis being used before should be beefed up to handle things like the lambda case. FYI, I posted a patch to handle this case and the regression noted in issue24485 on issue24485. New changeset 4e42a62d5648 by Yury Selivanov in branch '3.5': Issue #24485: Revert backwards compatibility breaking changes of #21217. New changeset 98a2bbf2cce2 by Yury Selivanov in branch 'default': Merge 3.5 (issues #21217, #24485). New changeset 5400e21e92a7 by Meador Inge in branch '3.5': Issue #24485: Function source inspection fails on closures. New changeset 0e7d64595223 by Meador Inge in branch 'default': Issue #24485: Function source inspection fails on closures.
https://bugs.python.org/issue21217
CC-MAIN-2018-09
refinedweb
1,021
66.13
Asked by: best obfuscator Question All replies Hi I use Eziriz's .Net Reactor. Of course as you said they all claim that they're the best, but beside it's power and simplicity of use, Reactor is really cheap (Only about 160$) so it's not a big risk to test it; although I would advise you to use it for everything of it, not only the price. You can download the trial here: HTH cheers, farshad Hi As I mentioned, I don't advise .Net Reactor for it's price, but for it's power; I'm really satisfied with it. Another solution that I think should be a reliable one, is Xenocode; you can take a look on it, since I've only tested it's trial (the trial is fully functional) and have no idea on all of it's power. HTH cheers, farshad - One thing i noticed is that lost of obfuscator out there are more or less like "project product". They have a limited or sometimes none existent sale and support systems. The only one which looks commercially descent is dotfuscator (prob the reason why it is more pricey). So what is your take on Dotfuscator? please dont mind, Eziriz is just a crap. You loose all benifites of .Net and at runtime our application will consume 10-15 MB more memory than the original exe. You can use Reflection. I tried and it and never used it again. Xenocode is good but try {smartassembly} is best in my opinion, price, performance, facilities The Best. Have {smartassembly} professional version with 1 year Subcription to Error Reporting Webservice, which sends compleet stack trace a very usefull way. I'm not from that company :P lol I'm just a client but I loved the product. Standards Edtion of the same price as of Xenocode but I'll recomend Professianal version with Error Reporting sysetem becuase client cant never tell you the exact situation from error arosen. Some Comparison I did: 1) Assembly Processed with {smartassembly} is less in size than one processed with Xenocode. 2) Xenocode Preserves Namespace names which can never be Obfuscated and You give a hint to cracker that I have code related to security in Rizwansharp.Security namespace. while {smartassembly}.......... 3) In {smartassembly} there is a single option to encrypt all the string or not? setting this to true, all your strings are automatically enoded in Xenocode u have to mention 10000 string you used that what to encrypt and what not. 4) Its user interface is like 1,2,3 Done! I'm going to date my girlriend now:P. 5) I always got my queries reply within 3-4 hours Maximum. Price Xenocode = 400$ - smartassembly = 399$ - 799$ (3 Versions) Professional version is about 599$ and its awesome tool that works in minutes and your are Done! Etc................ Both products are available fully function for some 15-20 days and All my above things will be proved to you. Best Regards, - Proposed as answer by Dimitrakis Sunday, November 7, 2010 11:53 AM - There is no best obfuscator. Every product will have few shortcomings. If you want a professional, affordable product with various protection and obfuscation functionality, flexibility of use (Direct via UI, via command-line, via MSBuild), then take a look at Crypto Obfuscator ( )
https://social.msdn.microsoft.com/Forums/vstudio/en-US/4eb99036-68b2-4567-aac1-170f8e570122/best-obfuscator?forum=csharpgeneral
CC-MAIN-2019-30
refinedweb
555
61.97
16.8. Distributions¶ Now that we have learned about how to work with probability theory in both discrete and continuous setting, lets get to know some of the common random distributions encountered. Depending on the area of machine learning we are working in, we may potentially need to be familiar with vastly more of these, or for some areas of deep learning potentially none at all. This is, however, a good basic list to be familiar with. Let’s first import some common libraries. %matplotlib inline import d2l from IPython import display from math import erf, factorial import numpy as np 16.8.1. Bernoulli¶ This is the simplest random variable usually encountered. This is the random variable that encodes a coin flip which comes up \(1\) with probability \(p\) and \(0\) with probability \(1-p\). If we have a random variable with this distribution, we will write The cumulative distribution function is The probability mass function is plotted below. p = 0.3 d2l.set_figsize() d2l.plt.stem([0, 1], [1 - p, p], use_line_collection=True) d2l.plt.xlabel('x') d2l.plt.ylabel('p.m.f.') d2l.plt.show() Now, let us plot the cumulative distribution function. x = np.arange(-1, 2, 0.01) F = lambda x: 0 if x < 0 else 1 if x > 1 else 1 - p d2l.plot(x, np.array([F(y) for y in x]), 'x', 'c.d.f.') If \(X \sim \mathrm{Bernoulli}(p)\), then: \(\mu_X = p\), \(\sigma_X^2 = p(1-p)\). We can sample an array of arbitrary shape from a Bernoulli random variable as follows. 1*(np.random.rand(10, 10) < p) array([[0, 0, 0, 0, 0, 1, 0, 1, 1, 0], [1, 0, 1, 0, 0, 0, 1, 0, 1, 1], [1, 1, 0, 1, 1, 1, 0, 1, 0, 0], [0, 0, 1, 0, 0, 0, 0, 0, 0, 0], [1, 0, 0, 0, 0, 1, 1, 1, 0, 0], [0, 0, 0, 1, 1, 1, 0, 0, 0, 1], [1, 0, 1, 1, 1, 0, 0, 0, 0, 0], [0, 0, 0, 1, 0, 0, 1, 1, 0, 1], [0, 1, 0, 1, 1, 1, 0, 1, 0, 0], [1, 0, 0, 1, 1, 1, 1, 0, 1, 0]]) 16.8.2. Discrete Uniform¶ The next random variable encountered is a discrete uniform distribution. For our discussion here, we will assume that it is on the integers \(\{1,2,\ldots, n\}\), however any other set of values can be freely chosen. The meaning of the word uniform in this context is that every possible value is equally likely. The probability for each value \(i \in \{1,2,3,\ldots,n\}\) is \(p_i = \frac{1}{n}\). We will denote this relationship as The cumulative distribution function is Let us first plot the probabilty mass function. n = 5 d2l.plt.stem([i+1 for i in range(n)], n*[1 / n], use_line_collection=True) d2l.plt.xlabel('x') d2l.plt.ylabel('p.m.f.') d2l.plt.show() Now, let us plot the cumulative distribution function. x = np.arange(-1, 6, 0.01) F = lambda x: 0 if x < 1 else 1 if x > n else np.floor(x) / n d2l.plot(x, np.array([F(y) for y in x]), 'x', 'c.d.f.') If \(X \sim \mathrm{Uniform}(n)\), then: \(\mu_X = \frac{1+n}{2}\), \(\sigma_X^2 = \frac{n^2-1}{12}\). We can an array of arbitrary shape from a discrete uniform random variable as follows. Note that the range np.random.random_integers(1, n, size=(10, 10)) /var/lib/jenkins/miniconda3/envs/d2l-en-numpy2-0/lib/python3.7/site-packages/ipykernel_launcher.py:1: DeprecationWarning: This function is deprecated. Please call randint(1, 5 + 1) instead """Entry point for launching an IPython kernel. array([[3, 4, 5, 2, 1, 3, 1, 3, 5, 1], [1, 3, 3, 3, 2, 1, 1, 5, 2, 2], [2, 4, 1, 4, 3, 4, 2, 3, 3, 3], [1, 3, 2, 1, 1, 1, 4, 4, 5, 2], [5, 1, 2, 4, 2, 2, 2, 1, 4, 2], [1, 4, 2, 1, 5, 4, 4, 5, 1, 1], [1, 4, 3, 3, 2, 5, 2, 4, 4, 2], [2, 4, 3, 5, 4, 3, 5, 2, 5, 4], [3, 3, 3, 5, 5, 4, 3, 2, 3, 2], [5, 4, 1, 4, 3, 1, 2, 5, 1, 4]]) 16.8.3. Continuous Uniform¶ Next let us discuss the continuous uniform distribution. The idea behind this random variable is that if we increase the \(n\) in the previous distribution, and then scale it to fit within the interval \([a,b]\), we will approach a continuous random variable that just picks an arbitrary value in \([a,b]\) all with equal probability. We will denote this distribution as The probability density function is The cumulative distribution function is Let us first plot the probabilty density function. a = 1; b = 3 x = np.arange(0, 4, 0.01) p = (x > a)*(x < b)/(b - a) d2l.plot(x, p, 'x', 'p.d.f.') Now, let us plot the cumulative distribution function. F = lambda x: 0 if x < a else 1 if x > b else (x - a) / (b - a) d2l.plot(x, np.array([F(y) for y in x]), 'x', 'c.d.f.') If \(X \sim \mathrm{Uniform}([a,b])\), then: \(\mu_X = \frac{a+b}{2}\), \(\sigma_X^2 = \frac{(b-a)^2}{12}\). We can an array of arbitrary shape from a uniform random variable as follows. Note that it by default samples from a \(\mathrm{Uniform}([a,b])\), so if we want a different range we need to scale it. (b - a) * np.random.rand(10, 10) + a array([[1.97863545, 2.19450855, 1.07654626, 1.47644602, 1.24225098, 1.41514747, 1.30560456, 1.32237796, 2.56736606, 2.35569971], [1.13285511, 1.66747206, 1.00935121, 1.65552948, 2.83082254, 1.59803672, 1.27982038, 2.05899742, 1.44309463, 2.71594239], [2.40546113, 2.81293025, 1.94324496, 2.08172079, 1.56761007, 1.39812837, 1.60949606, 1.25130137, 2.06889312, 2.4447495 ], [2.63942405, 1.68514547, 1.96110155, 1.76573757, 2.04322633, 2.10628951, 1.42376643, 2.44424897, 2.97011143, 2.57180954], [1.77310147, 1.84672146, 2.32683545, 2.74585997, 2.88942827, 1.02166986, 2.29433428, 2.1629317 , 1.06540308, 2.77064838], [2.99585323, 1.38269222, 1.74415853, 2.93155375, 2.43913259, 1.52552717, 1.07653743, 1.51226967, 2.43689424, 2.46427841], [1.28680264, 1.60810716, 1.81830373, 2.05617726, 2.63414142, 2.56777828, 2.71815194, 2.93125459, 2.0281178 , 1.21856788], [1.88541515, 2.27727245, 1.05775259, 2.81159144, 1.95347226, 2.4153804 , 2.33073618, 1.59514593, 1.39773888, 1.73778526], [1.14210304, 1.16283585, 2.15379447, 2.01919932, 1.14323683, 1.29073976, 2.10812538, 2.99692759, 2.4785342 , 2.75779667], [1.622822 , 2.49068741, 2.55102505, 1.83224874, 2.93238589, 1.33181034, 1.85743828, 2.29555114, 2.04772667, 2.87794657]]) 16.8.4. Binomial¶ Let us make things a little more complex and examine the binomial random variable. This random variable originates from performing a sequence of \(n\) independent experiments, each of which have probability \(p\) of succeeding, and asking how many successes we expect to see. Let us express this mathematically. Each experiment is an independent random variable \(X_i\) where we will use \(1\) to encode success, and \(0\) to encode failure. Since each is an independent coin flip which is successful with probability \(p\), we can say that \(X_i \sim \mathrm{Bernoulli}(p)\). Then, the binomial random variable is In this case, we will write To get the cumulative distribution function, we need to notice that getting exactly \(k\) successes can occur in \(\binom{n}{k} = \frac{n!}{k!(n-k)!}\) ways each of which has a probability of \(p^m(1-p)^{n-m}\) of occuring. Thus the cumulative distribution function is Let us first plot the probabilty mass function. n = 10 p = 0.2 # Compute binomial coefficient def binom(n, k): comb = 1 for i in range(min(k, n - k)): comb = comb * (n - i) // (i + 1) return comb pmf = np.array([p**i * (1-p)**(n - i) * binom(n, i) for i in range(n + 1)]) d2l.plt.stem([i for i in range(n + 1)], pmf, use_line_collection=True) d2l.plt.xlabel('x') d2l.plt.ylabel('p.m.f.') d2l.plt.show() Now, let us plot the cumulative distribution function. x = np.arange(-1, 11, 0.01) cmf = np.cumsum(pmf) F = lambda x: 0 if x < 0 else 1 if x > n else cmf[int(x)] d2l.plot(x, np.array([F(y) for y in x.tolist()]), 'x', 'c.d.f.') While this result is not simple, the means and variances are. If \(X \sim \mathrm{Binomial}(n,p)\), then: \(\mu_X = np\), \(\sigma_X^2 = np(1-p)\). This can be sampled as follows. np.random.binomial(n, p, size = (10,10)) array([[2, 2, 2, 1, 2, 1, 0, 2, 0, 0], [3, 2, 1, 1, 2, 4, 4, 0, 2, 5], [2, 1, 1, 2, 4, 3, 3, 2, 3, 0], [0, 4, 3, 0, 4, 2, 1, 1, 2, 5], [1, 4, 1, 2, 2, 0, 4, 0, 3, 1], [0, 1, 2, 1, 3, 2, 2, 2, 4, 4], [3, 3, 2, 1, 3, 3, 1, 2, 0, 1], [1, 2, 1, 2, 1, 3, 3, 4, 0, 5], [1, 0, 1, 3, 2, 1, 3, 2, 0, 3], [1, 2, 2, 2, 1, 1, 2, 2, 1, 3]]) 16.8.5. Poisson¶ Let us now perform a thought experiment. Let us say we are standing at a bus stop and we want to know how many buses will arrive in the next minute. Lets start by considering \(X^{(1)} \sim \mathrm{Bernoulli}(p)\) Which is simply the probability that a bus arrives in the one minute window. For bus stops far from an urban center, this might be a pretty good approximation since we will never see more than one bus at a time. However, if we are in a busy area, it is possible or even likely that two buses will arrive. We can model this by splitting our random variable into two parts for the first 30 seconds, or the second 30 seconds. In this case we can write where \(X^{(2)}\) is the total sum, and \(X^{(2)}_i \sim \mathrm{Bernoulli}(p/2)\). The total distribution is then \(X^{(2)} \sim \mathrm{Binomial}(2,p/2)\). Why stop here? Let us continue to split that minute into \(n\) parts. By the same reasoning as above, we see that Let us consider these random variables. By the previous section, we know that this has mean \(\mu_{X^{(n)}} = n(p/n) = p\), and variance \(\sigma_{X^{(n)}}^2 = n(p/n)(1-(p/n)) = p(1-p/n)\). If we take \(n \rightarrow \infty\), we can see that these numbers stabilize to \(\mu_{X^{(\infty)}} = p\), and variance \(\sigma_{X^{(\infty)}}^2 = p\)! What this indicates is that there could be some random variable we can define which is well defined in this infinite subdivision limit. This should not come as too much of a surprise, since in the real world we can just count the number of bus arrivals, however it is nice to see that our mathematical model is well defined. This result is known as the law of rare events. Following through this reasoning carefully, we can arrive at the following model. We will say that \(X \sim \mathrm{Poisson}(\lambda)\) if it is a random variable which takes the values \(\{0,1,2,\ldots\}\) with probability The value \(\lambda > 0\) is known as the rate, and denotes the average number of arrivals we expect in one unit of time (note that we above restricted our rate to be less than zero, but that was only to simplify the explanation). We may sum this probability mass function to get the cumulative distribution function. Let us first plot the probabilty mass function. lam = 5.0 xs = [i for i in range(20)] pmf = np.array([np.exp(-lam) * lam**k / factorial(k) for k in xs]) d2l.plt.stem(xs, pmf, use_line_collection=True) d2l.plt.xlabel('x') d2l.plt.ylabel('p.m.f.') d2l.plt.show() Now, let us plot the cumulative distribution function. x = np.arange(-1, 21, 0.01) cmf = np.cumsum(pmf) F = lambda x: 0 if x < 0 else 1 if x > n else cmf[int(x)] d2l.plot(x, np.array([F(y) for y in x.tolist()]), 'x', 'c.d.f.') As we saw above, the means and variances are particularly concise. If \(X \sim \mathrm{Poisson}(\lambda)\), then: \(\mu_X = \lambda\), \(\sigma_X^2 = \lambda\). This can be sampled as follows. np.random.poisson(lam, size=(10, 10)) array([[ 8, 6, 6, 4, 3, 4, 6, 8, 5, 6], [ 3, 3, 4, 2, 3, 5, 5, 8, 7, 6], [ 2, 7, 4, 6, 6, 7, 9, 6, 3, 3], [ 6, 7, 8, 4, 7, 2, 2, 6, 1, 4], [ 4, 7, 2, 2, 5, 7, 6, 5, 10, 5], [ 6, 3, 3, 5, 2, 6, 11, 1, 6, 7], [ 6, 6, 5, 7, 5, 6, 3, 3, 8, 8], [ 5, 3, 5, 6, 5, 3, 3, 9, 3, 4], [ 5, 7, 10, 4, 4, 2, 3, 6, 7, 5], [ 7, 5, 7, 6, 5, 1, 2, 4, 3, 8]]) 16.8.6. Gaussian¶ Now Let us try a different, but related experiment. Let us say we again are performing \(n\) independent \(\mathrm{Bernoulli}(p)\) measurements \(X_i\). The distribution of the sum of these is \(X^{(n)} \sim \mathrm{Binomial}(n,p)\). Rather than taking a limit as \(n\) increases and \(p\) decreases, Let us fix \(p\), and then send \(n \rightarrow \infty\). In this case \(\mu_{X^{(n)}} = np \rightarrow \infty\) and \(\sigma_{X^{(n)}}^2 = np(1-p) \rightarrow \infty\), so there is no reason to think this limit should be well defined. However, not all hope is lost! Let us just make the mean and variance be well behaved by defining This can be seen to have mean zero and variance one, and so it is plausible to believe that it will converge to some limiting distribution. p = 0.2 ns = [1, 10, 100, 1000] d2l.plt.figure(figsize=(10, 3)) for i in range(4) : n = ns[i] pmf = np.array([p**i * (1-p)**(n-i) * binom(n, i) for i in range(n + 1)]) d2l.plt.subplot(1, 4, i + 1) d2l.plt.stem([(i - n*p)/np.sqrt(n*p*(1 - p)) for i in range(n + 1)], pmf, use_line_collection=True) d2l.plt.xlim([-4, 4]) d2l.plt.xlabel('x') d2l.plt.ylabel('p.m.f.') d2l.plt.title("n = {}".format(n)) d2l.plt.show() One thing to note: compared to the Poisson case, we are now diving by the standard deviation which means that we are squeezing the possible outcomes into smaller and smaller areas. This is an indication that our limit will no longer be discrete, but rather a continuous distribution. A derivation of what occurs is well beyond the scope of this document, but the central limit theorem states that as \(n \rightarrow \infty\), this will yield the Gaussian Distribution (or sometimes Normal distribution). More explicitly, for any \(a,b\): where we say a random variable is normally distributed with given mean \(\mu\) and variance \(\sigma^2\), written \(X \sim \mathcal{N}(\mu,\sigma^2)\) if \(X\) has density Let us first plot the probability density function. mu = 0; sigma = 1 x = np.arange(-3, 3, 0.01) p = 1 / np.sqrt(2 * np.pi * sigma**2) * np.exp(-(x - mu)**2 / (2 * sigma**2)) d2l.plot(x, p, 'x', 'p.d.f.') Now, let us plot the cumulative distribution function. def phi(x): return (1.0 + erf((x - mu) / (sigma * np.sqrt(2)))) / 2.0 d2l.plot(x, np.array([phi(y) for y in x.tolist()]), 'x', 'c.d.f.') Keen-eyed readers will recognize some of these terms. Indeed, we encountered this integral we encountered in Section 16.5. Indeed we need exactly that computation to see that this \(p_X(x)\) has total area one and is thus a valid density. Our choice of working with coin flips made computations shorter, but nothing about that choice was fundamental. Indeed, if we take any collection of independent identically distributed random variables \(X_i\), and form then will be approximately Gaussian. This is the reason that the Gaussian is so central to probability, statistics, and machine learning. Whenever we can say that something we measured is a sum of many small independent contributions, we can safely assume that the thing being measured will be close to Gaussian. There are many more fascinating properties of Gaussians than we can get into at this point. In particular, the Gaussian is what is known as a maximum entropy distribution. We will get into entropy more deeply in Section 16.11, however all we need to know at this point is that it is a measure of randomness. In a rigorous mathematical sense, we can think of the Gaussian as the most random choice of random variable with fixed mean and variance. Thus, if we know that our random variable has some mean and variance, the Gaussian is in a sense the most conservative choice of distribution we can make. To close the section, Let us recall that if \(X \sim \mathcal{N}(\mu,\sigma^2)\), then: \(\mu_X = \mu\), \(\sigma_X^2 = \sigma^2\). We can sample from the Gaussian (or normal) as shown below. np.random.normal(mu, sigma, size=(10, 10)) array([[-1.04883453, 1.99713339, -0.17245203, -0.77078552, 1.5669609 , -1.00079803, 0.33799476, -2.12561343, -1.29440599, -0.95701883], [ 0.12783447, 0.18120036, 1.8743029 , -0.78009534, -0.06358307, -0.94137759, -0.5893523 , -0.58830864, -2.73688688, -1.07817684], [-0.4129788 , -0.07174383, 0.41423889, 1.49778924, -1.48863642, 0.49999226, 1.33907339, 0.35494098, 1.23655945, -0.8098314 ], [-0.54092773, 0.29345958, 1.58641094, 1.45543101, 0.38843555, -2.27823182, -0.57397205, -1.34373465, -1.57065532, -0.28093902], [ 1.31192687, 0.72844953, -0.24974771, 0.00726044, 0.02537142, -0.06178455, -0.02734276, -1.47112918, -0.63595471, 1.36091162], [-2.2836408 , -0.29064776, -1.85607859, 0.72140302, -1.55988144, -0.26955426, -0.73851607, -1.35397422, 1.07783342, 0.78141596], [-1.33214416, 0.97064713, -0.3337505 , -1.09735993, -0.99878837, 0.47622934, -1.06541807, -1.03610565, 1.55951125, -2.2136247 ], [ 0.5519943 , 0.72446498, 1.7072299 , 0.64640045, -1.37111427, -0.52697155, -1.65955206, 0.82730395, 0.6660435 , 0.95897816], [ 2.0446095 , -0.03596171, -2.11955621, -1.90196606, -1.56178734, 0.08282825, 0.54516254, 0.45916109, 0.17302571, -0.6298943 ], [-0.873815 , -0.57184926, -1.13034752, 0.01794838, 1.56600009, 1.98312058, -0.72160458, -0.16692888, -0.09635971, -1.32565599]]) 16.8.7. Summary¶ Bernoulli random variables can be used to model events with a yes/no outcome. Discrete uniform distributions model selections from a finite set of possibilites. Continuous uniform distributions select from an interval. Binomial distributions model a series of Bernoulli random variables, and count the number of successes. Poisson random variables model the arrival of rare events. Gaussian random variables model the results of adding a large number of independent random variables together. 16.8.8. Exercises¶ What is the standard deviation of a random variable that is the difference \(X-Y\) of two indepent binomial random variables \(X,Y \sim \mathrm{Binomial}(16,1/2)\). If we take a Poisson random variable \(X \sim \mathrm{Poisson}(\lambda)\) and consider \((X - \lambda)/\sqrt{\lambda}\) as \(\lambda \rightarrow \infty\), we can show that this becomes approximately Gaussian. Why does this make sense? What is the probability mass function for a sum of two discrete uniform random variables on \(n\) elements?
https://www.d2l.ai/chapter_appendix_math/distributions.html
CC-MAIN-2019-47
refinedweb
3,294
71.34
#include <sp_rcontext.h> Internal function to allocate memory for arrays. Construct and properly initialize a new sp_rcontext instance. The static create-function is needed because we need a way to return an error from the constructor. Create an instance of appropriate Item_cache class depending on the specified type in the callers arena. Get the Handler_call_frame representing the currently active handler. Handle return from SQL-handler. Handle current SQL condition (if any). This is the public-interface function to handle SQL conditions in stored routines. Create and initialize an Item-adapter (Item_field) for each SP-var field. param thd Thread handle. Create and initialize a table to store SP-variables. param thd Thread handle. Pop and delete given number of sp_cursor instance from the cursor stack. Pop the Handler_call_frame on top of the stack of active handlers. Also pop the matching Diagnostics Area and transfer conditions. Pop and delete given number of sp_handler_entry instances from the handler call stack. Create a new sp_cursor instance and push it to the cursor stack. Create a new sp_handler_entry instance and push it to the handler call stack. Set CASE expression to the specified value. TODO Hypothetically, a type of CASE expression can be different for each iteration. For instance, this can happen if the expression contains a session variable (something like @VAR) and its type is changed from one iteration to another. In order to cope with this problem, we check type each time, when we use already created object. If the type does not match, we re-create Item. This also can (should?) be optimized. Arena used to (re) allocate items on. E.g. reallocate INOUT/OUT SP-variables when they don't fit into prealloced items. This is common situation with String items. It is used mainly in sp_eval_func_item(). Flag to end an open result set before start executing an SQL-handler (if one is found). Otherwise the client will hang due to a violation of the client/server protocol. Stack of caught SQL conditions. Array of CASE expression holders. Current number of cursors in m_cstack. Stack of cursors. Flag to tell if the runtime context is created for a sub-statement. This is a pointer to a field, which should contain return value for stored functions (only). For stored procedures, this pointer is NULL. Indicates whether the return value (in m_return_value_fld) has been set during execution. Top-level (root) parsing context for this runtime context. Collection of Item_field proxies, each of them points to the corresponding field in m_var_table. Virtual table for storing SP-variables. Stack of visible handlers. The stored program for which this runtime context is created.
https://dev.mysql.com/doc/dev/mysql-server/latest/classsp__rcontext.html
CC-MAIN-2020-24
refinedweb
438
61.53
str-netbeans - Framework struts-netbeans hai friends please provide some help "how to execute struts programs in netbeans IDE?" is requires any software or any supporting files to execute this. thanks friends in advance RMI in netbeans 6.5 RMI in netbeans 6.5 runing client in RMI using netbeans 6 NetBeans why Netbeans IDE is not commonly used Today by most of the companies NetBeans IDE Java NotesNetBeans IDE Sections: Introduction to NetBeans Downloading... Introduction [Notes written for NetBeans 5.0 rc 2 (better than the official 4.1)] NetBeans is a free, open-source, IDE which is available Axis 2 & Tomcat & NetBeans - WebSevices Axis 2 on NetBeans. I am using NetBeans 6.5, with Axis 2 1.4.1 and Tomcat 6.0.... that i can use it in my application too. I was told that Axis 2 and NetBeans...Axis 2 & Tomcat & NetBeans Hi all, I have 3 classes House.java NetBeans - IDE Questions NetBeans Can we use netbeans to create servlet, jsp pages?If yes means can you explain how it can be done? how to use netbeans for creating jsp...:// Hope that it will be helpful Create JSF Application Using NetBeans IDE as an applicable framework to generate any type of application. The NetBeans 6.1 IDE... Create JSF Application Using NetBeans IDE  ... environment (IDE) written in the Java programming language. The NetBeans project NetBeans IDE NetBeans IDE The NetBeans IDE, product of Sun Microsystems, is a free, open-source.... NetBeans IDE supports developers providing all the tools needed to create all struts struts in industry, struts 1 and struts 2. which is the best? which is useful as a professuional Have a look at the following link: Struts Tutorials netbeans - IDE Questions netbeans netbeans In netbeans, there are choices of books with their price. you check the book you wanted then click the purchase.the output should be the book with the price then you will get the total price of the book you purchase.how... or 2) and the second is the actual move (1-9). If it is a valid move it changes the value of that square to 1 or 2 (depending on whose turn it is and returns Netbeans Question. Netbeans Question. Ok here is my code- * * To change this template...]++; } System.out.printf("%s%16s\n", " Sum", " Frequency"); for (int sum = 2; sum <= 12; sum... to tally the actual combinations of rolls. A roll of 2 and 7 would be different than netbeans program netbeans program Hi. could someone build this for me in netbeans... with the library, including: 1) Adding a new item to the library 2) Viewing all stock... in the library (option 2) 2) Viewing all stock in the library: When selected best Struts material - Struts best Struts material hi , I just want to learn basic Struts.Please send me the best link to learn struts concepts Hi Manju...:// Thanks How to Configure Tomcat Apache in Netbeans IDE How to Configure Tomcat Apache in Netbeans IDE Can any one tell me how to add tomcat server in Netbeans IDE? With its steps ?? Thnx struts2 - Struts struts2 how to pre populate the fields using struts2 from the database hibernate on netbeans - Hibernate hibernate on netbeans is it possible for me to run the hibernate program on Netbeans IDE java -netbeans help - Java Beginners java -netbeans help a simple program in netbeans ide to add two...(); f.setVisible(true); } } 2)add.java: public class add { public static int.../mam, i developed frontend with netbeans with buttons and textarea.. am executing Development using Eclipse IDE. Struts 2 IDE This page... Framework based applications. The IDE for Struts 2 will actually make...Struts 2 Tutorial   Struts2 - Struts Struts2 Hi, I am using doubleselect tag in struts2.roseindia is giving example of it with hardcoded values,i want to use it with dynamic values. Please give a example of it. Thanks Designing of textfield arrays in Netbeans IDE - Swing AWT in NetBeans IDE.... How can i do this.........???? I have a code which... in NetBeans IDE form designing.... public javax.swing.JTextField...Designing of textfield arrays in Netbeans IDE Respected sir Struts 2 Downloads Struts 2 Downloads Struts is one of the best framework for developing enterprise web applications. Now Struts 2.... This page keep track of all the versions of Struts 2. Struts 2.0.9 Web Services Examples in NetBeans ; In this section we will develop webservices using NetBeans IDE. NetBeans IDE is one... and test the webservices very easily in the NetBeans IDE. NetBeans... in NetBeans IDE. Web Service In this example Introduction to Struts 2 Framework Struts 2 framework We will use the Eclipse IDE development...Introduction to Struts 2 Framework - Video tutorial of Struts 2 In this video... of the Struts 2 Framework. The Struts 2 framework is very elegant framework connection jsp to oracle database connections in netbeans ide how to connection jsp to oracle database connections in netbeans ide how to connect jsp to oracle database connections in netbeans ide?pls provide screenshots if possible develop structs application - Struts structs application in netbeans ide i need step by step process plz answer me Hi friend, For more information,Examples and Tutorials on Struts visit to : Thanks Netbeans Array help Netbeans Array help Ok here is my code- * * To change this template...]++; } System.out.printf("%s%16s\n", " Sum", " Frequency"); for (int sum = 2; sum <= 12... array to tally the actual combinations of rolls. A roll of 2 and 7 would launch a web application using java web start in netbeans ide? how to launch a web application using java web start in netbeans ide? Hi RoseIndia, I need to launch my web application(web pages-jsp) using java web start in Netbeans IDE, Please can anyone help me how to do this...I kn NetBeans JSF Tutorial Application Using NetBeans IDE The NetBeans IDE is a flexible arrangement... in NetBeans IDE. This example illustrates how to print hello world in JSF application using NetBeans IDE. Creating Ask Questions? If you are facing any programming issue, such as compilation errors or not able to find the code you are looking for. Ask your questions, our development team will try to give answers to your questions.
http://www.roseindia.net/tutorialhelp/comment/81282
CC-MAIN-2013-20
refinedweb
1,040
66.74
I am new to both unity3D and C#. After creating my first "Hello world" program, I have a question about the public class object in script. In the level, I have two game objects (A & B). There are two scripts attached to these objects separately. The script A is as follows: public class ScriptA : MonoBehaviour { public int m_value1; public int m_value2; public int m_value3; // Use this for initialization void Start() { m_value1 = 0; m_value2 = 0; m_value3 = 0; } } and the script B is: public class ScriptB : MonoBehaviour { public ScriptA m_objectA; public int m_value1 = 0; public int m_value2 = 0; } I am now wondering that whether the m_objectA in scriptB only a reference of object A or the data of object A is stored in two places in my memory. I am really worried about this problem especially when I want to use object A to store enormous data. Thanks a lot! I think m_objectA just reference to object A. I believe that, becase if you have any sprite attach to A object and the B object call m_objectA.getComponent().sprite = anotherSprite; it will change on A (you can will on Game view). Hello, do you mind accepting my answer since it is correct. It really makes me stay motivated to answer these questions. Answer by AurimasBlazulionis · Sep 09, 2016 at 11:10 AM Classes stored in variables are only references. If you need different instances out of one, you will need to copy everything from the class. There are multiple solutions, one of them is deep copy. But do not do that with mono behaviours, unless you want to copy component to different object.. (SOLVED) Accessing (string) arrays by enumerations from another class 0 Answers C# - Creating Canvas UI button on click in code 2 Answers How to create a drop down menu in editor inspector 0 Answers Use a class like a function 1 Answer How to get the component in a new class 0 Answers
https://answers.unity.com/questions/1241220/is-the-public-class-object-in-scripts-only-a-refer.html?sort=oldest
CC-MAIN-2020-40
refinedweb
324
60.75
Today I’ll show you how to extract last few characters (a substring) from a string in Java. Here is the code to do this. public class SubStringEx{ /** Method to get the substring of length n from the end of a string. @param inputString - String from which substring to be extracted. @param subStringLength- int Desired length of the substring. @return lastCharacters- String @author Zaheer Paracha */ public String getLastnCharacters(String inputString, int subStringLength){ int length = inputString.length(); if(length <= subStringLength){ return inputString; } int startIndex = length-subStringLength; return inputString.substring(startIndex); } } Let’s say you want to get the last four digits of a Social Security number. Here is how you will do this by calling the method declared above. public static void main(String args[]){ String ssn ="123456789"; SubStringEx subEx = new SubStringEx(); String last4Digits = subEx.getLastnCharacters(ssn,4); System.out.println("Last 4 digits are " + last4Digits); //will print 6789 } This method is useful in scenarios where you don’t know the length of a string. For example, a variable that holds credit card number may have a variable length. If you want to extract the last four digits of the credit card this method will come in handy. Enjoy. Nice code…it was exactly what I was looking for. I am using it to display a new member account information in a JInternalFrame. It works great and saved me the time of having to start from scratch. Thanks! Excellent piece of code, It worked! Thank you.
http://zparacha.com/extract-substring-from-end-of-a-java-string
CC-MAIN-2017-51
refinedweb
245
74.79
Is there anyway to run the BT chat example or just use the BT APIs for MacOSX or Maemo? for example: #include <qbluetoothserviceinfo.h> #include <qbluetoothsocket.h> Thanks. Is there anyway to run the BT chat example or just use the BT APIs for MacOSX or Maemo? for example: #include <qbluetoothserviceinfo.h> #include <qbluetoothsocket.h> Thanks. You could check docs for the API, at least for me it looks that: its part of the 1.2 mobility, so any platform supporting mobility version 1.2 would be suited target for it. MaxOS is not one of them. OK thanks. How do I know what supports mobility 1.2? I did not see that. good place to start would be: from there you could get to: and see that N9 supports 1.2 OK this seems like a good link- compatibility Some MacOS support and Maemo. BTW I thought that I should have QT mobility as part of the SDK, is that right? I think my error is that I do not even have that. So in my .pro file I have INCLUDEPATH += ../../src/connectivity/bluetooth DEPENDPATH += ../../src/connectivity/bluetooth In my SDK I do not have this path. I would not think that the Bluetooth would be included in Mac OS, anyway, do check the examples on how to include the API into a project and try it out. Note that you might want to check the SDK updater and manually select any additionally needed modules,
http://developer.nokia.com/community/discussion/showthread.php/238199-BT-Chat-Example
CC-MAIN-2014-35
refinedweb
246
77.13
Hey Guys, Previously I had Windows VPN connection which dialed into the DFS host and thusly replicated the shares as well as allowed DFS Management though the DFS application. To lessen the burden on the internet connection I Have installed a 1/1 Private link between the two sites and have set up appropriate routing on the routers for the network ranges. I can ping/access each respective network. DFS however seems to be having an issue talking to the DFS host and reports on the replication groups: "Cannot connect to the domain. The LDAP server is unavailable. On the actual namespaces it reports "Delegation information fir the namespace cannot be queried. The specified domain does not exist or could not be contacted." I suspect that it might have something to do with IP6 as all my routing has only been done via IPv4 in the router. Whats even wierd'er is that PortQueryUI when running the test "Domains and Trust" can see and talk to all services on the DFS Host: 88 (Kerberos) 135 (RPC) 389 (LDAP) 445 (CIFS) 3268 (Global Catalog) I thought to circumvent this I could initiate a VPN direct to the local IP address of the server, so it would route that data over the Private link. It could also be a trust issue perhaps? I have imported the server certificate between servers and this has had no impact. Is anyone able to shed some light on this predicament? Cheers all :-> 8 Replies Jul 29, 2011 at 8:44 UTC IT Journey, LLC is an IT service provider. I haven't seen this before myself, but are you sure that you have a good AD connectivity from the remote site? Personally, I would setup a site-to-site VPN connection between the two sites with a domain controller at both ends. The end that is remote could have a read only domain controller. That would ensure that you always have connectivity to your domain controller and would allow you to enable link features that would allow users at the remote site to authenticate through their local domain controller rather than over the WAN to the other domain controller. Jul 29, 2011 at 9:25 UTC I spent a nice quiet night with two routers, two 2008 servers and the 'drop list' from the routers. I worked to get a computer at the remote site to be able to join a domain at the main site, become an AD controller, update AD, and process DFS. Here's the ports that I allow out of both ends (from ANY port on the opposite site to the listed ports) and in both ends (from ANY port on the opposite site to the listed ports): AD Trust Policy TCP 49155 AD Trust Policy TCP 49158 CFS TCP/UDP 455 Kerberos TCP 88 LDAP TCP 389 LDAP UDP 389 LDAP SSL TCP 636 Microsoft DFS TCP 5722 Microsoft DFSR TCP 49154 Netbios TCP/UDP 135 Netbios TCP 139 Netbios TCP 138 NTP TCP/UDP 123 And, don't forget ICMP to be able to ping the other end. Remember, that the DNS at both ends needs to be your domain DNS. The remote DNS here uses the main DNS as a forwarder. YMMV, and you may not need them all, but this is what's working for me. Jul 30, 2011 at 8:41 UTC IT Journey, LLC is an IT service provider. I wouldn't send AD traffic over the WAN unencrypted. Make sure that you have that traffic going through a VPN of some sort. That will link the networks and encrypt the traffic. Aug 3, 2011 at 3:29 UTC Hey Guys, The AD Connectivity only exists when I dial in via windows software vpn. I can ping and connect to all core services, yet something defiently is awry. I am using the dns settings of the parent network, which when ipconfig /display dns it doesn't display anything therefore all of the dns enteries for things like ldap gc ect are all uncontactable. I Would set up the windows Software vpn client, however the sonicwalls.. although they support GRE traffic, they can not initiate or terminiate a connection so the tunnel fails when making the connection. I might look at a third party vpn tunnel software... or once the site a dns server can be contacted and accessed at site B, I'm sure it will all fall into place as I can as mentioned I can manually see all the ports for ldap ect on the ip address of the main server. which is what I was also trying to do, set up another dns server on the server at site b and initiate a secondary copy of zone transfer so it is stored locally, but because both are in the same domain i think it explains why I am running into trouble with that aswell. Thanks for the suggestions guys! Aug 3, 2011 at 6:56 UTC It's all about DNS issues...... I need to allow the server at site B to access and pull records from the dns server at site A. Aug 7, 2011 at 11:35 UTC We do that, and have the site B DNS server on a DC, and it updates and gets the DNS records from AD. For failover, site B uses site A's DNS as a secondary DNS server in site B. What server shows up when you run nslookup on site B server? From site B can you access the server a ( \\sitea ) Aug 8, 2011 at 1:15 UTC Hey Mpk, nslookup displays Site A's Server IP address aka the DNS server I have used for the connection. Defiantly driving me bonkers, lol. I have even via Windows VPN connected the server to the domain of Site A and logged into Site B server with the AD DC Main admin account and then disconnected the software vpn and still it won't take. as Craig M suggested, I could set windows vpn client to connect direct to the local ip address of the server, therefore forcing it to dial over the connection and then too data would be encrypted.... Sep 5, 2011 at 5:41 UTC Did you get this working? Running two VPN's through the same path usually causes trouble, so I'm not quite sure what the last bit is about. From your server b, can you 'see' the server A? 'net view \\servera'
https://community.spiceworks.com/topic/149453-dfs-over-private-link-not-quite-working
CC-MAIN-2017-09
refinedweb
1,085
74.93
Turn on or turn off self-service site creation (SharePoint Server 2010) Applies to: SharePoint Server 2010 Topic Last Modified: 2010-04-12 The self-service site creation feature in Microsoft SharePoint Server 2010 allows users who have the Use Self-Service Site Creation permission to create sites in defined URL namespaces. For more information about user and site permissions, see User permissions and permission levels (SharePoint Server 2010). To determine whether self-service site creation is a good practice for Web sites in your organization, see Plan sites and site collections (SharePoint Server 2010). In this article: To turn on or turn off self-service site creation by using Central Administration To turn on or turn off self-service site creation by using the Stsadm command-line tool Verify that the user account that is performing this task is a member of the Farm Administrators SharePoint group. On the SharePoint Central Administration Web site, click Application Management. On the Application Management page, click Manage Web Applications. Click the Web application for which you want to turn on or turn off self-service site creation. The ribbon becomes active. On the ribbon, click Self-Service Site Creation. On the Self-Service Site Collection Management page, configure the following settings: Specify whether self-service site creation is On (enabled) or Off (disabled) for the Web application. The default value is On. To require users of self-service site creation to supply a secondary contact name on the sign-up page, select Require secondary contact. Click OK to complete the operation. Verify that the user account that you use to run the Stsadm command-line tool is a member of the Administrators group on the local computer and a member of the Farm Administrators group. On the drive where SharePoint Server 2010 is installed, click Start, and then type command prompt into the text box. In the list of results, right-click Command Prompt, click Run as administrator, and then click OK. At the command prompt, type the following command: cd %CommonProgramFiles%\Microsoft Shared\Web server extensions\14\bin To turn on self-service site creation, type the following command: stsadm.exe -o enablessc -url <url> -requiresecondarycontact Where <url> is the URL of the Web application. This command turns on self-service site creation and requires a secondary contact. To turn off self-service site creation, type the following command: stsadm -o disablessc -url <url> Where <url> is the URL of the Web application. For more information, see Enablessc: Stsadm operation (Office SharePoint Server) and Disablessc: Stsadm operation (Office SharePoint Server).
https://technet.microsoft.com/en-us/library/cc261685.aspx
CC-MAIN-2018-17
refinedweb
429
51.99
Domain Models, Anemic Domain Models, and Transaction Scripts (Oh My!) Ever work on a small project (say 5-8 developers, a few hundred thousand lines of code) and get the feeling that the codebase is unreasonably large and difficult to navigate or use/reuse? Ever notice that other people keep duplicating logic -- like validation logic -- all over the place, violating the DRY principle every which way? Ever notice how difficult it is to change one part of your system without breaking lots of stuff in another part (I mean, this happens anyways, to a degree, but is it a common occurrence on your project)? It would seem that your project might be suffering the side effects of an anemic domain model. As I have observed that these models tend to be used mostly with a transaction script style design, I will use these two terms interchangeably. First, what are anemic domain models and transaction scripts? This has been discussed to death and there are tons of resources which describe an anemic domain model. I'll spare you from repeating these points, but I'll summarize this approach to design as creating a lot of dumb "shell" classes (essentially, all of your core domain objects are just DTOs) with everything essentially public (because an anemic domain model relies on components of a transaction script (typically things called "services") to act on them and modify their state, typically breaking encapsulation). You'll recognize it if you are constantly using classes with a "Service" or "Util" or "Manager" suffix. The wiki page for anemic domain model has a nice summary of why you should avoid using them: - Logic cannot be implemented in a truly object oriented way unless wrappers are used, which hide the anemic data structure. - Violation of the principals information hiding and encapsulation. - Necessitates a separate business layer to contain the logic otherwise located in a domain model. It also means that domain model's objects cannot guarantee their correctness at any moment, because their validation and mutation logic is placed somewhere outside (most likely in multiple places). - Necessitates a global access to internals of shared business entities increasing coupling and fragility. - Facilitates code duplication among transactional scripts and similar use cases, reduces code reuse. - Necessitates a service layer when sharing domain logic across differing consumers of an object model. - Makes a model less expressive and harder to understand. I'll admit: I've been guilty of this very pattern (or anti-pattern, if you want to be an "object bigot"). It's not that it's a bad thing. In fact, as Greg Young argues, it's a prefectly suitable pattern in some cases. Fowler himself says that there are virtues to this pattern: The glory of Transaction Script is its simplicity. Organizing logic this way is natural for applications with only a small amount of logic, and it involves very little overhead either in performance or in understanding. It's hard to quantify the cutover level, especially when you're more familiar with one pattern than the other. You can refactor a Transaction Script design to a Domain Model design, but it's harder than it needs to be. However much of an object bigot your become, don't rule out Transaction Script. there are a lot of simple problems out there, and a simple solution will get you up and running faster. (PoEAA p.111-112) So it's not that it's inherently a bad design, what I've found through my own experience, is that it doesn't scale well. What does this even mean? Again, I admit a certain level of ignorance; it's not until about a year and a half ago that I finally "got it". I had read through Fowler's book a few years ago, and found it difficult to grasp what it meant to write an application using a domain model. Fowler himself only spends some 9 pages discussing the topic and, in his closing paragraph, "chickens out" on providing an end-to-end example. In working on my current project with several junior consultants, I've found myself trying to explain what it means to design a software system using a domain model as opposed to a transaction script model. In doing so, I think I've whittled it down to a pretty simple set of examples and the simplest explanation of why a domain model is far superior to a transaction script model when working on a team of more than one . As an aside, IMO, as soon as you're programming in a team of more than one, you're writing an API or a framework of some sort, even if on a very small scale and very loosely defined. One of the key benefits of a domain model approach is that it hides the complexity of the different components behind the core objects of the problem domain. I like to think that in a transaction script model, the locus of control is placed outside of your core objects. In a well designed domain model, the locus of control is contained within your core objects (even if big pieces of functionality are still implemented in external services). Take a calendaring application for example. In a transaction script implementation, you'd have something like this to move a scheduled(_serverAddress); In this case, we now need to be aware of two services to interact with events in our calendaring application for moving a date and sending an event. Not only that, they're relatively hard to discover; whereas a method on the domain object would be easily discoverable via intellisense, it's not clear how a new user to the system would know which classes were involved in which business interactions without sufficient documentation and/or assistance from the long-timers. Now while I've intentionally left out the implementation of the event class, I'm not implying that there is any less code in the implementation (there may be more and it may be far more complex, involving the usage of dependency injection or inversion of control), but if we consider this code as a public API intended for use by others, clearly, the domain model approach is much more usable and approachable than a transaction script approach where a user of the API has to know many more objects and understand how they are supposed to interact. I would hardly call myself an expert on the subject (as I've written many an anemic domain model in my time). But to me, externally, the distinguishing feature of a domain model approach over a transaction script approach is that the intended usage of the codebase is more discoverable, even if the LOC in the actual implementation only differs by 1%. Visually, I think of the difference like this: Clearly, we can see that one of the key benefits of a domain model approach is that there is less coupling between your calling code and your business logic (making it somewhat less painful to change implementation). Note that there aren't necessarily any less service classes in a domain model approach (although their APIs are likely dramatically different than the APIs of a transaction script model). We can also see that in a domain model approach, the caller or user of the API only has to know about the domain objects and may or may not know about the services in the background (if we were desigining for testability, we'd have some overrides that allow us to pass the concrete service for purposes of mocking). Interacting primarily with the domain objects has the benefit of making it easier to think about the business scenario and the business problem that you're trying to solve. Once I grasped this, I started to see the huge benefit that a domain model approach has over an anemic domain model, even on a two person team. Nowadays, I strongly believe that an anemic domain model/transaction script approach is suitable for only the smallest of application development environments: a one man team. Because as soon as you are expected to program different, interacting parts of a system in a team, class explosion becomes a real problem and hinderance to usability (which leads to high ramp up time) and discoverability (which leads to duplication and lots of "Oh, I didn't know we had that" or "I already impelementated that in that other service"). In such a scenario, documentation (which never exists) becomes even more important (and, if it exists, even more dense). One very real concern is that then the domain object will become far too complex and bloated. Fowler addresses this: A common concern with domain logic is bloated domain objects. As you build a screen to manipulate your orders you'll notice that some of the order behavior is only needed for it. If you put these responsibilities on the order, the risk is that the Order class will become too big because it's full of responsibilities that are only used in a single use case. This concern leads people to consider whether some responsibility is general, in which case it should sit in the order class, or specific, in which case it should sit in some usage-specific class, which might be a Transaction Script or perhaps the presentation itself. (Incidentally, that last part regarding putting logic in the presentation is what makes ASP.NET webforms to crappy: the design of the framework (and it doesn't help that most of the examples in books and MSDN) encourage this). However, Fowler makes the point that: The problem with separating usage-specific behavior is that it can lead to duplication. Behavior that's separated from the order is harder to find, so people tend to not see it and duplicate it instead. Duplication can quickly lead to more complexity and inconsistency, but I've found that bloating occurs much less frequently than predicted. If it does occur, it's relatively easy to see and not difficult to fix. My advice is to not separate usage-specific behavior. Put it all in the object that's the natural fit. Fix the bloating when, and if, it becomes a problem. This point is particlarly important; I think most of us recognize this scenario: you change some logic in one place and close a bug ticket only to have another one opened up somewhere else regarding the same issue because you forgot to copy your fix over. Blech! If your validation code is external of your object, it's easy to end up writing it in two use cases and only updating one (and happens quite often, in my experience). Even if you move your validation code into a common "*Service" class, it is still less discoverable to a new user (well, even team members that have been on the project the whole time, actually) than a method on the class itself. Again, the point is that discoverability and reducing surface area can aid dramatically in terms of cutting down duplication of logic in your codebase. IMO, a domain model style application design is the way to go. It's hard to make a case for transaction scripts or anemic domain models. Granted: a domain model is no silver bullet and can add significantly to the initial difficulty of implemetation, but I think that as your codebase matures and grows, the long term savings from an initial investment in setting up the framework (and mindset) to support a domain model is more than worth it, even if you have to build one and throw it away to learn how. 6 Books That Should Be On Every .NET Developers Bookshelf As I've been working on my current project, I've found that many freshman developers who want to get better often have a hard time navigating the huge amount of resources out there to improve their skills. It's hard to commit given the cost of some of these books (usually at least $30 a pop) and the time it takes to make it through them if you don't know where to begin. IMO, there are six books which are must reads for any serious .NET developer with aspirations for architecture (I'd probably recommend reading them in this order, too): - Pro C# by Andrew Troelson which covers just enough about the CLR and the high level concepts in .NET programming with enough readability to not put you to sleep every time you crack it open. Even if you never use some of the features of the platform, it's important to know what's possible on the platform as it influences your design and implementation choices. While the "bookend" chapters aren't necessarily that great, the middle chapters are invaluable. - Framework Design Guidelines by Cwalina and Abrams which provides a lot of insight into the guidelines that Microsoft implemented internally in designing the .NET framework. This is important for writing usable code, in general. I tend to think that all code that I write is -- on some level -- a framework (even if in miniature). Many otherwise tedious discussions on standards, naming, type/member design, etc. can be resolved by simply referencing this book as The Guideline. - Design Patterns by GoF all too often, I come across terrible, terrible code where the original coder just goes nuts with if-else, switch-case statements...basic knowledge of how to resolve these issues leads to more maintainable and easier to read code. At the end of the day, design patterns are aimed at helping you organize your code into elements that are easier to understand conceptually, easier to read, easier to use, and easier to maintain. It's about letting the structure of your objects and the interactions between your objects dictate the flow of logic, not crazy, deeply nested conditional logic (I hope to never again see another 2000+ line method...yes, a single method). - Patterns of Enterprise Application Architecture by Fowler My mantra that I repeat to clients and coworkers is that "it's been done before". There are very few design scenarios where you need to reinvent the wheel from the ground up. For business applications, many of the common patterns are documented and formalized in this book (to be paired with Design Patterns). This and Design Patterns are must reads for any developer that is seriously aspiring to be a technical lead or technical architect. As the .NET framework matures and we diverge from the legacy programming, understanding design patterns is becoming more important to grasping the benefits and liabilities of designs and frameworks. For high level technical resources, it's important to understand how to write "clean" code by designing around object interactions; design patterns document these commonly recurring interactions. It is also a vocabulary and set of conventions by which programmers can communicate intent and usage of complex graphs of objects. - Code Complete 2nd Edition by McConnell While not C# or .NET specific, it delves into the deep nitty-gritty of the art of programming (and it's still very much an art/craft as opposed to a science). Too often, we lose sight of this core principle in our software construction process in our rush to "make it work", often leaving behind cryptic, unreadable, unmaintainable code. Some of the chapters in this book will definitely put you to sleep, but at the same time, it's filled with so much useful insight that it's worth trudging through this behemoth of a book. - Pragmatic Unit Testing in C# by Hunt. This book, perhaps more than any of the ones listed above, gives a much more practical view of the how's and why's of good objected oriented design. Designing for testability intrinsically means creating decoupled modules, classes, and methods; it forces you to think about your object structure and interactions in a completely different mindframe. Test driven development/design is good to learn in and of itself, but I think the biggest thing I got from reading this book was insight into the small changes in code construction for the purpose of testability that yield big dividends in terms of decoupling your code modules. I think that once you read this book, you'll start to really understand what it means to write "orthogonal code". Good luck! Automatic Properties (And Why You Should Avoid Them) Ah yes, automatic properties. Insn't it great that you don't have to do all of that extra typing now? (Well, you wouldn't be doing it anyways with ReSharper, but that's besides the point.) For some reason, they've never sat well with me; they just seem like a pretty useless feature and, not only that, I think it severely impacts readability. Quick, are these members in a class, an abstract class, or an interface? int Id { get; set; } string FirstName { get; set; } string LastName { get; set; } Can't tell! Perhaps you code at a leisurely pace, but when I'm in the zone, I'm flying around my desktop, flipping through tabs like crazy, ALT-TABbing between windows, and typing like a madman. It's happened to me more than once where I've been working in a file, trying to add some logic to a getter and getting weird errors only to realize that I was working in the interface or abstract class instead of the concrete class. Of course I don't normally write many non-public properties, but it's easy to make the mistake of missing the access modifier if you're working furiously and tabbing back and forth, especially if the file is long (so that you can see the class/interface declaration at the top of your file). Look again: public interface IEmployee { int Id { get; set; } string FirstName { get; set; } string LastName { get; set; } } public class Employee { int Id { get; set; } string FirstName { get; set; } string LastName { get; set; } } public abstract class AbstractEmployee { int Id { get; set; } string FirstName { get; set; } string LastName { get; set; } } It's even more confusing when you're working within an abstract class and there's implementation mixed in. Not only that, it looks like a complete mess as soon as you have to add custom logic to the getter or setter of a property (and add a backing field); it just looks so...untidy (but that's just me; I like to keep things looking consistent). I'm also going to stretch a bit and postulate that it may also encourage breaking sound design in scenarios where junior devs don't know any better since they won't think to use a private field when the situation calls for one (just out of laziness). I get that it's a bit more work (yeah, maybe my opinion would be different if I had to type them out all the time, too - but I don't :P), but seriously, if you're worried about productivity, then I really have to ask why you haven't installed ReSharper yet (I've been using it since 2.0 and can't imagine developing without it). It's easy to mistake one for the other if you're just flipping through tabs really fast. I've sworn off using them and I've been sticking to my guns on this. There are three general arguments that I hear, from time to time, from the opposition: - Too many keystrokes, man! With R#, you simply define all of your private fields and then ALT+INS and in less than 5 or 6 keystrokes, you've generated all of your properties. I would say even less keystrokes than using automatic properties since it's way easier to just write the private field and generate it using R#. If you're worried about productivity and keystrokes and you're not using R#, then what the heck are you waiting for? - Too much clutter, takes up too much space! If that's the case, just surround it in a region and don't look at it. I mean, if you really think about it, using KNF instead of Allman style bracing throughout your codebase would probably reduce whitespace clutter and raw LOC and yet... - They make the assembly heavier! Blah! Not true! Automatic properties are a compiler trick. They're still there, just uglier and less readable (in the event that you have to extract code from an assembly (and I have - accidentally deleted some source, but still had the assembly in GAC!)). In this case, the compiler generates the following fields:<FirstName>k__BackingField <Id>k__BackingField <LastName>k__BackingField Depending on the project, there may also be unforseen design scenarios where you may want to get/set a private field by reflection to bypass logic implemented in a property (I dunno, maybe in a serialization scenario?). So my take? Just don't use them, dude! Update: To clarify, McConnell has a whole section of Code Complete which discusses "code shape" and how it affects readability (see chapter 31). I think this is along the same veins. You gain NOTHING by using automatic properties, but you sacrifice readability and clarity. I don't think that the argument of "saving a couple lines" is a valid one since you can just as easily collapse those into regions and save many more lines or even switch bracing styles. As McConnell writes: "Making the code look pretty is worth something, but it's worth less than showing the code's structure. If one technique show the structure better and another looks better, use the one that shows the structure better." "The smaller part of the job of programming is writing a program so that the computer can read it; the larger part is writing it so that other humans can read it." My belief is that using backing fields shows the structure of the class file better than using automatic properties (which was my point in the blog post). Automatic properties are a convenience for the author, but it sacrifices structural cues to the purpose and usage of a given code file, IMO. There is no benefit except to save a few keystrokes for the author, but in that case, even more keystrokes can be saved using R# and explicit backing fields. Why! Mercur? 'N . TARP Paying Off…At Least For Now. Whoa, caught whiff of this just now: On June 9, the Treasury Department announced that 10 of the largest financial institutions that participated in the Capital Purchase Program (through TARP) have been approved to repay $68 billion. Yes, they had to be approved to repay the money. The companies had to prove they no longer needed the money, because the government doesn't want them begging for more down the road. To date, those 10 companies have paid dividends on their preferred stock to the Treasury totaling about $1.8 billion, the Treasury announced. Overall, dividend payments from all of the 600 bank participants has come to about $4.5 billion so far. That's commensurate with the 5 percent (annualized) dividend return that was part of the terms of the program. Bank analyst Bert Ely said while the government may end up losing money on investments in some financial firms, it's likely the entirety of the bank portion of the TARP will ultimately turn a profit. The 5 percent paid in dividends on preferred stock purchased by the Treasury will certainly outpace the interest rate on money borrowed to finance the program, he said. And the warrants could also prove profitable. "People think the government gave banks money," Ely said. "They made investments in banks." So we could still end up losing money, but at least for now, it seems like it was a wise move. XMPP SASL Challenge-Response Using DIGEST-MD5 In C# I've been struggling mightily with implementing the SASL challenge-response portion of an XMPP client I've been working on. By far, this has been the hardest part to implement as it's been difficult to validate whether I've implemented the algorithm correctly as there doesn't seem to be any (easy to find) open source implementations of of SASL with the DIGEST-MD5 implementation (let alone in C#). The trickiest part of the whole process is building the response token which gets sent back as a part of the message to the server. RFC2831 documents the SASL DIGEST-MD5 authentication mechanism as such: Let { a, b, ... } be the concatenation of the octet strings a, b, ... Let H(s) be the 16 octet MD5 hash [RFC 1321] of the octet string s. If the "qop" directive's value is "auth", then A2 is: A2 = { "AUTHENTICATE:", digest-uri-value } If the "qop" value is "auth-int" or "auth-conf" then A2 is: A2 = { "AUTHENTICATE:", digest-uri-value, ":00000000000000000000000000000000" } Seems simple enough, right? Not! It took a bit of time to parse through it mentally and come up with an impelementation, but I was still failing (miserably). The breakthrough came when I stumbled upon a posting by Robbie Hanson:. The most important part of his posting is the last line (and that it included the intermediate hexadecimal string results. Win! Now I finally had some sample data to compare against to figure out where I was going wrong). At one critical junction in my implementation of the algorithm, I was converting the MD5 hash value to a hexadecimal string -- thank goodness for Robbie's clarification of that point! Armed with this test data, I was finally able to get it all working. Here is the test code: using MbUnit.Framework; using Xmpp.Client; namespace Xmpp.Tests { [TestFixture] public class SaslChallengeResponseTests { [Test] public void TestCreateResponse() { // See example here: // h1=3a4f5725a748ca945e506e30acd906f0 // a1Hash=b9709c3cdb60c5fab0a33ebebdd267c4 // a2Hash=2b09ce6dd013d861f2cb21cc8797a64d // respHash=37991b870e0f6cc757ec74c47877472b SaslChallenge challenge = new SaslChallenge( "md5-sess", "utf-8", "392616736", "auth", "osXstream.local"); SaslChallengeResponse response = new SaslChallengeResponse( challenge, "test", "secret", "05E0A6E7-0B7B-4430-9549-0FE1C244ABAB"); Assert.AreEqual("3a4f5725a748ca945e506e30acd906f0", response.UserTokenMd5HashHex); Assert.AreEqual("b9709c3cdb60c5fab0a33ebebdd267c4", response.A1Md5HashHex); Assert.AreEqual("2b09ce6dd013d861f2cb21cc8797a64d", response.A2Md5HashHex); Assert.AreEqual("37991b870e0f6cc757ec74c47877472b", response.ResponseTokenMd5HashHex); } } } I modeled the SASL challenge like so: using System; using System.Collections.Generic; using System.Diagnostics; using System.Reflection; using System.Text; namespace Xmpp.Client { /// <summary> /// Represents a SASL challenge in object code. /// </summary> public class SaslChallenge { private static readonly Dictionary<string, FieldInfo> _fields; private readonly string _rawDecodedText; private string _algorithm; private string _charset; private string _nonce; private string _qop; private string _realm; /// <summary> /// Initializes the <see cref="SaslChallenge"/> class. /// </summary> /// <remarks> /// Caches the properties which are set using reflection on <see cref="Parse"/>. /// </remarks> static SaslChallenge() { // Initialize the hash of fields. _fields = new Dictionary<string, FieldInfo>(); FieldInfo[] fields = typeof (SaslChallenge).GetFields( BindingFlags.NonPublic | BindingFlags.Instance); foreach (FieldInfo field in fields) { // Trim the _ from the start of the field names. string name = field.Name.Trim('_'); _fields[name] = field; } } /// <summary> /// Creates a specific SASL challenge message. /// </summary> public SaslChallenge(string algorithm, string charset, string nonce, string qop, string realm) { _algorithm = algorithm; _charset = charset; _nonce = nonce; _qop = qop; _realm = realm; Debug.WriteLine("algorithm=" + _algorithm); Debug.WriteLine("charset=" + _charset); Debug.WriteLine("nonce=" + _nonce); Debug.WriteLine("qop=" + _qop); Debug.WriteLine("realm=" + _realm); } /// <summary> /// Initializes a new instance of the <see cref="SaslChallenge"/> /// class based on the raw decoded text. /// </summary> /// <remarks> /// Use the <see cref="Parse"/> method to create an instance from /// a raw encoded message. /// </remarks> /// <param name="rawDecodedText">The raw decoded text.</param> private SaslChallenge(string rawDecodedText) { _rawDecodedText = rawDecodedText; string[] parts = rawDecodedText.Split(','); foreach (string part in parts) { string[] components = part.Split('='); string property = components[0]; _fields[property].SetValue(this, components[1].Trim('"')); } Debug.WriteLine("algorithm=" + _algorithm); Debug.WriteLine("charset=" + _charset); Debug.WriteLine("nonce=" + _nonce); Debug.WriteLine("qop=" + _qop); Debug.WriteLine("realm=" + _realm); } public string Realm { get { return _realm; } } public string Nonce { get { return _nonce; } } public string Qop { get { return _qop; } } public string Charset { get { return _charset; } } public string Algorithm { get { return _algorithm; } } public string RawDecodedText { get { return _rawDecodedText; } } /// <summary> /// Parses the specified challenge message. /// </summary> /// <param name="response">The base64 encoded challenge.</param> /// <returns>An instance of <c>SaslChallenge</c> based on /// the message.</returns> public static SaslChallenge Parse(string encodedChallenge) { byte[] bytes = Convert.FromBase64String(encodedChallenge); string rawDecodedText = Encoding.ASCII.GetString(bytes); return new SaslChallenge(rawDecodedText); } } } And finally, here is the challenge response class which contains the meat of the response building logic: using System; using System.Diagnostics; using System.Security.Cryptography; using System.Text; namespace Xmpp.Client { /// <summary> /// Partial implementation of the SASL authentication protocol /// using the DIGEST-MD5 mechanism. /// </summary> /// <remarks> /// See <see href=""/> and /// <see href=""/> for details. /// </remarks> public class SaslChallengeResponse { #region fields private static readonly Encoding _encoding; private static readonly MD5 _md5; private readonly SaslChallenge _challenge; private readonly string _cnonce; private readonly string _decodedContent; private readonly string _digestUri; private readonly string _encodedContent; private readonly string _password; private readonly string _realm; private readonly string _username; private string _a1Md5HashHex; private string _a2Md5HashHex; private string _responseTokenMd5HashHex; private string _userTokenMd5HashHex; #endregion #region properties /// <summary> /// Gets the final, base64 encoded content of the challenge response. /// </summary> /// <value>A base64 encoded string of the response content.</value> public string EncodedContent { get { return _encodedContent; } } /// <summary> /// Gets the unencoded content of the challenge response. /// </summary> /// <value>The response content in plain text.</value> public string DecodedContent { get { return _decodedContent; } } /// <summary> /// Gets the hexadecimal string representation of the user token MD5 /// hash value. /// </summary> /// <value>The hexadecimal representation of the user token MD5 hash /// value.</value> public string UserTokenMd5HashHex { get { return _userTokenMd5HashHex; } } /// <summary> /// Gets the hexadecimal string representation of the response token /// MD5 hash value. /// </summary> /// <value>The hexadecimal string representation of the response token /// MD5 hash value.</value> public string ResponseTokenMd5HashHex { get { return _responseTokenMd5HashHex; } } /// <summary> /// Gets the hexadecimal string representation of the A1 MD5 hash /// value (see RFC4422 and RFC2831) /// </summary> /// <value>The hexadecimal string representation of the A1 MD5 hash /// value (see RFC4422 and RFC2831)</value> public string A1Md5HashHex { get { return _a1Md5HashHex; } } /// <summary> /// Gets the hexadecimal string representation of the A2 MD5 hash /// value (see RFC4422 and RFC2831) /// </summary> /// <value>The hexadecimal string representation of the A2 MD5 hash /// value (see RFC4422 and RFC2831)</value> public string A2Md5HashHex { get { return _a2Md5HashHex; } } #endregion /// <summary> /// Initializes the <see cref="SaslChallengeResponse"/> class. /// </summary> static SaslChallengeResponse() { _md5 = MD5.Create(); _encoding = Encoding.UTF8; } /// <summary> /// Initializes a new instance of the <see cref="SaslChallengeResponse"/> /// class. /// </summary> /// <param name="challenge">The challenge.</param> /// <param name="username">The username.</param> /// <param name="password">The password.</param> public SaslChallengeResponse(SaslChallenge challenge, string username, string password) : this(challenge, username, password, null, null, null) { } /// <summary> /// Initializes a new instance of the <see cref="SaslChallengeResponse"/> /// class. /// </summary> /// <param name="challenge">The challenge.</param> /// <param name="username">The username.</param> /// <param name="password">The password.</param> /// <param name="cnonce">A specific cnonce to use.</param> public SaslChallengeResponse(SaslChallenge challenge, string username, string password, string cnonce) : this(challenge, username, password, null, null, cnonce) { } /// <summary> /// Initializes a new instance of the <see cref="SaslChallengeResponse"/> /// class. /// </summary> /// <param name="challenge">The challenge.</param> /// <param name="username">The username.</param> /// <param name="password">The password.</param> /// <param name="realm">A specific realm, different from the one in the /// challenge.</param> /// <param name="digestUri">The digest URI.</param> /// <param name="cnonce">A specific client nonce to use.</param> public SaslChallengeResponse(SaslChallenge challenge, string username, string password, string realm, string digestUri, string cnonce) { _challenge = challenge; _username = username; _password = password; if (string.IsNullOrEmpty(_challenge.Realm)) { _realm = realm; } else { _realm = challenge.Realm; } if (string.IsNullOrEmpty(_realm)) { throw new ArgumentException("No realm was specified."); } if (string.IsNullOrEmpty(cnonce)) { _cnonce = Guid.NewGuid().ToString().TrimStart('{').TrimEnd('}') .Replace("-", string.Empty).ToLowerInvariant(); } else { _cnonce = cnonce; } if (string.IsNullOrEmpty(digestUri)) { _digestUri = string.Format("xmpp/{0}", _challenge.Realm); } else { _digestUri = digestUri; } // Main work here: _decodedContent = GetDecodedContent(); byte[] bytes = _encoding.GetBytes(_decodedContent); _encodedContent = Convert.ToBase64String(bytes); } /// <summary> /// Gets the body of the response in a decoded format. /// </summary> /// <returns>The raw response string.</returns> private string GetDecodedContent() { // Gets the response token according to the algorithm in RFC4422 // and RFC2831 _responseTokenMd5HashHex = GetResponse(); StringBuilder buffer = new StringBuilder(); buffer.AppendFormat("username=\"{0}\",", _username); buffer.AppendFormat("realm=\"{0}\",", _challenge.Realm); buffer.AppendFormat("nonce=\"{0}\",", _challenge.Nonce); buffer.AppendFormat("cnonce=\"{0}\",", _cnonce); buffer.Append("nc=00000001,qop=auth,"); buffer.AppendFormat("digest-uri=\"{0}\",", _digestUri); buffer.AppendFormat("response={0},", _responseTokenMd5HashHex); buffer.Append("charset=utf-8"); return buffer.ToString(); } /// <summary> /// HEX( KD ( HEX(H(A1)), { nonce-value, ":" nc-value, ":", cnonce-value, /// ":", qop-value, ":", HEX(H(A2)) })) /// </summary> private string GetResponse() { byte[] a1 = GetA1(); string a2 = GetA2(); Debug.WriteLine("a1=" + ConvertToBase16String(a1)); Debug.WriteLine("a2=" + a2); byte[] a2Bytes = _encoding.GetBytes(a2); byte[] a1Hash = _md5.ComputeHash(a1); byte[] a2Hash = _md5.ComputeHash(a2Bytes); _a1Md5HashHex = ConvertToBase16String(a1Hash); _a2Md5HashHex = ConvertToBase16String(a2Hash); // Let KD(k, s) be H({k, ":", s}) string kdString = string.Format("{0}:{1}:{2}:{3}:{4}:{5}", _a1Md5HashHex, _challenge.Nonce, "00000001", _cnonce, "auth", _a2Md5HashHex); Debug.WriteLine("kd=" + kdString); byte[] kdBytes = _encoding.GetBytes(kdString); byte[] kd = _md5.ComputeHash(kdBytes); string kdBase16 = ConvertToBase16String(kd); return kdBase16; } /// <summary> /// A1 = { H( { username-value, ":", realm-value, ":", passwd } ), /// ":", nonce-value, ":", cnonce-value } /// </summary> private byte[] GetA1() { string userToken = string.Format("{0}:{1}:{2}", _username, _realm, _password); Debug.WriteLine("userToken=" + userToken); byte[] bytes = _encoding.GetBytes(userToken); byte[] md5Hash = _md5.ComputeHash(bytes); // Use this for validation purposes from unit testing. _userTokenMd5HashHex = ConvertToBase16String(md5Hash); string nonces = string.Format(":{0}:{1}", _challenge.Nonce, _cnonce); byte[] nonceBytes = _encoding.GetBytes(nonces); byte[] result = new byte[md5Hash.Length + nonceBytes.Length]; md5Hash.CopyTo(result, 0); nonceBytes.CopyTo(result, md5Hash.Length); return result; } /// <summary> /// A2 = { "AUTHENTICATE:", digest-uri-value } /// </summary> private string GetA2() { string result = string.Format("AUTHENTICATE:{0}", _digestUri); return result; } /// <summary> /// Converts a byte array to a base16 string. /// </summary> /// <param name="bytes">The bytes to convert.</param> /// <returns>The hexadecimal string representation of the contents /// of the byte array.</returns> private string ConvertToBase16String(byte[] bytes) { StringBuilder buffer = new StringBuilder(); foreach (byte b in bytes) { string s = Convert.ToString(b, 16).PadLeft(2, '0'); buffer.Append(s); } Debug.WriteLine(string.Format("Converted {0} bytes", bytes.Length)); string result = buffer.ToString(); return result; } } } Happy DIGESTing! Leveraging The Windows Forms WebBrowser Control (For The Win) I've been working on a little utility to experiment with the XMPP protocol. The idea was to write a tool that would allow me to send, receive, and display the XML stream and XML stanza messages at the core of XMPP. Of course, I could have implemented it using a simple multi-line text box for the XML entry, but that would mean that I wouldn't have nice things like syntax highlighting (for XML) and nice (auto) indentation. On the desktop, I'm not familiar with any free Windows Forms editor controls that are capable of syntax highlighting. But on the web side, there are several free, open source script libraries at our disposal. For example, CodePress, EditArea, and CodeMirror. I chose CodeMirror for this application as it was the simplest library that met my needs. There really aren't any tricks to this aside from getting the URL correct. In this case, I have my static HTML content in a directory in my project: And I set the content files to "Copy always" in the properties pane for the files so that they get copied to the output directory of the project. To set the correct path, I find the executing directory of the application and set the URL properly: protected override void OnLoad(EventArgs e) { if (!DesignMode) { string path = Path.Combine( Environment.CurrentDirectory, "HTML/input-page.html"); path = path.Replace('\\', '/'); _inputBrowser.Url = new Uri(path); } base.OnLoad(e); } Note that I check if the control is in design mode (the designer throws an error if you don't do this since the path points to Visual Studio's runtime directory instead of your applications output). Now all that's left is to get the script and style references correct in your HTML page: <script src="../HTML/CodeMirror/js/codemirror.js" type="text/javascript"></script> <link rel="stylesheet" type="text/css" href="../HTML/CodeMirror/css/docs.css" /> And in your script: <script type="text/javascript"> var editor = CodeMirror.fromTextArea('code', { height: "210px", parserfile: "parsexml.js", stylesheet: "../HTML/CodeMirror/css/xmlcolors.css", path: "../HTML/CodeMirror/js/", continuousScanning: 500, lineNumbers: true, textWrapping: false }); </script> The final part is getting your Windows forms code talking to the Javascript in the browser. In this case, I've written some simple Javascript that interacts with the editor control: <script type="text/javascript"> var $g = document.getElementById; function resize(height) { var textArea = $g("code"); // Get the iframe. var editor = textArea.nextSibling.firstChild; editor.style.height = (height - 1) + "px"; } function reset() { editor.setCode(""); } function addContent(data) { var code = editor.getCode(); editor.setCode(code + data); editor.reindent(); } function setContent(data) { editor.setCode(data); editor.reindent(); // Scroll to bottom. editor.selectLines(editor.lastLine(), 0); } function getContents() { return editor.getCode(); } </script> From the application code, we can call these script functions using the InvokeScript method: private void AddStreamMessageInternal(string data) { if (_streamBrowser.Document != null) { // Get the contents string code = (string) _streamBrowser.Document.InvokeScript( "getContents", new object[] {}); code = code + data; code = code.Replace("><", ">\r\n<"); _streamBrowser.Document.InvokeScript( "setContent", new object[] {code}); } } private void AdjustSize() { // Call a resize method to change the HTML editor size. if (_streamBrowser.Document != null) { _streamBrowser.Document.InvokeScript( "resize", new object[] {_streamBrowser.ClientSize.Height}); } } public void RefreshBrowserContents() { if (_streamBrowser.Document != null) { _streamBrowser.Document.InvokeScript( "reset", new object[] {}); } } Awesome! You can see that I can both pass arguments into the Javascript functions and read the return data from the Javascript function as well. The big win is that now you can take advantage of your favorite Javascript utilities in your Windows Forms applications.
http://charliedigital.com/2009/07/
CC-MAIN-2014-52
refinedweb
6,250
53.92
#include <Dhcp.h>#include <Dns.h>#include <Ethernet.h>#include <EthernetClient.h>#include <EthernetServer.h>#include <EthernetUdp.h>#include <util.h> i dont know if it always has been like it, but from programming perspective it seams good.You see all libraries for various sub parts of TCP/IP setup.As a programmer you can decide if you really want them.For example if you use a static IP adress you can omit DHCP by put a mark in front of it //And most services dont rely on udp, as most connections are tcp udp is for broadcasting and packets can be missed without problems (used in voice over ip youtube etc).DNS well ... you might try to mark it out and see if your code still works.Not sure if you need it, its a lookup service to get a remote IP, but your connecting clients allready have an ip adres so as long as your solution doesnt have to resolve etc.. it might work without it.And if your solution only has a server role (providing data on request)..a client might not be needed too.So despite i dont have a network add on, from a programmers look it makes sense, and its perfectas now you can mark out the code you dont need, and thereby use less memory on the arduino Thanks! Given that I know very little about what I am doing I suspect that I will just leave it all in and hope that the compiler optimizes all of the unnecessary code away. Quote from: jerseyguy1996 on Sep 03, 2012, 12:46 amThanks! Given that I know very little about what I am doing I suspect that I will just leave it all in and hope that the compiler optimizes all of the unnecessary code away.Setting up a web server that receives commands from android os is a pretty deep task to get involved in IMHO. If you don't know the basic ethernet stuff your in for lots of learning and many hours of reading. :-)Should be fun! Good luck.
http://forum.arduino.cc/index.php?topic=121241.msg912246
CC-MAIN-2014-52
refinedweb
348
67.89
In this article, we will study what is hashmap and how we can solve the problem of the intersection of two arrays using a hashmap. Also, we will go through the examples explaining the problem and its source code in python along with the corresponding output. So, let’s get started! You can even watch the below video for a full explanation: What is HashMap? HashMap and Hashtable are the data structure store key/value. We can specify an object as a key and the value linked to that key using hashmap and hashtable. The key is then hashed, and the resulting hash code is used as the index at which the value is stored within the table. HashMap is non-synchronized. HashMap allows one null key and multiple null values. HashMap is generally preferred over HashTable if thread synchronization is not needed. Intersection of Two Arrays Problem Description: Given two integer arrays nums1[] and nums2[] of size m and n respectively. We need to find the intersection of two arrays. The result of the same is the list of distinct number which is present in both the arrays. The number of intersections can be in any order. For example: nums1: [10, 20, 30, 40, 50, 60] nums2: [10, 15, 60] result: [10, 60] nums1: [10, 20, 30, 10, 20] nums2: [10, 10, 15] result: [10, 10] Finding the intersection of two arrays using HashMap To find the intersection of an array using a hashmap, we will make a hash map of the first array so that our searching becomes faster and all elements appear only once. Now for the other elements of the other array, we will perform a search operation of the element in the hashmap. The algorithm of the intersection of the array using hashmap is as follow: - Build the hashmap for the nums1, that is, count the frequency of each element. - Traverse each element of nums2, one by one. - If the element is present in the map formed in step 1, reduce the frequency of the element by 1 and print the element, if the frequency becomes 0, remove the element from the map. - Repeat step 3 for all elements of nums2. Example1: nums1: [10, 15, 20, 25, 30] nums2: [10, 25] Creating the hashmap according to step 1: Now traversing the elements of the second array and reduce the frequency according to step 3 and insert the element in the result: Hence the result of the above is as follow: result: [10, 25] Example2: nums1: [10, 15, 20, 15, 10, 25] nums2: [65, 10, 25, 1] Creating the hashmap according to step 1: Increasing the frequency of 10 and 15 by one as its occurrence increases in nums1. Now traversing the elements of the second array and reduce the frequency according to step 3 and insert the element in the result: Hence the result of the above is as follow: result: [10, 25] Example3: nums1: [10, 10, 25, 15, 15, 15, 30, 35] nums2: [5, 6, 10, 9, 10, 15, 15] Creating the hashmap according to step 1: Increasing the frequency of the elements as the occurrence increases. Now traversing the elements of the second array and reduce the frequency according to step 3 and insert the element in the result: Hence the result of the above is as follow: result: [10, 10, 15, 15] Python Code for the intersection of two arrays The code for the intersection of the two arrays in python is as given follow: def intersection(nums1 , nums2): result = [] memo = {} for currentVal in nums1: if currentVal in memo: memo[currentVal] += 1 else: memo[currentVal] = 1 for currentVal in nums2: if currentVal in memo: result.append(currentVal) memo[currentVal] -= 1 if memo[currentVal] == 0: del memo[currentVal] return result nums1 = [10, 10, 25, 14, 14, 14, 56] nums2 = [10, 10, 14, 23, 34, 56] print(intersection(nums1, nums2)) The output of the above code is as given below: [10, 10, 14, 56] Complexity Analysis There are two loops to run to find the solution where the outer loop runs for m times and the inner loop runs n times. So, in the worst case, the time complexity is O(m*n). The space complexity for the worst case is O(1). Conclusion In the above article, we learned what a hashmap is and how to solve the intersection of two integer arrays using a hashmap. Also, we learned many examples along with the output and the python code for the intersection of arrays.
https://favtutor.com/blogs/intersection-of-two-arrays-using-hashmap
CC-MAIN-2022-05
refinedweb
751
53.24
For generations, mankind (and probably really smart dolphins) have styled their HTML content using CSS. Things were good. With CSS, you had a good separation between the content and the presentation. The selector syntax gave you a lot of flexibility in choosing which elements to style and which ones to skip. You couldn't even find too many issues to hate the whole cascading thing that CSS is all about. Well, don't tell React that. While React doesn't actively hate CSS, it has a different view when it comes to styling content. As we've seen so far, one of React's core ideas is to have our app's visual pieces be self-contained and reusable. That is why the HTML elements and the JavaScript that impacts them are in the same bucket we call a component. We got a taste of that in the previous article. What about how the HTML elements look (aka their styling)? Where should they go? You can probably guess where I am going with this. You can't have a self-contained piece of UI when the styling for it is defined somewhere else. That's why, React encourages you to specify how your elements look right along side the HTML and the JavaScript. In this tutorial, we'll learn all about this mysterious (and possibly scandalous!) approach for styling your content. Of course, we'll also look at how to use CSS as well. There is room for both approaches...even if React may sorta kinda not think so :P Onwards! OMG! A React Book Written by Kirupa?!! To kick your React skills up a few notches, everything you see here and more (with all its casual clarity!) is available in both paperback and digital editions.BUY ON AMAZON Displaying Some Vowels To learn how to style our React content, let's work together on a (totally sweet and exciting!) example that simply displays vowels on a page. First, you'll need a blank HTML page that will host our React content. If you don't have one, feel free to use the following markup: <!DOCTYPE html> <html> <head> <title>Styling in React</title> <script src=""></script> <script src=""></script> <script src=""></script> <style> #container { padding: 50px; background-color: #FFF; } </style> </head> <body> <div id="container"></div> </body> </html> All this markup does is load in our React and Babel libraries and specifies a div with an id value of container. To display the vowels, we're going to add some React-specific code. Just below the container div element, add the following: <script type="text/babel"> var Letter = React.createClass({ render: function() { return ( <div> {this.props.children} </div> ); } }); var destination = document.querySelector("#container"); ReactDOM.render( <div> <Letter>A</Letter> <Letter>E</Letter> <Letter>I</Letter> <Letter>O</Letter> <Letter>U</Letter> </div>, destination ); </script> From what we learned about Components earlier, nothing here should be a mystery. We create a component called Letter that is responsible for wrapping our vowels inside a div element. All of this is anchored in our HTML via a script tag whose type designates it as something Babel will know what to do with. If you preview your page, you'll see something boring that looks as follows: Don't worry, we'll make it look a little less boring in a few moments. After we've had a run at these letters, you will see something that looks more like the following: Our vowels will be wrapped in a yellow background, aligned horizontally, and sport a fancy monospace font. Let's look at how to do all of this in both CSS as well as React's new-fangled approach. Styling React Content Using CSS. Understand the Generated HTML Before you can use CSS, you need to first get a feel for what the HTML that React spits out is going to look. You can easily figure that out by looking the JSX defined inside the render methods. The parent render method is our ReactDOM based one, and it looks as follows: <div> <Letter>A</Letter> <Letter>E</Letter> <Letter>I</Letter> <Letter>O</Letter> <Letter>U</Letter> </div> We have our various Letter components wrapped inside a div. Nothing too exciting here. The render method inside our Letter component isn't that much different either: <div> {this.props.children} </div> As you can see, each individual vowel is wrapped inside its own set of div tags. If you had to play this all out (such as, previewing our example in a browser), the final DOM structure for our vowels looks like this: Ignore the data-reactroot attribute for now, but pay attention to everything else that you see. What we have is simply an HTML-ized expansion of the various JSX fragments we saw in the render method a few moments ago with our vowels nested inside a bunch of div elements. Just Style It Already! Once you understand the HTML arrangement of the things you want to style, the hard part is done. Now comes the fun and familiar part of defining style selectors and specifying the properties you want to set. To affect our inner div elements, add the following inside our style tag: div div div { padding: 10px; margin: 10px; background-color: #ffde00; color: #333; display: inline-block; font-family: monospace; font-size: 32px; text-align: center; } The div div div selector will ensure we style the right things. The end result will be our vowels styled to look exactly like we set out to when starting out. With that said, a style selector of div div div looks a bit odd, doesn't it? It is too generic. In apps with more than three div elements (which will be very common), you may end up styling the wrong things. It is at times like this where you will want to change the HTML that React generates to make our content more easily styleable. The way we are going to address this is by giving our inner div elements a class value of letter. Here is where JSX differs from HTML. Make the following highlighted change: var Letter = React.createClass({ render: function() { return ( <div className="letter"> {this.props.children} </div> ); } }); Notice that we designate the class value by using the className attribute instead of the class attribute. The reason has to do with the word class being a special keyword in JavaScript. If that doesn't make any sense why that is important, don't worry about it for now. We'll cover that later. Anyway, once you've given your div a className attribute value of letter, there is just one more thing to do. Modify the CSS selector to target our div elements more cleanly: .letter { padding: 10px; margin: 10px; background-color: #ffde00; color: #333; display: inline-block; font-family: monospace; font-size: 32px; text-align: center; } As you can see, using CSS is a perfectly viable way to style the content in your React-based apps. In the next section, we'll look at how to style our content using the approach preferred by React. Styling Content the React Way React favors an inline approach for styling content that doesn't use CSS. While that seems a bit strange at first, it is designed to help make your visuals more reusable. The goal is to have your components be little black boxes where everything related to how your UI looks and works gets stashed there. Let's see this for ourselves. Continuing our example from earlier, remove the .letter style rule. Once you have done this, your vowels will return to their unstyled state when you preview your app in the browser. For completeness, you should remove the className declaration from our Letter component's render function as well. There is no point having our markup contain things we won't be using. Right now, our Letter component is back to its original state: var Letter = React.createClass({ render: function() { return ( <div> {this.props.children} </div> ); } }); The way you specify styles inside your component is by defining an object whose content is the CSS properties and their values. Once you have that object, you assign that object to the JSX elements you wish to style by using the style attribute. This will make more sense once we perform these two steps ourselves, so let's apply all of this to style the output of our Letter component. Creating a Style Object Let's get right to it by defining our object that contains the styles we wish to apply: var Letter = React.createClass({ render: function() { var letterStyle = { padding: 10, margin: 10, backgroundColor: "#ffde00", color: "#333", display: "inline-block", fontFamily: "monospace", fontSize: "32", textAlign: "center" }; return ( <div> {this.props.children} </div> ); } }); We have an object called letterStyle, and the properties inside it are just CSS property names and their value. If you've never defined CSS properties in JavaScript before (ie, by setting object.style), the formula for converting them into something JavaScript-friendly is pretty simple: - Single word CSS properties (like padding, margin, color) remain unchanged - Multi-word CSS properties with a dash in them (like background-color, font-family, border-radius) are turned into one camel cased word with the dash removed and the first letter of the second word capitalized. For example, using our example properties, background-color would become backgroundColor, font-family would become fontFamily, and border-radius would become borderRadius. Our letterStyle object and its properties are pretty much a direct JavaScript translation of the .letter style rule we looked at a few moments ago. All that remains now is to assign this object to the element we wish to style. Actually Styling Our Content Now that we have our object containing the styles we wish to apply, the rest is very easy. Find the element we wish to apply the style on and set the style attribute to refer to that object. In our case, that will be the div element returned by our Letter component's render function. Take a look at the highlighted line to see how this is done for our example: var Letter = React.createClass({ render: function() { var letterStyle = { padding: 10, margin: 10, backgroundColor: "#ffde00", color: "#333", display: "inline-block", fontFamily: "monospace", fontSize: "32", textAlign: "center" }; return ( <div style={letterStyle}> {this.props.children} </div> ); } }); Our object is called letterStyle, so that is what we specify inside the curly brackets to let React know to evaluate the expression. That's all there is to it. Go ahead and run the example in the browser to ensure everything works properly and all of our vowels are properly styled. For some extra validation, if you inspect the styling applied to one of the vowels using your browser developer tool of choice, you'll see that the styles are in-fact applied inline: While this is no surprise, this might be difficult for those of us used to styles being inside style rules to swallow. As they say, the Times They Are A Changin'. You Can Omit the "px" Suffix When programmatically setting styles, it's a pain to deal with numbers that need a pixel value suffix. In order to generate these values, you need to do some string concatenation on your number to add a px. To convert from a pixel value back to a number, you need to parse out the px. All of this isn't extremely complicated or time consuming, but it is a distraction. To help with this, React allows you to omit the px suffix for a bunch of CSS properties. If you recall, our letterStyle object looks as follows: var letterStyle = { padding: 10, margin: 10, backgroundColor: "#ffde00", color: "#333", display: "inline-block", fontFamily: "monospace", fontSize: "32", textAlign: "center" }; Notice that for some of the properties with a numerical value such as padding, margin, and fontSize, we didn't specify the px suffix at all. That is because, at runtime, React will add the px suffix automatically. The only number-related properties React won't add a pixel suffix to automatically are the following properties: animationIterationCount, boxFlex, boxFlexGroup, boxOrdinalGroup, columnCount, fillOpacity, flex, flexGrow, flexPositive, flexShrink, flexNegative, flexOrder, fontWeight, lineClamp, lineHeight, opacity, order, orphans, stopOpacity, strokeDashoffset, strokeOpacity, strokeWidth, tabSize, widows, zIndex, and zoom. While I wish I could tell you that I walk around with this information memorized, I actually just referred to this article! Please hold your applause :P While pixel values are great for many things, you may want to use percentages, ems, vh, etc. to represent your values. For these non-pixel values, you still have to manually ensure the suffix is dealt with. React won't help you out there, so if you aren't a fan of pixel values, this nicety doesn't gain you much. Making the Background Color Customizable The last thing we are going to do before we wrap things up is take advantage of how React works with styles. By having our styles defined in the same vicinity as the JSX, we can make the various style values easily customizable by the parent (aka the consumer of the component). Let's see this in action. Right now, all of our vowels have a yellow background. Wouldn't it be cool if we could specify the background color as part of each Letter declaration? To do this, in our ReactDOM.render method, first add a bgcolor attribute and specify some colors as shown in the following highlighted lines: ReactDOM.render( <div> <Letter bgcolor="#58B3FF">A</Letter> <Letter bgcolor="#FF605F">E</Letter> <Letter bgcolor="#FFD52E">I</Letter> <Letter bgcolor="#49DD8E">O</Letter> <Letter bgcolor="#AE99FF">U</Letter> </div>, destination ); Next, we need to use this property. In our letterStyle object, set the value of backgroundColor to this.props.bgColor: var letterStyle = { padding: 10, margin: 10, backgroundColor: this.props.bgcolor, color: "#333", display: "inline-block", fontFamily: "monospace", fontSize: "32", textAlign: "center" }; This will ensure that our the backgroundColor value is inferred from what we set via the bgColor attribute as part of the Letter declaration. If you preview this in your browser, you will now see our same vowels sporting some totally sweet background colors: What we've just done is something that is going to be very hard to replicate using plain CSS. Now, as we start to look at components whose contents change based on state or user interaction, you'll see more such examples where the React way of styling things has a lot of good merit :P Conclusion As we dive further and learn more about React, you'll see several more cases where React does things quite differently than what we've been told is the correct way of doing things on the web. In this tutorial, we saw React promoting inline styles in JavaScript as a way to style content as opposed to using CSS style rules. Earlier, we looked at JSX and how the entirety of your UI can be declared in JavaScript using an XML-like syntax that sorta kinda looks like HTML. In all of these cases, if you look deeper beneath the surface, the reasons for why React diverges from conventional wisdom makes a lot of sense. Building apps with their very complex UI requirements requires a new way of solving them. HTML, CSS, and JavaScript techniques that probably made a lot of sense when dealing with web pages and documents may not be applicable in the web app world. With that said, you should pick and choose the techniques that make the most sense for your situation. While I am biased towards React's way of solving our UI development problems, I'll do my best to highlight alternate or conventional methods as well. Tying that back to what we saw here, using CSS style rules with your React content is totally OK as long as you made the decision knowing the things you gain as well as lose by doing!
https://www.kirupa.com/react/styling_in_react.htm
CC-MAIN-2017-09
refinedweb
2,663
61.16
.NET Networking APIs for UWP Apps Immo This post was written by Sidharth Nabar, Program Manager on the Windows networking team. (API reference on MSDN).. What’s New These are the new APIs and features that we have added into .NET Core 5 for UWP app developers. Systemproperty to 2.0 is not supported on other .NET platforms and will throw a System.ArgumentExceptionwhen trying to send such a request. The default version on .NET platforms other than UWP is 1.1. The Request.Versionproperty. What’s Changed. System.Net.Http In Windows 8.1, the implementation of HttpClient was based on a managed HTTP stack comprising of types such as System.Net.HttpWebRequest.Requests This library contains types related to System.Net.HttpWebRequest and System.Net.HttpWebResponse. What’s the same Other types from System.Net and System.Net.NetworkInformation namespaces that were supported for Windows 8.1 Store apps will continue to be supported for UWP apps. There have been some minor additions to this API surface, but no major changes in implementation. Looking Ahead Windows platform missing APIs uservoice or file an issue in GitHub. We look forward to working with you to deliver awesome apps to the entire breadth of Windows devices.
https://devblogs.microsoft.com/dotnet/net-networking-apis-for-uwp-apps/
CC-MAIN-2019-13
refinedweb
206
62.54
How to run a task every n seconds or periodically in Java Hey Everyone! In this article, we will learn how we can execute or perform a particular task in Java every n seconds or periodically using the Timer and TimerTask classes. Before directly jumping into the code, let us first understand about the TimerTask class used. TimerTask class TimerTask is an abstract class present in the java.util.TimerTask package in Java which implements the Runnable interface. As it is an abstract class it needs to be implemented and the run method of this class needs to be overridden. This class is used to define a task to run every n seconds or run periodically or even for a single time. This is done by creating an instance of the TimerTask class Timer Class The timer class is present in the java.util.Timer package in java. This class consists of a method which schedules a task for a specified period of time. Java Program to schedule a task periodically or for every n seconds package taskn; import java.util.Date; import java.util.Timer; import java.util.TimerTask; import java.util.logging.Level; import java.util.logging.Logger; public class Taskn extends TimerTask { public void run(){ System.out.println("Task scheduled ...executing now"); try { Thread.sleep(3000); } catch (InterruptedException ex) { System.out.println(ex); } System.out.println("Timer task Done "); } public static void main(String args[]) throws Exception{ TimerTask timerTask = new Taskn(); //reference created for TimerTask class Timer timer = new Timer(true); timer.scheduleAtFixedRate(timerTask, 0, 10); // 1.task 2.delay 3.period Thread.sleep(6000); timer.cancel(); } } Output:- Task scheduled ...executing now Timer task Done Task scheduled ...executing now Timer task Done I hope the above article was useful to you. Leave a comment down below for any doubts/suggestions. Also, read:
https://www.codespeedy.com/run-a-task-every-n-seconds-or-periodically-in-java/
CC-MAIN-2020-50
refinedweb
305
51.75
TL;DR I argue (without data) that you should decide when to end your A/B tests by considering for each variant the amount you stand to lose if you go with that variant. If you see an acceptable deal, then take it and end the test! Broadly speaking, the presented deals tend to get better as more samples are collected. I believe that what is "acceptable" should depend on extrinsic factors such as how long you've been running the test, the excitement factor of the next A/B tests in the queue, and the relative maintainability of the variants. The "amount you stand to lose" is the worst-case drop in conversion rate of the variant compared to the best of the other variants. I define "worst case" as there being a 95% probability that things are not this bad. Or 98%+ if you're conservative. As you collect more data, the "worst case" for each variant becomes more realistic. Once a variant's worst case is acceptable to you - either because even the worst case is a gain (great success!), or because it's a tolerably small loss and you've run out of patience (formerly known in sloppy circles as a null result) - you should end the test and go with that variant. Beware: if you don't implement the correct statistics, then the above recommendations can be disastrous! Introduction - Data at Freelancer At Freelancer.com we do a lot of A/B testing. We've made a lot of progress in the usability of our site (and averted the occasional backwards step!) by testing ideas against each other and seeing how real people react. Our customers vote with their feet, and we can find out whether a new form does indeed make it easier to post a project, or we can check that our new matchmaking algorithm is actually helping real people to find the most suitable freelancers. We're a fast-paced, data-driven company. In the last 8 hours we automatically generated over 3,000 graphs for our internal dashboard. Our data scientists take turns to comb through these and present the results in Daily Stats emails, which often set the discussion agenda for the day. Bayesian analysis of our A/B tests is a natural fit for us. We can check test results as often as we like. We can end tests early if there's a strong result and move on to the next test. Or we can keep a test running a little longer than planned in the hope of reaching a significant result. We can also directly answer questions like "What is the probability that variant B is better than variant A?". When you're making a business decision, this is a more directly useful question than "What is the probability that the two variants are equally effective and the observed results arose by chance?", which is the question usually asked in null hypothesis testing. But enough jabs in the Bayesian vs. (non-sequential) Frequentist debate! You can retread that debate in lots of places. The point of this article is to introduce a new (to me, and maybe to you) question to ask when analysing A/B tests. In this article I'm going to assume you're modeling a Bernouilli process, i.e. one where each sample either converts or does not convert. Your split test can involve two or more variants. First I recap the most common question evaluated by Bayesian A/B tests, and point out that it doesn't handle null results gracefully. Then I discuss a straightforward method of resolving this problem ("Lower your standards"). Then I take it one step further, to evaluate quantities I subjectively believe to be more "boss" and more useful for making business decisions ("Making deals"). Following the hand-wavy discussion, I give mathematical formulations of each of the questions ("The math"), then conclude with an outline of the numerical implementation ("The code"). Ask the right question It's super important to ask the right question of the data. Typically you formulate a single question in words, then you go about translating that question into maths and finally use it to ask the data for an answer. The Bayesian way is to ask as few separate questions as possible, and wherever possible to do comparisons and evaluations inside the probability clouds before finally extracting the answer as one small piece of information; rather than asking n questions, extracting n pieces of information that are detached from probabilities, then doing further processing on them to arrive at the final result. The most common question that split testers ask (including Google Experiments) seems to be, for each variant, "What is the probability that this variant is the best?" If you know that there is going to be a best variant, then this is a great question to ask. You keep collecting data until one variant has >95% probability of being the best (or >98%, or >99.9%, or >100 - eps%), at which point you call the test done and proclaim that variant the winner. But there isn't always going to be a single best variant. Most of the A/B tests I've been involved in have not ended with a clear winner. If it doesn't actually matter whether the button is deep blue (variant 1) or purple (control), then unless you're unlucky, if you ask this question you won't get to declare a winner. How and when should you give up on such a test? The problem is that the above question can't distinguish between two common cases: - One variant is better than the others, but not enough data has been collected to be sure of this (and we should keep running the test) - The two best variants are pretty much equally effective (and we should stop the test) Amusingly, this distinction is closely related to the null-hypothesis question that the standard frequentist methods try to answer, but let's not go there. Lower your standards One solution is simply to lower your standards before you start the test. Could you live with a conversion rate that's possibly 2% less than optimal - whatever optimal happens to be? How about a 5% worst case drop? C'mon, 5% will allow you to finish the test so much faster... Okay, okay, let's go with 2%. We can now run a handicapped race, and the question becomes: "What is the probability that either this variant is the best or is within 2% of the best?" or, equivalently, "If we dropped the other variants' conversion rates by 2%, then what would be the probability that this variant would be the best?" Now you start a test and collect data until one of the variants has >95% probability of being the best or within 2% of the best. This question lets us distinguish between the two cases. For case 2, unless you're unlucky, in the long run one or both of the best variants will reach >95% probability. If multiple variants reach >95% probability of being 'nearly the best', which should you pick? Well according to the test, and your standards, it doesn't really matter! Be aware that if you do lower your standards, then the consequence of being unlucky (which happens with 5% probability) are more severe than before. Therefore you may wish to increase the threshold from 95%. Making deals But by how much should you lower your standards? Ideally you'd like to discover that one of the variants is an outright winner - you don't want to accept a "possibly 2% worse" variant unless you have to. It may make me a sloppy decision maker, but my standards tend to start high while I'm excited and optimistic, then slowly drop as I get impatient with the experiment. (I didn't notice either of these effects on Wikipedia's list of cognitive biases!) Rather than fixing the handicap margin and calculating a probability, as above, why not fix the probability and calculate the handicap margin? Ask "How much of a conversion rate boost would we have to give this variant in order for it to have 95% probability of being the best?" Or equivalently, "In the worst case, how much do we stand to lose (or gain) if we go with this variant instead of the best of the rest. Here, 'worst case' means that there is 95% probability that the loss is not this bad." When the "boost" or the "worst case change" is zero for a variant, then there is 95% probability that that variant is the best. When the "worst case" is a gain of X%, then you're 95% sure that this variant is at least X% better than the best of the other variants - and you can either stop the test, or keep going and try to rack up bigger bragging rights. Note that we don't necessarily measure this change against the control's conversion rate - the control is just another variant. And as soon as you start the experiment, you will see that scrapping the test and returning to the control (which I'm assuming is the original version) involves a worst-case drop in conversion rate compared to the hypothetical best of the relatively-unknown variants. Let's consider some case studies. Here are some results from a test we ran: sample_size expectation_value worst_case_rel variation 1 1101 0.076294 -0.146379 variation 2 1074 0.068901 -0.299833 control 1092 0.061355 -0.385461 This is early in an A/B test, and as usual the worst cases are all negative (i.e. a drop). Note that the worst-case change is bigger than the expected change: for the control the expected change compared to the best of the rest (variation 1) is 0.061 / 0.076 - 1 = -20%, whereas the worst-case change is almost double at -39%. And for variation 1, which is in the lead, the expected gain over the next-best variant is 0.076 / 0.068 - 1 = +11%, whereas the worst-case change is still negative. The fact that the worst-cases are all so pessimistic indicates the high level of uncertainty, which is due to the small sample size. Should we stop the test now? No, not even if we were happy to cop a 14.6% drop in conversion rate - because looking at the population sizes we might not have enough samples to be able to trust the Bayesian methods yet. Here's some results from another test: sample_size expectation_value worst_case_rel variation 1 46119 0.165572 0.074664 control 51274 0.150349 -0.113701 We have a winner! The variant's worst case is a 7% increase in conversion rate. We expect it to be even higher: 0.166 / 0.15 - 1 = +10%. For maximum bragging rights, we could wait, collect more samples and hope that the 7.5% worst case increase approaches the expected 10% increase (and not vice-versa!), or we could stay humble and start the next test sooner. (Actually, in this particular case the difference in sample size between the variants was not intentional, and was due to a technical problem in which the variant was less likely to record failures. Oops. The need to detect circumstances like this is one of the reasons why we don't run Multi-Armed Bandits yet!) It's also possible to calculate the worst case as percentage points as opposed to a relative percent. I find the 'worst case' to be really useful when making decisions, and communicating results to decision makers. You can say things like "But if you end the test now, we could lose up to 10% of our conversion rate!", which I think is more evocative than "But there's only 80% probability that this is the best variant!". You can also say things like "Look, we've been running this test for a month now and have hundreds of thousands of samples for each variant. We still haven't found a winner but the worst case for this variant is a 5% drop.", which I think is a more satisfying answer than "We still haven't found a winner; two of the variants are neck and neck in terms of expected conversion rate but both have <60% probability of being the best", or "There's a 95% probability that the two best variants differ by less than our predefined margin of caring, so we can't distinguish between the two, so let's go with the control". Additionally, if you don't like the winner for some reason - maybe it involves code debt or the design clashes with the CEO's shirt - this method tells you how much you stand to lose by going with your favorite variant - both the expected loss and the worst-case loss. So if the MVP of the spiffy new redesign doesn't perform as well as the ugly-but-optimized original version, the expectation values alongside the 'worst case' numbers can help you decide whether you're willing to kill off the better performing original and lose a small amount of business while you optimize the new version. The math If you understand the maths behind the usual Bayesian methods that ask the originally posed question "What's the probability that this variant is the best?", then the subsequent questions follow pretty easily. If you don't understand the maths, it's very briefly described here - you only need to make it to Eq. 1! If you want more background on the mathematical properties, then these lecture notes may be helpful. OK, so having read that you're now comfortable integrating joint probability distributions (JPDs) to determine probabilities? Great. So to answer the original question, you calculate the probability that variant i is the best by integrating the JPD over the region $\mathcal{R}: x_i > max({x_j \forall j \neq i})$: P(i \text{ is the best}) = \int_{\mathcal{R}} P(\mathbf{x})~d\mathbf{x}, where x_i is the conversion rate of variant i, and x is a vector of all variants' conversion rates. If you want to implement a "threshold of caring" of c (e.g. c = 1% relative), then for each variant i you simply integrate the JPD over the region $\mathbf{R}: x_i > max({x_j \forall j \neq i}) * (1 + c)$: P(i \text{ is near enough}) = \int_{\mathcal{R}} P(\mathbf{x})~d\mathbf{x}. If you want to find the "worst case change", then you still want to integrate over the regions $\mathcal{R}: x_i > max({x_j \forall j \neq i}) * (1 + c_i)$ to find the value of c_i for which 0.95 = \int_{\mathcal{R}} P(\mathbf{x})~d\mathbf{x}. This sounds messy, but in practice it's very easy to compute numerically. Others' improvements on the original question My friend Thomas Levi developed a similar method to the "threshold of caring" (unpublished as of yet) for the control and a single variant. It gives you one of three answers each time you run it: 1) Stop the test and declare the winner 2) Stop the test and declare no result: the variant's conversion rate is within Y% either side of the control 3) Don't stop the test yet! I argue that 1) and 2) have similar consequences in terms of what you actually do - both of them call for you to stop the test and pick a version to push to 100% - so we may as well combine them. Ben Tilly also suggests moving from "We're confident that we're not wrong" to "We're confident that we didn't screw up too badly. (And hopefully we're right.)" - and provides a sequential frequentist scheme. Note that the "threshold of caring" in Chris Stucchio's approach (which inspired mine) does something different again. He computes the expectation value of the function max(-drop in conversion rate, 0): \int max(0, max({x_j \forall j != i}) - x_i) P(\mathbf{x})~d\mathbf{x}, or equivalently for the region $\mathcal{R}: x_i < max({x_j \forall j \neq i})$, \int_{\mathcal{R}} (max({x_j \forall j != i}) - x_i) P(\mathbf{x})~d\mathbf{x}. Essentially, he asks "Consider the expectation value for the difference in conversion rates between the variants. What is the contribution to that number from the region of the JPD where the underdog is better?" Or equivalently (I think?), "Given that you're making a mistake by choosing variant A, how much will that mistake cost you? Multiply this number by the probability that you did made a mistake by choosing variant A." I think that the expected cost of a mistake is an interesting quantity, and the probability of making a mistake is an interesting quantity, but I don't intuitively understand why their product is of fundamental importance on its own. I don't know whether it was chosen as an ad-hoc combination of the two factors or to draw on some deep result. The code If you have code that calculates $P(i \text{ is the best})$ by Monte-Carlo sampling, then it's very easy to extend it to calculate the other two quantities. Here we only discuss implementation for a Bernoulli model with a beta prior. $P(i \text{ is the best})$ may be calculated by constructing a beta distribution for each of the variants' conversion rates, then taking a large number of samples from each of the variants' distributions: from scipy.stats import beta def draw_monte_carlo(data, prior, num_draws=100000): """Construct and sample from variants' beta distributions. Use the data and the prior to construct a beta distribution for the conversion rate of each of the variants. Sample from these distributions many times and return the results. INPUTS: - data (pd.DataFrame): Each row is the collected data for a variant, giving the number of successes (conversions) and failures (non-conversions) - prior (pd.Series or dict): Parameters defining a beta prior. - num_draws: The number of draws to take from the distributions. The more, the merrier. RETURNS: - draws (pandas.DataFrame): Each column is a variant's monte carlo draws. """ draws = pd.DataFrame(columns=data.index) for variant_name, variant_data in data.iterrows(): # Calculate parameters for beta distribution # Here we assume you're using the same prior for all variants a = variant_data['successes'] + prior['successes'] + 1 b = variant_data['failures'] + prior['failures'] + 1 # Sample the beta distribution many times and store results draws[variant_name] = beta.rvs(a, b, size=num_draws) return draws Compare the nth sample from each distribution and note which variant won. For each variant i, the percentage of samples for which it wins is approximately $P(i \text{ is the best})$. def calc_prob_of_best(variant, draws): """Return the probability `variant` is the best. INPUTS: - variant (str): Name of the variant - draws (pandas.DataFrame): Each column is a variant's monte carlo draws. """ best_of_the_rest = draws.drop(variant, axis=1).max(axis=1) win = draws[variant] > best_of_the_rest return win.sum() / len(win) From here, calculating $P(i \text{ is near enough})$ for an *absolute percentage* is extremely easy - simply subtract c from each of variant i's samples before you do the comparison. Effectively you're introducing a simple handicap to the game. def calc_prob_of_near_enough(variant, draws, care_threshold): """Return the probability that `variant` is 'good enough'. INPUTS: - variant (str): Name of the variant - draws (pandas.DataFrame): Each column is a variant's monte carlo draws. - care_threshold (float): Defines 'good enough'. E.g. `care_threshold=-0.01` implies that a 1 percentage point drop is 'good enough'. """ best_of_the_rest = draws.drop(variant, axis=1).max(axis=1) win = draws[variant] - care_threshold > best_of_the_rest return win.sum() / len(win) Now you _could_ solve for c_i by numerically solving the above function for a value of c_i that gives $P(i \text{ is near enough}) = 0.95$, for each variant i. But there's a much easier way. Instead, just calculate the differences between i's sampled conversion rates and the best of the others, then find the 5th quantile: def calc_worst_case_abs(variant, draws, confidence=0.05): """Return the worst-case absolute conversion increase for `variant`, compared to the best of the other variants. INPUTS: - variant (str): Name of the variant - draws (pandas.DataFrame): Each column is a variant's monte carlo draws. - confidence (float): Defines the probability of the "worst case"; defaults to a 5% chance of the worst case eventuating. """ best_of_the_rest = draws.drop(variant, axis=1).max(axis=1) differences = draws[variant] - best_of_the_rest return differences.quantile(confidence) Finally, for the worst case relative change in conversion rate, def calc_worst_case_rel(self, variant, confidence=0.05): """Calculate the worst-case relative change in conversion rate. INPUTS: - variant (str): Name of the variant - draws (pandas.DataFrame): Each column is a variant's monte carlo draws. - confidence (float): Defines the probability of the "worst case"; defaults to a 5% chance of the worst case eventuating. """ best_of_the_rest = self.draws.drop(variant, axis=1).max(axis=1) rel_change = self.draws[variant] / best_of_the_rest return rel_change.quantile(confidence) - 1 Conclusion I have described a Bayesian technique to analyse A/B tests that I think is very useful to inform decision making. Unlike the most common Bayesian methods, my method can distinguish between a near-null result and a test with too-few samples. Acknowledgments Thanks to Thomas Levi, Matt Gibson, Carlos Pacheco, Shamindra Shrotriya, and Richard Weiss for useful debates and discussions. None of these people necessarily agree with its contents! About the Author Felix Lawrence is a Data Scientist in the Vancouver office of Freelancer.com. His hobbies include skiing, craft beer, and applied math.
https://www.tr.freelancer.com/community/articles/how-to-do-split-tests
CC-MAIN-2018-05
refinedweb
3,615
61.77
>>.'" Always be there (Score:5, Insightful) Re:Always be there (Score:4, Insightful) Dying...not hardly (Score:5, Insightful) What about desktop presence? (Score:5, Insightful) For performance-critical code there is no choice (Score:5, Insightful) Besides, measuring the popularity of a language by the size of its web presence is the worst kind of fallacious reasoning. That's a broken way to think of it (Score:5, Insightful):Always be there (Score:5, Insightful). ;-) C and C++ might die at different rates. (Score:5, Insightful) Statistics (Score:5, Insightful) Just like Fortan and COBOL! (Score:1, Insightful). Re:Visual Basic at #3? (Score:4, Insightful):so what? (Score:5, Insightful):not so.. (Score:5, Insightful) Re:Managed code is the way to go (Score:5, Insightful):Always be there (Score:5, Insightful)". There are two kinds of coders... (Score:4, Insightful):C++ is as good as C# _if_ used correctly. (Score:5, Insightful). The lower levels will always be there (Score:3, Insightful) Re:Managed code is the way to go (Score:2, Insightful):Always be there (Score:2, Insightful) Re:That's a broken way to think of it (Score:5, Insightful) Mind you, I don't think anything else is really set up for it either (Erlang?) but that's going to be the next big challenge. Re:Always be there (Score:5, Insightful) However, you get to create that facility s/get to/must/ Seriously, most people want to sit down and write the logic for their application, not invent (or even copy-paste) memory management schemes.. Re:so what? (Score:3, Insightful) Second, everyone makes mistakes. I don't care who you are, if you write 1 million lines of code, there's going to be a bug in there somewhere. Given enough bugs, there's going to be one you don't catch. Garbage collection takes away a class of bug and makes it so that even the very good programmers can write more stable code. There's a lot to be said for programmers getting taught better and applying those principles better, but in the end, taking away a class of bugs is going to be useful in the long run. Even with garbage collection it's possible to run into memory management problems, but it's a lot harder.? Re:Statistics (Score:2, Insightful) I absolutely agree. We have a disease in our industry, and it's that fast and cheap with under-experienced under-trained people is the way to go. There's a reason why OS's are not coded in concepts like Java - any programming language that needs to pause to "clean up after itself" needs some serious damn help - please code, audit your code properly instead of writing yet another piece of code because you refuse to properly #def, #undef and manage how you use memory. I'm sick and tired of the "Javoids" hanging up my web browsing, and fouling up real-time delivery on my trading floors. We are burning up serious quantities of electricity, and creating serious amounts of heat and feeding a bloated marketplace with this kind of inefficiency. We don't need faster processors, we need better code. You can make money with excellence - you just have to try instead of taking the easy road every time. Cray made a fortune being brilliant and keeping true to his engineering principles. Microsoft made billions by placing a pillow over the head of any and every innovation anyone with two brain cells in Redmond could think of would threaten their bloatware. Somewhere along the line the craftspeople left the building because IT managers were replaced with "Executrons" - these mindless folks who actually get compensated bonus for how much they "save" by cutting costs in IT - that's letting the fox in the hen-house. Thanks to this idiocracy, we now teach people not programming, but how to use Microsoft & Sun products in college for CS101, and to find any worthwhile thinking in a candidate I need to look for people from a foreign country. Thanks, Sun. Thanks, Microsoft. How about some real damn innovation, please, instead of these statistical analysis that make the status quo seem palatable.. Re:That's a broken way to think of it (Score:5, Insightful) Whatever it is, its compiler and low-level libraries will be written in C.? Re:not so.. (Score:4, Insightful):Managed code is the way to go (Score:1, Insightful) pimpl is not a way to code. Is a way to abuse a language feature in order to avoid a language issue.. Speed of light limits Internet speeds (Score:4, Insightful):Managed code is the way to go (Score:3, Insightful) More critical is that the grammar of C++ is undecidable. [yosefk.com] Re:Visual Basic at #3? , whilst Windows admins have spent the last 13 years ticking tickboxes and, lately, dragging objects. (does knowing win.ini and system.ini make me l33t or simply a dinosaur? I fled the Windows scene when 2003 came out...) Re:C/C++ is dying! (Score:3, Insightful) Re:Always be there (Score:2, Insightful) There are libraries out there available to just install and link to. But it certainly would be nice if some of this stuff got into the Standard C libs, so that all you needed was something like: #include <stdgcmem.h> ... and off you go on your merry way. The argument against would be that not everyone has the same needs from such a library, but it's a spurious one. Not everyone has the same needs from an I/O library, which is why there are a million alternatives to <stdio.h>, that doesn't mean you can't provide at least one standard library, and let those with other needs link to something else instead.:5, Insightful). Re:Always be there (Score:3, Insightful) No. They didn't. They implemented something else entirely, something that works outside the paradigm of what you're doing, attempting to track memory use by indirect means (such as counting references.) And this is precisely the problem with such memory management; by taking the programmer out of the management loop and abstracting it into irrelevance, the programmer no longer has either the means or the incentive to keep tight control of resources. The mindset leads to thinking nothing of bringing the entire Python interpreter into memory in order to evaluate a line or two of trivial logic, because its trivially easy to do. Consequently, huge system loads are incurred for relatively small tasks. Write that same logic in C, and you have a dedicated executable that (a) is tiny by comparison, (b) runs orders of magnitude faster, (c) you actually understand on every level. Presuming you don't drag in some huge library you didn't really need or do a really bad job. :-) Don't get me wrong - I am *not* a Python hater, in fact, it is one of my favorite languages. But I don't use it for everything just because it is easy. I always think about resources used, whose resources they are, whether I have the implicit right to consume them if I don't have to, and if I do have such a right, do I *want* to consume them? After all, they may be mine, and if I'm trying to do something else, some interpreter clumping around in the background consuming large chunks of resources may not be in my best interests. If it is true for me, it's probably true for my customers, so in the end, I have to do the same evaluation for them as well. those other languages are MADE with C (Score:2, Insightful) 2. C........14.7% - duh 3. VB.......11.6% - who cares 4. PHP......10.3% - written in C++ 5. C++.......9.9% - duh 6. Perl......5.9% - written in C 7. Python....4.5% - written in C 8. C#........3.8% - who cares 9. Ruby......2.9% - written in C 10. Delphi...2.7% - no idea D programming language (Score:3, Insightful). And what are all those other languages written in? (Score:3, Insightful) Until a language comes along that can outperform C or C++, there will always be a use for them. It's still right-tool-for-the-job. I don't use Ruby to write audio DSP plugins, and I don't use C/C++ to code a web application. I'll keep both in my tool box, along with lisp,. Since you're not using that brain... (Score:1, Insightful) To spoil the joke, but to explain it to you: 1) The speed of the internet has fuckall to do with the programming language used to code the processing parts of it. 2) A hard drive connected to a terabit internet won't do jack shit; -something- at your end needs to receive and interpret the stuff coming down the tubes. Or do you think magic fairies will do that? 3) Even a hard drive has software inside it, which will still need to be programmed, which means some language will be in use. You're getting flamed because your statement makes about as much sense as: "Because TV will be digital next year, everybody will wear lederhosen with rabid gerbils stuffed down the front." Re:Managed code is the way to go (Score:4, Insightful) If only the standards committee could get off its arse and progress as quickly as BOOST does.... Re:Statistics (Score:3, Insightful) Forth is interpreted.:Always be there (Score:3, Insightful) As a VB.NET programmer building business automation apps for a living, I can't imagine building a (G)UI in a LLL. Not that I wouldn't appreciate the exercise, but the demands of the business environment won't allow it. Not just for the initial build but for the inevitable stream of change requests that will follow. Drag/Drop/Done is the name of the game. But as a hobbyist microcontroller programmer, well, there's no such thing as bloat in that space - you can't do it! If I was writing some image manipulation software, all the actual processing would most certainly be in C if not straight assembly for the very most critical parts. But the Load/Save/View/whatever parts, I'll do in VB! What about for embedded applications? (Score:3, Insightful) I'm sure Java ranks high there, too, but I don't consider it to be in the same class. Native Java hardware is relatively expensive, and running a VM takes a significant amount of memory and processing power. My latest project is pure C (aside from about 100 lines of assembly for a firmware upgrade bootloader), around 30 pages of source code at present, and it compiles to about 9k of object code. It's targeted for a $2.50 processor, and I'm able to do things like simultaneous Bell 202 and DTMF decoding in software because I know exactly how C arithmetic is implemented on the processor and can take advantage of that without having to actually do the implementation in assembler. Doing the same thing in Java would cost a lot more. And when $5 saved on the bill of materials means an extra $5-10k in my bank account at the end of the year, that's a big deal. So what other languages can compete in that space? Re:C/C++ is dying! (Score:2, Insightful) Re:C/C++ is dying! (Score:4, Insightful) Niche compared to others (Score:3, Insightful) So it could also be considered a niche on the scale of all the situation at which you can throw the other languages. C could be found on anything electronic that can run code between small embed micro-controllers up to package running on huge mainframes or cluster. It's not bad, its very useful indeed. Just hasn't seen as many different usages as the other languages yet. Re:Always be there (Score:1, Insightful) You seem to be suffering from Not-Invented-Here syndrome. Sure, all of the theoretical advantages you cite are real, but look at all the time you end up spending developing this framework. You say it's a small, one-time cost... but what if your application requirements change? What if you need to keep up with changing technology (what if you suddenly want to adapt your code to be cache-aware? distributed network shared memory? multiprocessor safe? something that hasn't even been thought of yet?) What if there are subtle memory model bugs you haven't anticipated because you're not an expert in machine architecture X? The big advantage of shoving all this work off to a third party, even if it's a suboptimal solution, is that they get to worry about all that stuff. Furthermore, if it's fundamental to the language, you can be assured all the libraries you might want to use will also take advantage of it. That's unlikely in the case of a framework you roll yourself. It's the open source model of sharing, instead of re-inventing the wheel. While I'm sure it's a very beautiful thing to build all this infrastructure, 99.9% of the time it's also completely unnecessary for you, personally, to do it. This is where languages like C/C++ fall down. (To be honest: I code lots of infrastructure-type stuff in C, too. Systems programming, we used to call it. But I do it as a hobby, not to get stuff done.):C/C++ is dying! (Score:2, Insightful):1, Insightful) Re:C/C++ is dying! (Score:3, Insightful) Everyone benefits from not having to reload all the sidebars, etc. on a page when they click a link. Not if they're writing the firmware for a washing machine, the operating system for a telephone switching system, the back-end of a corporate database application, the latest FPS blockbuster, the drivers for a new Linux file system... Dubious methodology (Score:3, Insightful) Web presence doesn't equal much; it certainly doesn't equate to popularity. Nor do these numbers bear much resemblance to the mix of programming openings I see on job boards. C is number two? Really? Or are they just counting the number of times C shows up in the meaningless expression C/C++ ? Outside of the DSP and embedded devices niche, the appearance of "C/C++" in a job listing means they're looking for a C++ programmer, and it's generally followed by a list of C++ APIs that the successful candidate will be familiar with. And please, C fans, keep your flames low. C is my favorite language, but if it was really the second most popular programming language, I wouldn't spend eight to ten hours a day programming in C++ and PHP. Anyway, the bit about the lack of garbage collection in C++ is a crock. There are a number of easy to learn and use GC libraries available for C++, and a number of them can be used in most cases with little to no code changes simply by linking them in. If the popularity of C++ is declining over GC, it's because people have gotten too lazy to type "c++ garbage collector" into Google. There are plenty of reasons to dislike C++, but that's just not one of them. Re:It is somewhat and it's the reason I left CS (Score:3, Insightful) In this day and age, maybe its inefficient, maybe its not quite CS, maybe its not "cool", but it gets the job done, it saves money, it makes things work, its enabling. Its still not good enough in its current state, but its going in the right direction. We're leaving the "how" behind, and changing it with the "what". People have ideas, and they make those ideas a reality, and it "works". It may not be as badass to implement an enterprise management system, as opposed to the first ethernet network, but you don't spend 3 years on a project to be left empty ended 90% of the time either. C++ is easy (Score:3, Insightful) Re:C/C++ is dying! (Score:2, Insightful) How is a scripting language not a programmer's tool? And how, in that case, does PHP not have the same caveat attached, since (in capability) it resembles a sped-up but limited subset of Perl?:1, Insightful) The Whitespace [wikipedia.org] language is also Turing complete, as is LOLCODE [wikipedia.org]. You CAN solve any solvable problem with either of them. The comparison tdoesn't prove anything one way or the other about VB-- but neither does it Necessarily follow from the fact that it is "Turing complete" that it "isn't an inferior way of doing it" (I know I wouldn't want to get stuck with the job of maintaining, editing, or debugging someone elses Whitespace program... Re:C/C++ is dying! (Score:3, Insightful) For most apps of any size, having a 'separate' GUI is no bad thing. It encourages you to simplify the back end processing and keep if efficient and easy to understand, with a limited number of hooks for a GUI to hook into. The stuff I write at work has multiple user interfaces possible. Little has to change to swap our Windows only C++ / Visual Studio GUI to a multi-platform web based GUI. Re:C/C++ is dying! (Score:3, Insightful) Conceptually that's exactly how it should be, decouple the engine from the display as much as possible. Many enterprise projects learnt this lesson the hard way when they spread the engine across a multitude of MFC dialogs and widgets under Win31, many of the same enterprises are still learning the same lesson with web apps. Re:Always be there (Score:3, Insightful) Good on them, but I believe the right tool should be used for the job. Many very high level tools such as UML get overused these days. Thats a shame and it is good that people are working to keep their assembler skills alive. TIOBE Index is biased and faulty (Score:2, Insightful)
http://developers.slashdot.org/story/08/04/24/1955257/are-c-and-c-losing-ground/insightful-comments
CC-MAIN-2014-41
refinedweb
3,051
71.24
Developers & Practitioners Implementing leader election on Google Cloud Storage Leader. Why do we need distributed locks?. Use cases for leader election. Why is distributed locking difficult?). Cheating a bit: Leveraging other storage primitives. Example: Leader election with Cloud Storage: import ( "context" log "github.com/hashicorp/go-hclog" "github.com/hashicorp/vault/physical/gcs" "github.com/hashicorp/vault/sdk/physical" ) const ( bucketName = "YOUR_GCS_BUCKET_NAME" leadershipFile = "leader.txt" ) func main() { logger := log.Default() b, err := gcs.NewBackend(map[string]string{ "bucket": bucketName, "ha_enabled": "true", }, logger) if err != nil { panic(err) } haBackend, ok := b.(physical.HABackend) if !ok { panic("type casting failed") } ctx, cancel := context.WithCancel(context.Background()) defer cancel() for { lock, err := haBackend.LockWith(leadershipFile, "ignored") if err != nil { panic(err) } logger.Info("running for LEADERSHIP") doneCh, err := lock.Lock(ctx.Done()) if err != nil { panic(err) } logger.Info("elected as LEADER") <-doneCh logger.Info("lost LEADERSHIP") if err := lock.Unlock(); err != nil { panic(err) } } } This example program creates a lock using a file in Cloud Storage, and continually runs for election. In this example, the Lock() call blocks until the calling program becomes a leader (or the context is cancelled). This call may block indefinitely since there might be another leader in the system. If a process is elected as the leader, the library periodically sends heartbeats keeping the lock active. The leader then must finish work and give up the lock by calling the Unlock() method. If the leader loses the leadership, the doneCh channel will receive a message and the process can tell that it has lost the lock, as there might be a new leader. Fortunately for us, the library we're using implements a heartbeat mechanism to ensure the elected leader remains available and active. If the elected leader fails abruptly without giving up the lock, after the TTL (time-to-live) on the lock expires, the remaining nodes then select a new leader, ensuring the overall system's availability. Fortunately, this library implements the mentioned details around sending so-called periodic heartbeats, or how frequently the followers should check if the leader has died and if they should run for election. Similarly, the library employs various optimizations via storing the leadership data in object metadata instead of object contents, which is costlier to read frequently. If you need to ensure coordination between your nodes, using leader election in your distributed systems can help you safely achieve there's at most one node that has this responsibility. Using Cloud Storage or other strongly consistent systems, you can implement your own leader election. However, make sure you are aware of all the corner cases before implementing a new such library. Further reading: - Implementing leader election using Kubernetes API - Leader election in distributed systems - AWS Builders Library - Leader election - Azure Design Patterns Library - Kubernetes Leader Election on Kubernetes Podcast Thanks to Seth Vargo for reading drafts of this article. You can follow me on Twitter.
https://cloud.google.com/blog/topics/developers-practitioners/implementing-leader-election-google-cloud-storage?utm_campaign=SF%20Data%20Weekly&utm_medium=email&utm_source=Revue%20newsletter
CC-MAIN-2022-33
refinedweb
485
50.33
How to convert a Char to String C++ Input: char s[] = { 'n', 'e', 'd', 'c', 'o', 'd'} ; Output: string s = "nedcod" ; Input: char s[] = { 'c', 'o', 'd', 'e', 'r' } ; Output: string s = "coder" ; // CPP program to get a Char to String C++ #include<bits/stdc++.h> #include<iostream> using namespace std; string convertToString(char* a, int size) { int i; string s = ""; for (i = 0; i < size; i++) { s = s + a[i]; } return s; } int main() { char a[] = { 'C', 'O', 'D', 'E', 'R' }; char b[] = "nedcod"; int a_size = sizeof(a) / sizeof(char); int b_size = sizeof(b) / sizeof(char); string s_a = convertToString(a, a_size); string s_b = convertToString(b, b_size); cout << s_a << endl; cout << s_b << endl; return 0; } Output: CODER nedcod Hi, a specific part of a string we use the erase. This brings us to the end of this video tutorial, thank you for watching please leave us your likes and comments in the comments section. Char Array to String C++ char arr[ ] = "This is a test"; string str(arr); // You can also assign directly to a string. str = "This is another string"; // or str = arr; Using STL then do this: string firstLetter(1,str[0]); 1 Char to String C++ #include <sstream> #include <string> std::string s(1, c); std::cout << s << std::endl; and std::cout << std::string(1, c) << std::endl; and std::string s; s.push_back(c); std::cout << s << std::endl; Append Char to String C++ Use += operator instead of named functions. int main() { char d[1] = { 'd' }; std::string y("Hello worl"); y.append(d); std::cout << y; return 0; } Can i Convert Char to String C++ include include int main() { char c = ‘A’; // using string class fill constructor std::string s(1, c); std::cout << s << '\n'; return 0; } Const Char* to String c++ std::stringhas a constructor from const char *.This means that it is legal to write: const char* str="hello"; std::string s = str; Cannot Convert From Char to String C++ you change from a string * that you no longer need to dereference it in your cout: if (cool){ for (int i=0; i<counter; i++) cout << str << endl; } else { cout << str << endl; }
https://epratap.com/char-to-string-cpp/
CC-MAIN-2021-10
refinedweb
361
61.03
sync(2) [netbsd man page] SYNC(2) BSD descriptor written out. In NetBSD, sync() does not return until all buffers have been written. BSD March 25, 2009 BSD sync(2) System Calls sync(2) NAME sync - update super block SYNOPSIS #include <unistd.h> void sync(void); DESCRIPTION The sync() function writes all information in memory that should be on disk, including modified super blocks, modified inodes, and delayed block I/O. Unlike fsync(3C), which completes the writing before it returns, sync() schedules but does not necessarily complete the writing before returning. USAGE The sync() function should be used by applications that examine a file system, such as fsck(1M), and df(1M), and is mandatory before rebooting. ATTRIBUTES See attributes(5) for descriptions of the following attributes: +-----------------------------+-----------------------------+ | ATTRIBUTE TYPE | ATTRIBUTE VALUE | +-----------------------------+-----------------------------+ |Interface Stability |Standard | +-----------------------------+-----------------------------+ SEE ALSO df(1M), fsck(1M), fsync(3C), attributes(5), standards(5) SunOS 5.11 5 Jul 1990 sync(2)
https://www.unix.com/man-page/netbsd/2/sync/
CC-MAIN-2021-49
refinedweb
155
54.12
In this section you will get detail about charAt() in java. This method comes in java.lang.String package. charAt() return the character at the given index within the string, index starting from 0. This method return the character at the specified index in the string. Syntax: public char charAt(int index) index - index of the character to be returned. Example: A program how to use charAt() method in java. import java.lang.*; class Demo { public static void main(String args[]) { String s="welcome to rose india"; char pos =s.charAt(3); // return the character at position 3rd System.out.println("character at 3rd position is = "+pos); } } Output : After compiling and executing the above program. If you enjoyed this post then why not add us on Google+? Add us to your Circles Liked it! Share this Tutorial Discuss: charAt() method in java Post your Comment
http://www.roseindia.net/java/beginners/java-charat-method.shtml
CC-MAIN-2014-52
refinedweb
145
68.57
Error clahe opencv 3.0 undefined reference to symbol Hello. Before ask here I tried... but didn't solve the problem. This is the code i'm using: '#include "opencv2/imgcodecs.hpp" '#include "opencv2/opencv.hpp " using namespace cv; using namespace std; using cv::CLAHE; int main() { Mat m= imread("teste.png"); imshow("lena_GRAYSCALE",m); Ptr<clahe> clahe = createCLAHE(); clahe->setClipLimit(4); Mat dst; clahe->apply(m,dst); imshow("lena_CLAHE",dst); waitKey(); } The error: g++ -L/usr/local/lib -o "ClaheTeste1" ./main.o -lopencv_imgcodecs -lopencv_highgui -lopencv_core /usr/bin/ld: ./main.o: undefined reference to symbol '_ZN2cv11createCLAHEEdNS_5Size_IiEE' //usr/local/lib/libopencv_imgproc.so.3.0: error adding symbols: DSO missing from command line collect2: error: ld returned 1 exit status make: * [ClaheTeste1] Error 1 Any idea? read the error again, closely. it complains about missing opencv_imgproc. you have to add that to your linker cmdline oh, and btw, to format code , use the 10101button, not the ""button. (this made it look, like a case problem, which is not there in your original code) oh, and btw, to format code here , use the 10101button, not the ""button. (this made it look, like a case problem, which is not there in your original code) Thank you berak!! I'm using < , but to paste here I put ".. You were sure about missing opencv_imgproc
https://answers.opencv.org/question/86443/error-clahe-opencv-30-undefined-reference-to-symbol/
CC-MAIN-2019-47
refinedweb
218
61.12
Back to: C#.NET Programs and Algorithms Buzz Number Program in C# with Examples In this article, I am going to discuss Buzz Number Program in C# with Examples. Please read our previous article where we discussed the Twisted Prime Number in C#. Here, in this article, first, we will learn what is a Buzz Number and then we will see how to implement the Buzz Number Program in C#. And finally, we will see how to print all the Buzz numbers between a range of numbers like between 1 to 100 or 100 to 1000, etc. What is a Buzz Number? A number is said to be Buzz Number if it ends with 7 OR is divisible by 7. The task is to check whether the given number is a buzz number or not. Or, for being a buzz number there are two conditions either of which must be true. - The number should end with digit 7 e.g., 27, 657, etc. - The number should be divisible by 7 e.g., 63, 49, etc. Examples: Input: 63 Output: Buzz Number Explanation: 63 is divisible by 7, one of the conditions is satisfied. Input: 72 Output: Not a Buzz Number Explanation: 72 is neither divisible by 7 nor it ends with 7, so it is not a Buzz Number. Input: 27 Output: Buzz Number Explanation: 27 ends with 7, one of the conditions is satisfied. To implement the Buzz Number, we are going to use the below two operators: – Modulus %: Modulus Operator will calculate the remainder after an integer division. Example: 97 % 10 = 7 43 % 10 = 3 Divides /: Divides numerator by de-numerator Example: 97 / 10 = 9 43 / 10 = 4 How to Implement Buzz Number Program in C#? We need to follow the below approach to implementing the Buzz Number Program in C#. - Input the number to check for the condition - Check whether the number is ending with digit 7 or divisible by 7 - If the condition holds true print it’s a buzz number - If the condition doesn’t hold true print it’s not a buzz number Example: Buzz Number in C# using System; public class BuzzNumberProgram { public static void Main () { Console.WriteLine ("Enter a number"); int number = Convert.ToInt32 (Console.ReadLine ()); if (number % 10 == 7 || number % 7 == 0) Console.WriteLine ("Buzz Number"); else Console.WriteLine ("Not a Buzz Number"); } } Output Time Complexity: O(1) as there are atomic statements and no loops. Buzz Number in between from 1 to 100 The following C# Program prints all the Buzz numbers between 1 to 100. As we know 1, 2, 3, 4, 5, and 6 are not Buzz Numbers, so, we started the for loop directly from 7. using System; public class BuzzNumberProgram { public static void Main () { Console.WriteLine ("Buzz Number Brom 1 to 100 :"); for (int number = 7; number <= 100; number++) { if (number % 10 == 7 || number % 7 == 0) Console.Write (number + " "); } } } Output: ’ In the next article, I am going to discuss How to Implement the Strong Number Program in C#. Here, in this article, I try to explain How to Implement the Buzz Number Program in C# with Examples and I hope you enjoy this Buzz Number Program in the C# article.
https://dotnettutorials.net/lesson/buzz-number-csharp/
CC-MAIN-2022-21
refinedweb
534
62.38
And that's where a lot of readers get into trouble. Some of them send me email. They often express frustration, because they are trying to learn Python, or Bayesian Statistics, or Digital Signal Processing. They are not interested in installing software, cloning repositories, or setting the Python search path! I am very sympathetic to these reactions. And in one sense, their frustration is completely justified: it should not be as hard as it is to download a program and run it. But sometimes their frustration is misdirected. Sometimes they blame Python, and sometimes they blame me. And that's not entirely fair. Let me explain what I think the problems are, and then I'll suggest some solutions (or maybe just workarounds).. Well, what can we do about that? Here are a few options (which I have given clever names): 1) Back to the future: One option is to create computers, like my Commodore 64, that break down the barrier between using and programming a computer. Part of the motivation for the Raspberry Pi, according to Eben Upton, is to re-create the kind of environment that turns users into programmers. 2) Face the pain: Another option is to teach students how to set up and use a software development environment before they start programming (or at the same time). 3) Delay the pain: A third option is to use cloud resources to let students start programming right away, and postpone creating their own environments. In one of my classes, we face the pain; students learn to use the UNIX command line interface at the same time they are learning C. But the students in that class already know how to program, and they have live instructors to help out. For beginners, and especially for people working on their own, I recommend delaying the pain. Here are some of the tools I have used: 1) Interactive tutorials that run code in a browser, like this adaptation of How To Think Like a Computer Scientist; 2) Entire development environments that run in a browser, like PythonAnywhere; and 3) Virtual machines that contain complete development environments, which users can download and run (providing that they have, or can install, the software that runs the virtual machine). 4) Services like Binder that run development environments on remote servers, allowing users to connect using browsers. On various projects of mine, I have used all of these tools. In addition to the interactive version of "How To Think...", there is also this interactive version of Think Java, adapted and hosted by Trinket. In Think Python, I encourage readers to use PythonAnywhere for at least the first four chapters, and then I provide instructions for making the transition to a local installation. I have used virtual machines for some of my classes in the past, but recently I have used more online services, like this notebook from Think DSP, hosted by O'Reilly Media. And the repositories for all of my books are set up to run under Binder. These options help people get started, but they have limitations. Sooner or later, students will want or need to install a development environment on their own computers. But if we separate learning to program from learning to install software, their chances of success are higher. UPDATE: Nick Coghlan suggests a fourth option, which I might call Embrace the Future: Maybe beginners can start with cloud-based development environments, and stay there. UPDATE: Nick Coghlan suggests a fourth option, which I might call Embrace the Future: Maybe beginners can start with cloud-based development environments, and stay there. UPDATE: Thank you for all the great comments! My general policy is that I will publish a comment if it is on topic, coherent, and civil. I might not publish a comment if it seems too much like an ad for a product or service. If you submitted a comment and I did not publish it, please consider submitting a revision. I really appreciate the wide range of opinion in the comments so far. Great post Allen! My first computer was a VIC-20 (I later got a Commodore 64), where you also start with a BASIC prompt. I too have been thinking about to the barriers to starting to code. Last fall, I helped in my son's grade 8 class to teach them python programming. There we wanted the students to be able to start coding with minimal effort, and minimal magic. They already had MacBook Airs, which had python 2.7 already installed, so there was nothing to install. Then we concentrated on a few core programming constructs (no classes for example), and no libraries. This was an attempt to recreate the environment I had when I learnt to program on the VIC-20. I think it worked really well. The students were able to write fairly advanced (a few pages full of code) with that set-up. If you want to do more advanced programming, you'll need to install more, but for a start it was really good. I've written more about it here: Thanks, Henrik!. a pi runs linux. I love Linux but asking a user to setup a pi is like asking them to install IDEs x 5 if you want an a c64 experience check out Pico-8 That tweet about dinosaurs tho. I would kinda call those people "sane" instead. Forcing developers to use the slow, unreliable cloud-shit for anything productive is sooo far-fetched at this point, I'd argue to fire anyone who does that professionally without an incredibly sound reason. Hi Allen, This is great. I used a similar tactic with JSFiddle when teaching 5th grader's procedural art in JavaScript, so that they would get exposure to all three web langs without having to muck around in a text editor or file system (they are really wiggly, hard enough to get them to think about the syntax). I personally only overcame this barrier as a programmer without a CS degree by moving to a city with other devs on modern stacks and going to meetups and having them help me through CORS errors in chrome console, which were baffling and not in any of the tutorials, etc. I wish things like which are more holistic had been around when I started. P.S., we met at EdFoo where I embarrassingly told you during a conversation about this wonderful book called Think Bayes that had this wonderful educational paradigm... oops. Hi Colin! Good to hear from you, and thanks for this comment. I couldn't disagree more. With the advent of free online courses like khan academy, resources like stackoverflow and (obviously) the world wide web and powerful search engines, not to mention error-highlighting, auto-completing, easy-to-navigate IDEs, it is massively easier, cheaper and more approachable these days than back when everyone was figuring out how to write GOTO statements on C64s and ZX Spectrums. If you are incapable of master the skills required to download and double-click an installer for something like Eclipse, you're hardly going to develop a keen interest in programming, are you? Sorry, but although it is well written, this whole article is total bunk. I click on "Software" to install software. Software sits there with no indication about what it is doing, eventually a pile of icons appear that I have no interest in. Click in the search box, and type: Ecliple (type) El (typo) Eclipse because some dude on the internet said "something like Eclipse." Eclipse Integrated Development Environment version 3.8.1-8, and reading down the oddly sorted comments most say OLD VERSION. A Jan 14, 2018 comment says "Good, but out of date -- This is sooo old, I can easily get the new version from the website." Close Software, open Chromium, search for Eclipse, find Eclipse (software) on Wikipedia, click the official website at the bottom of the article: Click the large orange DOWNLOAD button, eclipse.org/downloads click the large orange DOWNLOAD 64 BIT button, click the large DOWNLOAD button to Download from somewhere, now it downloads, open the download. Now I am in Archive Manager, I see a folder named eclipse-installer and an "Extract" button, click "Extract", choose a folder to extract it to (or just dump it in Downloads), Extraction completed successfully -- Close (default), Quit, Show the Files. I click "Show the Files" I am in my Downloads folder in a File Manager, double click "eclipse-installer", and now there are many folders and icons, there is a gear box named "eclipse-inst", maybe I'll read the readme first, the readme is a folder, double click readme folder, double click readme_eclipse.html A huge outline of links appear with the title Eclipse Project Release Notes, Release 4.7.0 (Oxygen) Last revised June 28, 2017. Ctrl+F is usually our friend, Find "install" 2. General - Startup > 1. Installation/Configuration issues that can cause Eclipse to fail start, that doesn't even make any sense grammatically. Close the readme, go up a directory in the File Manager, and double click the gear box. Eclipse Installer A Java Runtime Environment (JRE) or Java Development Kit (JDK) must be available in order to run Eclipse Installer. No Java virtual machine was found after searching the following locations: /home/user/Downloads/eclipse-installer.jre/bin/java java in your current PATH, Close You make some very good points, but I liked the article, so I'd like to present a different side to that coin. I started with computers before I even had web access. Back then, computers were built almost solely to program, to the point where you couldn't do much without 'programming'. When I had a problem, I had to read dense, horribly written manuals. When that failed, I had to ask one of two people I knew who programmed for help. And, if they couldn't help me, I needed to either abandon the project, brute force my way through the error, or try to do it in a different (hopefully better documented) way. It was very frustrating, but I used to get high from solving problems. Today, programming has done a complete 180. If you have good Google skills, you can solve just about any problem in moments. There are tons of resources out there. It should be so easy. But, there are a couple of problems. The first is that computers are now designed to hide programming from their users. And the second (which the article didn't mention) is that it's too easy to solve a problem by copy/pasting some code from Stack Overflow. You've all heard of test driven development, but now we have an era of Stack Overflow Driven Development where you copy, paste and pray. So, on one hand it's harder to get started today, but it's easier to find answers. Because it's easier to find answers, I question whether new developers really understand how to troubleshoot. Back in my day, it was easy to get started. Or, more accurately, it was almost impossible not to start. But, it was very hard to get answers. If I had to choose one, I think I'd pick the old days. Though, it's also possible/likely that I'm just 40 and nostalgic for my youth!! :) It's not even harder to get started, unless by "getting started" you literally mean number of steps required to be able to start typing a program in, which unless hardlyyou have serious ADD hardly counts. For me "getting started" means how far you will have got after your first session attempting to write some code - maybe after an hour or two. Codecademy, etc. have interactive zero-install web-based lessons. Where were those on my ZX Spectrum back in 1984? Sinclair User magazine (although hardly interactive) and if you got stuck you were on your own. Eclipse has an easy-to-use, live-edit-capable, point and click debugger? Where was that on your C64? Honestly, you lot really are sounding like old farts who can't see the extent of the rose-tint affecting your spectacles for all the nostalgia clouding your vision. :-) In common with most things to do with science and technology, we have it better than we ever have before. Unsurprisingly. You are right that It is easier to learn to code/program now than ever before, but that’s not what Allen’s post is about. It’s about “the barrier between using a computer and programming a computer is getting higher.” I don’t have a SC degree but managed to land a help desk position 2 years ago based on my history in customer service and additional work experience. I previously was in manual labor and then sales, with a BA in Fine Arts. This article summed up my stuggle well. Beside on the job experiences I studied up a lot to get the core 3 CompTIA certs, and now devote personal time to LinuxOS and servers / Ruby on Rails / SQL reporting. I am now finally staring to get through that high barrier. Yes, now that I am past that barrier from user to programmer I am can finally understand and utilize all the amazing resources out there. Nonetheless I First had to “Face the pain” to get here and still have so much to learn :). ** when I started I literally didn’t know how to launch an application unless it was on the task bar or had a desktop shortcut. Honestly, these days, when I need to teach a complete newbie about programming, I just tuck some javascript into a script tag in an index.html file. Their entire dev environment is NotePad and a web browser - two things they already have and are familiar with using. Once they start getting the hang of basic programming concepts in JS, then I'll often pop them over to some other language and a dev environment they actually have to install. I definitely agree with the sentiment about the bootstrapping problem being a lot harder when a machine doesn't boot into a commandline.. Of my fairly wide range of acquaintance, I don't know anyone who a) can write simple code with some fluency and b) can't use the command line. Just for example, the Software Carpentry training organization starts every course with a morning on how to use the command line.. It's true that there is a wall of flak that one has to penetrate to start coding these days. I teach people to code starting with OCaml. The first assignment is fearsomely long and tedious configuration work. And god help you if you have a windows machine. But I don't think this is the reason it's harder to learn coding. It's harder to learn coding because 1. too many high schools are teaching students to "code" using the foolishness promulgated in Java and 2. because there are many, many solutions or near-solutions, often wrong or half-baked, on the web. New coders have to have the stamina to resist looking at 50 different solutions to a related problem. Great article!!!! Congratulations. I would like to publish it in Spanish, through Medium. There would be a problem? Regards. No problem. Please send a link when it is ready. Gracias! The problem with cloud9 is that you have to learn both how to do sysadmin junk, AND how the process to do that sysadmin stuff on cloud9's system deviate from the instructions that you'd find on a site telling you how to do the sysadmin on a normal machine. And every cloud provider differs in their own way. If you don't have a clear, concrete picture of how a system works before hand, you will be utterly lost at worst, and hopelessly brittle at best. I have witnessed this problem firsthand. I completely agree with everything said. I'm of the same vintage as many of you, I learned to program on an Apple ][ and a TI99/4A. I have an analyst who is very good with SQL and visualization tools and has been trying to do some scripting with PHP. He's getting better but you can see what a struggle it is for him to understand the things we took for granted 35 years ago. He has a deep understanding of the data layer but I think because of that he's often at a loss when he has to do anything procedural. He gets caught in the trap of "It's just there, why can't it just do this for me" and he's trying to force the flow to work like he thinks in SQL. Back in the day you had to understand the procedural stuff to get to the data now in many ways it's reversed. Dear Allen, first of all, thank you for your books. I worked through twice and skimmed the rest. It was a pleasure to read and fun to work with. I've been thinking about your post for a while and I was thinking: Can one separate "Learning to Program" from "Understanding Computers"? My point is, that for me, this is inseparable. To do good programming, I have to -- at least on superficial level -- understand how stuff works beginning with the hardware and ending at high-level programming languages. Similar like understanding the OSI 7 layer model if you want to program for The Internets. Despite your wish for it, you can not do everything in one book. It's impossible. The easiest jump-start I see is using tools like Anaconda or or jupyter or the likes. Which come ready made. But still then, to grok this, takes time. Maybe it needs a separate book? What about: "How to think like a computer." and then reference this book in the intro of all the other books. ;-) Best, Roland The best, easiest way for an absolute beginner to learn to program is using the BBC micro:bit. Its a tiny board, you plug in the USB cable to a computer, download the IDE. You program in a subset of Python (microPython). The IDE has a button called Flash, when you click on it your program is sent to the micro:bit and executed. The language and board are very limited, but it works, works very well. Great for young kids. Many years ago, there was a development tool by the name of Turbo Pascal. It ran on MS-DOS, the IDE was integrated with the compiler, etc. I spent an entire day learning the whole system, including the Pascal language. Apple had a tool called HyperCard, very easy to use programming tool. There are a lot of tools available for programing the Raspberry Pi, which is great, but a lot of folks now only use either Android or iOS devices. Its a shame that no one seems interested in creating programing tools that are simple to use and learn, but powerful enough to create useful applications on mobile devices. Dear Allen, Here too, thanks for your books. I tried to use them at some point with my two sons. I noticed that now that they are in college, just like the interns we had, they are using Anaconda and Jupyter Notebook. For larger projects, PyCharm. I did explain the advantages of Emacs to both my sons and some of our interns, but to no avail. And now, in my latest project, I just followed them and used PyCharm, Python, and (for me) KLayout). Pure magic ... So I would start with Anaconda (in a supervised install), and then they can run idle, Jupyter Notebook, Spyder, etc. Best regards, Erwin Or... you could approach this differently. For example, Visual Studio is available and free for Windows and MacOS. C# is an excellent language and is similar to Python in syntax. .Net is a wide and robust framework and comes with Windows and if you install VS for Mac, you get it as well. That does leave out Linux do a degree - but I'm finding it hard to believe that Linux users are the ones you're talking about - and there's mono for Linux which also gives you C# and .Net. Long story short - don't blame the audience - blame the performer. Hi Allen, Your post reminded me of an idea I had for a web-based "DSP explorer" tool previously. I worked in audio software for a few years, developing audio drivers for Windows and also firmware for devices. It's actually what inspired to start programming in the first place. I find DSP programming really interesting as the results can be "heard" instantly. However, it took a while before I got to the point where I could actually manipulate the audio data to do interesting stuff. Similar to the sentiment in your blog post - I didn't really want to know about pointers in C or how malloc works - I just wanted to try DSP on audio samples! So I thought it would be great to have a super simple tool for beginners to use, to accelerate that process. As you know, any DSP processing happens in an audio callback function. That's the heart of the whole thing, so why not have a web-based tool that plays an audio track and gives users access to the output callback, so they can modify audio samples (using Javascript for example) and hear the results instantly. You could even hook up some simple sliders to allow them to modify parameters in realtime. I think this would be really useful for learners who just want to get a feel for how they can use math to transform audio, with no need to get into OS concepts beforehand. There may already be tools like this out there, although I'm talking about something really easy to use (and much friendlier than Matlab ;-) ) Just an idea anyway - would be great to hear your feedback. Thanks, Daire This couldn't be further from the truth in my opinion. I started programming 25 years ago and it took grit and a real resolve to go through a manufacture's manual to figure out how to code. Today you got a gazillion ways to learn how to code. Just to prove my point, the SDK and Environment installation on my dad 286 took hours to install and an entire submanual to get ready. That's before you had the C compiler ready on that IBM2. Did I mention that the development environment was spread over 5 diskettes? Today it is a walk in the park compared to the old school days. Amazing article. It is however, missing development containers. Docker has fundamentally solved this problem. Most problems related to setting up a development environment is around downloading packages, setting up paths/default interpreters etc., there is no need to run another kernel in a VM. Containers solve all the problems above as they have their own root FS without being as heavy weight as Virtual Machines. Very similar to what _why said. Totally agree I started with a CoCo in 1981 with BASIC and a cassette "drive". I learned QuckBASIC, then VB, all to support contract database programming. I later got a couple of degrees in computer science. The most important thing to me would be the ability to have fun right away, with possibilities developing in the newbie's head. I've seen amazing reactions with R or even Excel VBA with instruction on "How do I get it to print my name over and over?" I miss those days sometimes.. Another problem is that languages added lots of convoluted bureaucracy and bookkeeping. Hello world in old basic 10 print "hello world" Hello world in Java public class HelloWorld { public static void main(String[] args) { System.out.println("Hello, World"); } } Way to complicated for no real reason. Especially since enforcing strict OOP is not even all that useful. There is no reason why main can't just be a top level function or why System.out.println can't just be println. Interesting viewpoint, but I'm not sure I agree. In response to your three initial points;). Thank you for writing this, it's a very helpful reflection on a very difficult problem.. I was convinced by Rich Hickey's exposition of the difference between "easy" and "simple", and the importance of choosing simple instead of easy: I think the main issues with IDEs for learners, local or cloud, is feature creep. Learners need a one-click install IDE that has very few menu options and does very little. It is easy to work out how to use it because there are very few options to choose from.. These days I have a pile of domain knowledge across wireless comms, instrumentation, lasers, digital signal processing et. seq. and I usually write embedded software in C and assembler on microprocessors, microcontrollers and DSP processors....? If you are reading this and starting out in software engineering then I hope you have the very best of luck going forward. I learn programming using Atari 800XL by reading programming books and magazines. Today reading is sooo hard that most beginer programers can't do that?. So, I guess it's swings and roundabouts. You have to get into using all the tools & help to be productive but it does add to the complexity. However, if it were too easy, everybody would be a programmer, right? Interesting points, wrong conclusions. When I was in high school there wasn't any computer programming classes. Now my high school has a whole department. My nephew is taking mobile programming classes. Cmon, if I had that in high school I could've done so much more.. Also, according to this logic, I should take my parents and stick them on a Unix system with no gui so they "learn how to code". Silly. > Also, according to this logic, I should take my parents and stick them on a Unix system with no gui so they "learn how to code". Silly.. And yet, it may well be wrong. Obvious things are often wrong.. Look at the official Raspberry Pi guide "our first recommendation for adults and older kids interested in getting started with the Raspberry Pi". Depending on the edition, chapter 1 is "Meet the Raspberry Pi". Chapter 2 is "Linux System Administration".. You're article has made me realize, that for someone coming into this cold, there really is a lot to learn and be familiar with in order to do some simple things. Thank you for making me consider this.? Regards, Mike I actually believe there is an agenda to keep programming elitist, not necessarily by programmers themselves but by those at the top of the pyramid. Computers should be about making life easier, and doing all the hard tedious work for you. Eg, it used to be that I could create a Textbox and give the command Str1$ = Textbox1 + "pie" And the environment would assume, ohh, what's in the textbox must be a string. Alternatively, I could give the command Num1 = Textbox1 * 2.4 And the environment would assume oh, now he's putting numbers in there. Now, very often, these things must be declared beforehand. Another example is machine language or assembler. By default, a very fiddly thing to do, because you're down to the level of putting ones and zeros into registers and accumulators, shifting and adding binary etc. That's a necessarily complicated thing to do. Then you work on a high level language like C++ and it's just as complicated, and I'm left thinking, this is just as complicated and unforgiving as working with a low level language like assembler. I don't think it need be like this for the vast majority of high level programming situations. Old versions of Visual basic used to be easy in this way, up until Version 5 came along. Languages and environments that make it easy, tend to be ignored, two in particular I can think of are REBOL and MIT Appinventor. High level languages should be about making life easier for the programmer and I think in general they don't. OK in the example above. a compiler needs to know whether what it is adding is a string, or a number, this is determined by the use of '$' as per early versions of basic. It ought to be possible for your average shop keeper to write his own code to carry out stock control, or whatever to run his business. Of course, it would be better if software to do this were freely available, so he does not have to re invent the wheel. And there is of course, but it would be nice if programming were not so intimidating for your average shop keeper. There are of course occaisions where complex, deep coding is necessay, scientific projects etc, but for the vast majority of applications, much looser programming environments should be used, eg scratch like applications like MIT's app inventor, Visual Basic, REBOL. When I was learning to code, I had to find hard to find books on the topic, that were often quite expensive in of themselves. My Parents barely knew what a computer was or what good one was. _"What would anybody ever need a computer in their house for?"_ That was the viewpoint of _most adults_ back in the day....). Those learning to program then didn't grow up with innate computer skills. So new programmers of the day learned about computers as much as they learned about programming them. Not only that, they _built_ the tools they then used to program with. New editors. New languages. New frameworks.. Who hears about compiler bugs these days?. I'm sorry, but learning to program today is easier than ever before on every single front I can think of except one: The spark of an idea that will solve something not yet solved. LOL. I started on a VIC20. I would totally swap with anyone learning today.. The good old days were not a golden age. Your glasses are stained rosy by age. To your points: 1. Every computer sold today has several development environments orders of magnitude easier and more powerful than a C64 prompt preinstalled. 2. Yeah, "SYNTAX ERROR" was surely more helpful :) You must have had some OTHER Commodore OS. I had lock ups and mysterious errors. It is always better now. 3. Cloud? If it is confusing, skip it. I love Bluemix, but there is a world of things you can program without ever knowing about the cloud..
http://allendowney.blogspot.com/2018/02/learning-to-program-is-getting-harder.html
CC-MAIN-2019-43
refinedweb
5,144
71.24
Insert, Delete, Search, Print an int Array in Java By: Grant Braught Printer Friendly Format public class Sorts { /** * Method which sorts the array referred to * by a using the insertion sort algorithm. * * @param a the array to be sorted. */ public static void insertionSort(int[] a) { // For each element in the array of integers... // (Note: the first element will not need to be considered // because it is already in order with respect to itself!) for (int loc=1; loc = 0 && a[i] > a[i+1]) { int tmp = a[i]; a[i] = a[i+1]; a[i+1] = tmp; i--; } } } /** * Method which sorts the array referred to * by a using the selection sort algorithm. * * @param a the array to be sorted. */ public static void selectionSort(int[] a) { // For each element in the array of integers... // (Note: the final element will not need to be considered // because by the the time it would be considered the array // will already be correctly sorted.) for(int loc=0; loc = left.length) { // The left array has been use up... rest = right; restIndex = rightIndex; } else { // The right array has been used up... rest = left; restIndex = leftIndex; } // Copy the rest of whichever array (left or right) was // not used up. for (int i=restIndex; guys please help me in inserting values on View Tutorial By: Fariha at 2015-06-15 20:44:04
http://www.java-samples.com/showtutorial.php?tutorialid=1360
CC-MAIN-2018-43
refinedweb
225
72.36
Although otherwise working with pointers, all Xlib functions I've seen so far pass their Window struct by value, not by reference. E.g.: Is there any particular rationale for that? In particular, in the case of XGetClassHint, the first and third parameters are passed by reference but the second isn't. It looks like Window in those examples isn't a struct; it's just an unsigned long. That is, given: #include <X11/X.h> Window w; If I pass that through gcc -E I see: $ gcc -E wintest.c | grep Window typedef XID Window; Window w; And XID is: $ gcc -E wintest.c | grep XID typedef unsigned long XID;
https://codedump.io/share/u7n6RK7GuFrp/1/what-does-the-xlib-specification-does-not-use-pointers-to-struct-window
CC-MAIN-2018-17
refinedweb
111
75.4
Some confusion on the Pin_out layout. - researcher last edited by Hello fellows. I have a lopy4 which I have upgraded to the firmware version 1.20.0 import os os.uname() (sysname='LoPy4', nodename='LoPy4', release='1.20.0.rc3', version='v1.9.4-c5b0b1d on 2018-12-17', machine='LoPy4 with ESP32', lorawan='1.0.2', sigfox='1.0.1') I want to implement the following example code on ATOM: from machine import Pin led = Pin('G16', mode=Pin.OUT, value=1) led(0) led(1) According to, the physical PIN 16 is the eighth from top to bottom on the left side (considering that the LED is on top), BUT!!! it only works when I connect PIN 18 (the forth from bottom to top). What could be the problem? I will try to downgrade the firmware. Any suggestions will be appreciated Thanks PS: PIN 13 on ATOM reflects on PIN 16 on Lopy4. See also my post here @researcher There are four different numbering schemes used for the Pins, and so it is totally confusing - The Pxx numbers of the modules. These can be used in the code as 'P1', 'P2', ... or Pin.module.P1, Pin.module.P2, .... These numbers are shown in the Pinout of the development modules - The Gxx numbers of the expansion board (and of the WiPy1 board). These can be used in the code as 'G1', 'G2', .. or Pin.exp_board.G1, Pin.exp_board.G2 .. These numbers are shown in the Pinout of the expansion board - The GPIO numbers of the ESP32, like GPIO1, GPIO3, ... This is useful for reference to the technical manuals of the ESP32 and should therefore be kept, but you cannot use these in your python code. - The physical pin number of the ESP32 package. These numbers are shown in the Pin-out of the module closest to the module picture. That is the one you picked, but this information is not useful at all, unless you make your own PCB with an ESP32 chip. So it should better not be shown in the Pin-Out of the module. Who ever is interested in that, would anyhow refer to the ESP32 vendor's documentation (which is espressif). When using development modules, most people use the 'Pxx' numbering, even if Pin.module.Pxx is a little bit more effective. But 'Pxx' is shorter to read and type. @researcher there are different names for the pins, the pin numbering on the LoPy match P* names, not G*. G* names match the numbering on the expansion board, I think.
https://forum.pycom.io/topic/4217/some-confusion-on-the-pin_out-layout
CC-MAIN-2020-34
refinedweb
425
74.49
df = dc * 10/5 + 32; this is totally wrong the right is df = dc * 9/5 + 32; Same for farenhit... dc=(df-32)*5/9 Post your Comment converter converter application converter application Develop a converter application using event-driven programming paradigm of Java. Procedure: 1. Design a menu bar with two menus. 2. The first menu has the following menu items, Distance, Currency currency converter converter code for audio file converter code for audio file how can i convert a audio file to a another file format like mp3 pdf to voice converter pdf to voice converter is it possible to implement PDF to speech converter by extracting text from pdf and then text to speech array in currency converter converter. Maybe it is the array concept that I am not understanding but if someone can In this example, you will learn how to convert degree to Celsius, Celsius... methods according to convert degree. Here, define class named " C Temperature Converter C Temperature Converter Here we are going to illustrates you how to convert temperature from Celsius to Fahrenheit in C. You can see in the given example, we prompt Temperature Converter in SWT Temperature Converter in SWT This example illustrates you how to convert temperature from...;{ getShell().setText("Temperature Converter");   Java Temperature Converter Java Temperature Converter In this tutorial, you will learn how to convert... equivalents of all Celsius temperatures from 0 to 100 degrees in 4 degree... degrees in 10 degree intervals. Example import java.util.*; Download Download JDK How to Download the latest Version of JDK? Download OST to PST converter from: Java question Java question Write a program that converts a (C to F Converter from 0 - 20) and writes the output to a file instead of the screen. Open the file in Notepad to confirm the output exe set up software - Java Server Faces Questions exe set up software hi i want a exe file creator software. i am doing a project in swings. now i need exe converter. pls send JSF Interview Questions . These converter classes are listed below : BigDecimalConverter... Converter Custom data converter is useful in in converting field data...; NumberConverter : Converter for dealing How To get DgroupId while using maven. converter ,i can't find the right "DgroupId", also i can't get how to make...=org.apache.maven.archetypes -DroupId=com.xebia.speech-to-text-converter -DartifactId=speech-to-text-converter" then it shows error "Failed to execute goal Java programming question - Java Beginners the following code: import java.io.*; import java.util.*; class Converter...(); } } public static void main(String[] args){ Converter convert=new Converter How to Create Button With TableView in iphone on the button can u please send me the code i want to create converter app. thanks wrong calucation vishalpatel May 16, 2012 at 11:56 AM df = dc * 10/5 + 32; this is totally wrong the right is df = dc * 9/5 + 32; Same for farenhit... dc=(df-32)*5/9 Post your Comment
http://www.roseindia.net/discussion/18921-Degree-Converter.html
CC-MAIN-2015-48
refinedweb
498
53.51
[ ] Dan Jemiolo updated MUSE-19: ---------------------------- Attachment: test-projects.zip All projects included in the file are Eclipse projects - you can import them into the IDE and build immediately. You should import into Eclipse WTP if you want to be able to run/test the applications on Tomcat/Axis (or your app server of choice). > Test projects for capabilities submitted thus far (re: IBM's WSDM 1.1) > ---------------------------------------------------------------------- > > Key: MUSE-19 > URL: > Project: Muse > Type: New Feature > Environment: These sample projects are Axis-based, so they run as WAR files. If you import them into Eclipse WTP, you can run/test them on Apache Tomcat w/o modification. Testing on another server type only requires that you create that server definition in Eclipse WTP. > Reporter: Dan Jemiolo > Attachments: test-projects.zip > > The test-simple project shows how the core resource and capability concepts are used to define resource interfaces. This is generic to WSRF/WSDM/etc, which we feel is important looking ahead to the reconciliation specs. > The test-wsrf project shows all of the WSRF capabilities, but is not a comprehensive unit test. > The test-wsdm-wef project is a unit test of the WEF API that makes sure that our WEF output is compliant and that we can re-consume it. --
http://mail-archives.us.apache.org/mod_mbox/ws-muse-dev/200605.mbox/%3C30520182.1147100301693.JavaMail.jira@brutus%3E
CC-MAIN-2019-43
refinedweb
212
55.24
I'm trying to make this program to add 3 user inputed populations together. Here is a summary of what it is supposed to do and the code: I know this isn't right but I need a little guideline to get me on the right track to make this right.I know this isn't right but I need a little guideline to get me on the right track to make this right.Code:/* Create the programs listed below per the specifications. You may not create any additional member function. Where listed, you must use the declaration provided. 1. Write a class called AssemblyLine. This class will be used to track the production of widgets. Each instance of the assembly line will represent different assembly lines that are owned across the US. Create three instance of the AssemblyLine class to represent the factories in Rochester, Fargo, and Nelson. Create a fourth instance that will sum up all the other instances. Create the following member functions: a. Two constructors. b. An input member function c. An output member function d. A function that will take other AssemblyLine instances and add them together. You must use the following declaration: void Addit(AssemblyLine AL1) e. One of the member functions must be externally defined.*/ #include <iostream> using namespace std; /////////////////////////////////////////////////////////////////////////////////// class AssemblyLine { private: int widgets, total; public: AssemblyLine() : widgets(0) { } AssemblyLine() : total(0) { } void Input() { cout << "Enter number of widgets: "; cin >> widgets; } void Output() { cout << "You entered: " << widgets; } void Addit(AssemblyLine AL1); }; //////////////////////////////////////////////////////////////////////////////////// void Addit(AssemblyLine AL1) { int pop1, pop1, pop3; cout << "Total population is: " << AL1 = pop1 + pop2 + pop3; } //////////////////////////////////////////////////////////////////////////////////// int main() { AssemblyLine Rochester, Fargo, Nelson, Total; Rochester.Input(); Rochester.Output(); Fargo.Input(); Fargo.Output(); Nelson.Input(); Nelson.Output(); Addit(Total); int x; cin >> x; return 0; }
http://cboard.cprogramming.com/cplusplus-programming/85694-using-classes.html
CC-MAIN-2015-32
refinedweb
292
56.76
State Machina steyt *mah*-kuh-nuh 🤖 A super simple, zero dependency, tiny state machine for Flutter and other Dart apps. Basic Usage import 'package:state_machina/state_machina.dart'; // Define your states and events in a couple of handy enums for easy reference: enum States { enteringEmail, sendingEmail, success } enum Events { editEmail, sendEmail, sentEmail, failedToSendEmail } // Create a state machine like so: var state = StateMachine({ States.editingEmail: {Events.sendEmail: States.sendingEmail}, States.sendingEmail: { Events.sentEmail: States.success, Events.failedToSendEmail: States.error, }, States.success: {}, // This is a terminal state--no other states can be entered once we get here ☠️ States.error: {Events.editEmail: States.editingEmail} }); Now you can read the current state: // in some Flutter widget if (state.current == States.success) return SuccessMessage() And send events: // somewhere in your Flutter app... RaisedButton( onPressed: () async { try { setState(() { state.send(Events.sendEmail); }); await _sendEmail(); setState(() { state.send(Events.sentEmail); }); } catch (e) { state.send(Events.failedToSendEmail); } }, child: const Text('Submit') ) - You may pass an optional initialState as the second argument. If not, the initial state defaults to the key of the first entry in the state map. - Runtime exceptions will be thrown if you pass in an invalid state map (unreachable states, next states that don't exist) or an invalid initial state, or if you send an event that doesn't exist in the state map. - The type of individual states can be anything: String, int, object. You simply have to ensure that your state map is valid (you'll get helpful error messages if it isn't). - You could also store your states and events in classes, if desired: class States { static final String enteringEmail = 'enteringEmail'; static final String sendingEmail = 'sendingEmail'; static final String success = 'success'; } And for that matter, nothing is stopping you from passing in literal values: var state = StateMachine({ 'editingEmail': {'sendEmail': 'sendingEmail'}, 'sendingEmail': { 'sentEmail': 'success', 'failedToSendEmail': 'error', }, 'success': {}, 'error': {'editEmail': 'editingEmail'} }); Typically, strings or enums are the most useful types for the primitive keys and values in your state map. Listeners Listeners can be registered and will be called every time send is called, after send resolves the event and updates the current state. Listeners receive the current state, previous state, and event that triggered the listener: state.addListener((current, previous, event) { // Do some cool stuff here })
https://pub.dev/documentation/state_machina/latest/
CC-MAIN-2021-39
refinedweb
374
52.39
Welcome to the first installment of "Twisted Web in 60 seconds". The goal of this installment is to show you how to serve static content from a filesystem using some APIs from Twisted Web (while Twisted also includes some command line tools, I will not be discussing those here) and have you understand it in 60 seconds or less (if you don't already know Python, you might want to stop here). Where possibly useful, I'll include links to the Twisted documentation, but consider these as tips for further exploration, not necessary prerequisites for understanding the example. So, let's dive in. First, we need to import some things: Site, a factory which glues a listening server port to the HTTP protocol implementation: from twisted.web.server import Site File, a resource which glues the HTTP protocol implementation to the filesystem: from twisted.web.static import File The reactor, which drives the whole process, actually accepting TCP connections and moving bytes into and out of them: from twisted.internet import reactor Next, we create an instance of the File resource pointed at the directory to serve: resource = File("/tmp") Then we create an instance of the Site factory with that resource: factory = Site(resource) Now we glue that factory to a TCP port: reactor.listenTCP(8888, factory) Finally, we start the reactor so it can make the program work: reactor.run() And that's it. Here's the complete program without annoying explanations: from twisted.web.server import Site from twisted.web.static import File from twisted.internet import reactor resource = File('/tmp') factory = Site(resource) reactor.listenTCP(8888, factory) reactor.run() The Twisted site has more web examples, as well as some longer form style documentation. Bonus example! For those times when you don't actually want to write a new program, the above implemented functionality is one of the things which the command line twistd tool can do. In this case, the command twistd -n web --path /tmp will accomplish the same thing as the above server. Keep an eye out of the next installment, in which I'll describe simple dynamic resources. This?
https://as.ynchrono.us/2009/09/twisted-web-in-60-seconds-serve-static_16.html
CC-MAIN-2020-16
refinedweb
357
53.81
On 03 Sep 2003 07:12:07 +0200, martin at v.loewis.de (Martin v. =?iso-8859-15?q?L=F6wis?=) wrote: >rimbalaya at yahoo.com (Rim) writes: > >> How can I control the location of the bytecode files? > >You currently can't. See PEP 304, though, at > > > >Comments on the PEP are encouraged: If you won't comment now on >whether the proposed change would solve your problem, it might be that >you find something useless got implemented later. > Personally, I am a minimalist when it comes to environment variables. IMO that name space has the same problems as any global namespace, and since a single default set of user's environment variables tends to presented to most programs s/he runs from a command window, the name space usage tends towards a hodgepodge and/or wrapping apps in setup scripts (which can work fine, but I still don't like it as a standard way to go). IMO the os.environ name space should be treated like a root directory name space, and not have application data per se in it (with reluctant exceptions where it is used wholesale as in CGI). Rather, IMO, and only if necessary in the first place, it should be used to specify location or search path info for config data, not *be* config data. And a user-set environment variable should not be able to cause a bypass of root/admin-defined config info where the distinction is necessary. (The PYTHONBYTECODEBASE variable does refer to a directory, but now that makes two variables, counting PYTHONPATH, and where will it end?) Provision for admin/root level config data separate from user preference and session state type config data should be made as necessary and desirable, but secondary/user config data search should be controllable by the primary/root/admin config data (which e.g. could say to ignore or use user-controlled/attempted-control environment variables etc.). This would seem to imply a standard place to look for root/admin level config data, not directed by env variable. E.g., a read-only file in the same directory as the python interpreter executable, with, say, .conf or .cfg appended to the name. *That* file can then specify how/whether to look for user config stuff etc., or specify password access to some features, etc. etc., if we wind up doing restricted exec stuff. A user config file overriding any other *user* config info could be specified by command line option, e.g., -cfg myConfigFile.conf, and whether this or other command line options were allowed could be (and should be able to be when control is necessary) specified in the root/admin config file. ... just my .02USD Regards, Bengt Richter
https://mail.python.org/pipermail/python-list/2003-September/195163.html
CC-MAIN-2018-09
refinedweb
456
62.17
Tcl_DetachPids man page Tcl_DetachPids, Tcl_ReapDetachedProcs, Tcl_WaitPid — manage child processes in background Synopsis #include <tcl.h> Tcl_DetachPids(numPids, pidPtr) Tcl_ReapDetachedProcs() Tcl_Pid Tcl_WaitPid(pid, statusPtr, options) Arguments - int numPids (in) Number of process ids contained in the array pointed to by pidPtr. - int *pidPtr (in) Address of array containing numPids process ids. - Tcl_Pid pid (in) The id of the process (pipe) to wait for. - int *statusPtr (out) The result of waiting on a process (pipe). Either 0 or ECHILD. - int options (in) The options controlling the wait. WNOHANG specifies not to wait when checking the process. Description. Tcl_WaitPid is a thin wrapper around the facilities provided by the operating system to wait on the end of a spawned process and to check a whether spawned process is still running. It is used by Tcl_ReapDetachedProcs and the channel system to portably access the operating system. Keywords background, child, detach, process, wait Referenced By Tcl_ReapDetachedProcs(3) and Tcl_WaitPid(3) are aliases of Tcl_DetachPids(3).
https://www.mankier.com/3/Tcl_DetachPids
CC-MAIN-2017-26
refinedweb
160
57.27
We are trying to get an F5 BIG-IP LTM iRule working properly with SharePoint 2007 in an SSL termination role. This architecture offloads all of the SSL processing to the F5 and the F5 forwards interactive requests/responses to the SharePoint front end servers via HTTP only (over a secure network). For the purposes of this discussion, iRules are parsed by a Tcl interpretation engine on the F5 Networks BIG-IP device. As such, the F5 does two things to traffic passing through it: We've got part 1 working fine. The main problem with part 2 is that in the response rewrite because of XML namespaces and other similar issues, not ALL matches for "http:" can be changed to "https:". Some have to remain "http:". Additionally, some of the "http:" URLs are difficult in that they live in SharePoint-generated JavaScript and their slashes (i.e. "/") are actually represented in the HTML by the UNICODE 6-character string, "\u002f". For example, in the case of these tricky ones, the literal string in the outgoing HTML is: http:\u002f\u002fservername.company.com\u002f And should be changed to: https:\u002f\u002fservername.company.com\u002f Currently we can't even figure out how to get a match in a search/replace expression on these UNICODE sequence string literals. It seems that no matter how we slice it, the Tcl interpreter is interpreting the "\u002f" string into the "/" translation before it does anything else. We've tried various combinations of Tcl escaping methods we know about (mainly double-quotes and using an extra "\" to escape the "\" in the UNICODE string) but are looking for more methods, preferably ones that work. Does anyone have any ideas or any pointers to where we can effectively self-educate about this? Thanks very much in advance. For future reference since this has come up recently as well... Sharepoint Alternate Access Mappings is the cleanest solution. For a BIG-IP iRule you can try something like this. Note that TCL needs to interpret the unicode characters before it can match them. when HTTP_REQUEST { # Disable the stream filter by default STREAM::disable } when HTTP_RESPONSE { if {[HTTP::header value Content-Type] contains "text"}{ set find_str "http:\u002f\u002fservername.company.com\u002f" set replace_str "https:\u002f\u002fservername.company.com\u002f" STREAM::expression "@$find_str@$replace_str@" STREAM::enable } } Aaron I know nothing about sharepoint, but at least from a pure tcl perspective, the following shows how to match your literal string: tclsh> set string {http:\u002f\u002fservername.company.com\u002f} http:\u002f\u002fservername.company.com\u002f tclsh> regexp {http:\\u002f} $string 1 The important thing is to make sure that the backslash in \u002f isn't interpreted by the tcl parser (by enclosing the pattern in {}) and to also make sure the backslash isn't interpreted by the regular expression parser (by escaping it with a backslash). @JD, I probably should have added this: The SharePoint Web Application in question is currently hosted on a physical server and SharePoint Farm that is integrated (on other Web Applications) with SQL Server 2005 Reporting Services (SSRS) and PerformancePoint 2007 (PPS). Both products DO NOT support Alternate Access Mappings (AAM). While it's true that the Web Application we want to do the above with is not integrated with SSRS or PPS, our architect and our technical leads would prefer to avoid using AAM at all in that environment if at all possible. So while in principle, I agree with your answer, in this case, I am asking for special exception and am asking about Tcl and UNICODE escaping in particular because I and our other technical strategists believe that this is the right answer in this particular situation. What you are trying to do is not recommended with SharePoint. The recommended approach is to use Alternate Access Maps inside of SharePoint. You will create a public URL of https:// and have an internal name of http://. This will cause all URLs on your page to be created using https:// You may want to ask this on ServerFault if you do not like the answer. By posting your answer, you agree to the privacy policy and terms of service. asked 6 years ago viewed 3520 times active 2 years ago
http://serverfault.com/questions/14261/f5-networks-irule-tcl-escaping-unicode-6-character-escape-sequences-so-they-ar?answertab=active
CC-MAIN-2016-18
refinedweb
706
51.38
/* memrchr -- find the last occurrence of a byte in a memory block Copyright (C) 1991, 1993, 1996, 1997, 1999, 2000, 2003, 2004, 2005, 2006, 2007 defined _LIBC # include <memcopy.h> #else # include <config.h> # define reg_char char #endif #include <string.h> #include <limits.h> #undef __memrchr #undef memrchr #ifndef weak_alias # define __memrchr memrchr #endif /* Search no more than N bytes of S for C. */ void * __memrchr (void const *s, int c_in, size_t n) { const unsigned char *char_ptr; const unsigned long int *longword_ptr; unsigned long int longword, magic_bits, charmask; unsigned reg_char c; int i; c = (unsigned char) c_in; /* Handle the last few characters by reading one character at a time. Do this until CHAR_PTR is aligned on a longword boundary. */ for (char_ptr = (const unsigned char *) s + n; n > 0 && (size_t) char_ptr % sizeof longword != 0; --n)); /*; if (8 < sizeof longword) for (i = sizeof longword - 1; 8 <= i; i--) if (cp[i] == c) return (void *) &cp[i]; if (7 < sizeof longword && cp[7] == c) return (void *) &cp[7]; if (6 < sizeof longword && cp[6] == c) return (void *) &cp[6]; if (5 < sizeof longword && cp[5] == c) return (void *) &cp[5]; if (4 < sizeof longword && cp[4] == c) return (void *) &cp[4]; if (cp[3] == c) return (void *) &cp[3]; if (cp[2] == c) return (void *) &cp[2]; if (cp[1] == c) return (void *) &cp[1]; if (cp[0] == c) return (void *) cp; } n -= sizeof longword; } char_ptr = (const unsigned char *) longword_ptr; while (n-- > 0) { if (*--char_ptr == c) return (void *) char_ptr; } return 0; } #ifdef weak_alias weak_alias (__memrchr, memrchr) #endif
http://opensource.apple.com/source/gnutar/gnutar-450/gnutar/lib/memrchr.c
CC-MAIN-2014-42
refinedweb
254
62.11
Post your updated code so I don't have to keep going back Also, is a great way to learn some of what you are doing (moving sprites). It shows basic code to do some pretty cool stuff. Post your updated code so I don't have to keep going back Also, is a great way to learn some of what you are doing (moving sprites). It shows basic code to do some pretty cool stuff. Edited by Akill10: n/a The MouseListener is there for later. import javax.swing.*; import java.awt.*; import java.awt.event.*; public class main extends JFrame implements MouseListener{ class keyl implements KeyListener{ @Override public void keyPressed(KeyEvent e) { s.RestoreScreen(); } @Override public void keyReleased(KeyEvent e) { s.RestoreScreen(); } @Override public void keyTyped(KeyEvent e) { s.RestoreScreen(); } } public static void main(String[] args) { DisplayMode dm = new DisplayMode(800,600,16,DisplayMode.REFRESH_RATE_UNKNOWN); main m = new main(); m.run(dm); }/* while(right == false){x--;}while(right == true){x++;} while(down == false){y--;}while(down == true){y++;} if(x <= 0){right = true;}if(x >= m.getWidth()){right = false;} if(y <= 0){down = true;}if(y >= m.getWidth()); */ private title s; private Image bg; private Image ball; private static int x = 0; private static int y = 0; private static boolean right = false; private static boolean down = false; private boolean loaded = false; Timer moveball = new Timer(10,new ActionListener(){ public void actionPerformed(ActionEvent e){ if(right == false){x--;}if(right == true){x++;} if(down == false){y--;}if(down == true){y++;} if(x <= 0){right = true;}if(x >= 800){right = false;} if(y <= 0){down = true;}if(y >= 600){down = false;} repaint(); } }); public void paint(Graphics g){ g.clearRect(0,0,getWidth(),getHeight());(ball,x,y,null); } g.drawString("Click To Start or Press Any Key To Exit",180,500); } public void run(DisplayMode dm){ setBackground(Color.BLACK); setForeground(Color.YELLOW); setFont(new Font("Comic Sans MS",Font.PLAIN,24)); loaded = false; loadpics(); addMouseListener(this); addKeyListener(new keyl()); title s = new title(); moveball.start(); try{ s.setFullScreen(dm,this); } catch(Exception ex){} } public void loadpics(){ bg = new ImageIcon("C:\\Documents and Settings\\Lam\\My Documents\\My Pictures\\fun.png").getImage(); ball = new ImageIcon("C:\\Program Files\\Ds Game Maker\\Resources\\Sprites\\Ball.png").getImage(); loaded = true; repaint(); } @Override public void mouseClicked(MouseEvent arg0) { s.RestoreScreen(); } @Override public void mouseEntered(MouseEvent arg0) { } @Override public void mouseExited(MouseEvent arg0) { } @Override public void mousePressed(MouseEvent arg0) { } @Override public void mouseReleased(MouseEvent arg0) { } } Ok, so what KeyEvent should it restoreScreen() on? You haven't put anything. Also, again, why do you have a separate class for key listener when you could implement it into the main class? Edited by Akill10: n/a Hmm? Looks like i'll have to read a KeyListener tutorial... I had already implemented the MouseListener so I don't think I can implement another thing. Hmm? Looks like i'll have to read a KeyListener tutorial... I had already implemented the MouseListener so I don't think I can implement another thing. To set a Key to do something, you need to 'listen' to what key is being pressed/released, so: public void keyPressed(KeyEvent e) { int key = e.getKeyCode();//method that will return which key is active if(key==KeyEvent.VK_ESCAPE){ // if the getKeyCode() returns the key event ESCAPE //do something } } Yep, you can implement multiple interfaces! Just separate with a , e.g. implements ActionListener,KeyListener Also, your ActionPerformed method is making me cringe...sorry ha! To make it look less confusing, I am going t give you a simpler way. //First, declare 2 more variables with your x and y, int velX =2,velY=2, these will control the "speed"; //then in your actionperformed method public void actionPerformed(ActionEvent e) { if(x<0 || x>800){ velX = -velX; } else if(y<0 || y>600){ velY = -velY; } x+=velX; y+=velY; repaint(); } This should bounce your ball about your screen, I assume that was what you were doing before. EDIT: I dont know if this is my bad eyesight, but have you set the class focusable? You need to do this in order to use listeners setFocusable(true); Edited by Akill10: n/a . I haven't time to check out the key listener problem right now, but provided all the code is there, the most common reason for it not working is that the component with the key listener is not the component with keyboard focus. Wow, I hadn't realized there was that much things about listeners... Do you have to make class focusable? I don't think I've ever made a class focusable, not on purpose anyway and my ActionListener works always. Anyway, thanks for everything again guys. Pressing Esc doesn't do anything even though I have: public void keyPressed(KeyEvent e) { int rsKey = e.getKeyCode(); if(rsKey == KeyEvent.VK_ESCAPE){ s.RestoreScreen(); } } I've added the KeyListener. Also I get lot's of errors or whatnot in the console box: Exception in thread "AWT-EventQueue-0" java.lang.NullPointerException at main.keyPressed(main.java:9) at java.awt.Component.processKeyEvent(Unknown Source) at java.awt.Component.processEvent(Unknown Source) at java.awt.Container.processEvent no idea what's wrong with my KeyPressed method... I cut and pasted the KeyEvent code (int rsKey = e.getKeyCode();if(rskey ==) ect. to the KeyTyped method and this time, no errors in the console box yet it still doesn't 'RestoreScreen'. You have a null pointer exception on line 9 of main (see - I can read too!). This means either an uninitialised variable, or a method call that has returned null. You can look at your source code for that line and it will probably be obvious what can or can't be null. . Good to know, thanks. Right; so if I'm right, s.RestoreScreen() returns a null? And that means there's something wrong in my title class... But I swear it was working when I used thread.sleep(milliseconds). Do you guys want the code from title.java? ...
https://www.daniweb.com/programming/software-development/threads/329203/why-won-t-it-happen/2
CC-MAIN-2017-34
refinedweb
1,000
66.64
Mea Culpa After apologizing for certain misdeeds and offering a glimpse of the future, Jack asks whether an RTOS is even worth the trouble. I begin this column with an apology. I did something bad, and I'm sorry. If you were with me last month, you read about the disaster of my recent move and the fact that the movers had to wrest the keyboard from my hot, sweaty fingers to get the computer in its box. I suppose that's the reason they bunged it up during the move, trashing my RAM and perhaps my hard drives. In that fateful column, I had promised to upload the “minimizer lite” version of the algorithm I've been working on for so long. I had checked out the algorithm in Mathcad, and verified that it worked beautifully, but I had not yet finished turning the changes into code. I figured, how hard can it be? The important deadline is to get the prose to the editors on time. Coding can take longer because I have more time in which to do it. There's typically a two-month delay between the time I send the column to the editors at Embedded Systems Programming , and the time you get to see it. I figured it was a slam dunk that I'd get the code complete and uploaded before you saw the column saying that it was uploaded. In short, I floated a rubber check. And it bounced. Who would have guessed that it would take three months for me to get back online again? Usually it's a time measured in days. And who would have guessed that when I did get online, the data, including my partially finished code, would be gone to that bit bucket in the sky? I cheated, and I got caught. Sorry. Trust me, I've already been read the riot act by my editors, and it's a lesson I learned well, the hard way. Coming attractions The name of the column is “Programmer's Toolbox,” and my stated goal has been to provide you with tools and techniques to aid in the development of embedded systems. It occurred to me recently that you might be wondering what sorts of embedded systems I work on, that I need things like function minimizers, root crackers, and so on. Such routines seem more suited for huge math problem solvers than embedded systems. Your embedded systems probably don't look much like mine. Even so, trust me on this: everything I've told you about came from the development of an embedded system. In fact, my interest in the minimizer came directly from a production, commercial product. That same system also used a real-time least squares fit. The embedded systems that I work on tend to use a lot of math. During the time I was occupied by the minimizer, many other topics attracted my attention, and were put on the back burner. Now that the minimizer is coming together, I've begun turning my attention to them. I thought you'd like to know what's coming up in the near future. Here's a partial list: - The Nelder-Mead, or Simplex method (yet another minimizer!). This workhorse is probably the most-used minimizer for multivariate systems, and is definitely worth a look. But I'm putting it off for a time, mainly because I'm worn out with minimizers for the moment. - Spline functions. Everybody and his brother has a spline function algorithm in his toolbox, and most of them work just fine. But I needed one that gave the derivatives of the function, as well as the fitted function. The canned routines don't do that. So I had to derive my own functions. In the process, I learned that the derivation isn't at all easy. The spline algorithm seems to be another one of those things, like Brent's method, that everyone uses but no one (or, to be more precise, few people) has bothered to derive for themselves. - Optimal curve fits. This is not the same thing as a spline fit, because we don't necessarily require that the fitted function pass through the data points. - The least squares fit. I covered this one once before, years ago, but it's been a long time. It's due for a revisit. - Chebyshev polynomials, which are used to improve function approximations. - Laplace transforms: second nature to EEs, a mystery to everyone else, and gateway to z-transforms. - The Kalman filter. Yes, I know, others have done this in Embedded Systems Programming (“Kalman Filtering,” June 2001, p. 72). As you might expect, my approach will be different. - Nonlinear root finder. I've talked about this one several times during the effort on a minimizer, and I just happen to have the world's best algorithm in my back pocket. It's only fair that I share it with you. Is it possible that such things are in embedded systems? Yes and no. In once sense, they must be, because I'm using these techniques. In another, though, it's worthwhile to mention that not all the software for an embedded system actually ends up in the system. When we're building an embedded system, we also need software to test it. Most days, that means a simulation that exercises the code as it would be run in the real world. Someday perhaps, I'll write an article about how to test real-time software. Or you can buy Jim Ledin's new book, Simulation Engineering (CMP Books, 2001), which covers everything you ever wanted to know about simulations. Suffice it to say here that often a very sophisticated simulation is needed to be sure that algorithms always work. If you'd like to see me cover any other topics, don't hesitate to contact me. And now for something completely different But I don't want to talk about any of those things this month. Instead, I'm in the mood for a change of pace. I want to talk about real time operating systems (RTOSes). I want to because I'm seeing trends in the industry that I'm not sure I'm ready for. Before we get too far into the notion of an RTOS, we're first going to have to define what we mean by the terms real time and embedded system . Make no mistake: I'm talking real-world definitions here, not pedagogical, theoretical definitions. In general, I tend to write about the things I've actually worked on in the past, and this column will be no different. I used to think I knew what real-time embedded systems were. They were things like numerical process controllers, chemical plant controllers, and flight computers for things that fly or swim and blow stuff up. Now that we have computers in video recorders, telephones, and those glorified calculators called personal digital assistants (PDAs), the definition is beginning to get a little fuzzy. I used to teach a course in software engineering for my company, and the illustrator we used was a wonderful cartoonist. He drew me a picture of an engineer, running along beside a tank with a Teletype ASR-33. The caption was, “Embedded systems are hard to debug.” I mean that kind of embedded system. Likewise, it's often been said that any system can be considered real time if your definition is loose enough. Many Unix aficionados tend to say that Unix and its clone, Linux, are real-time systems. But surely, they can only be defined so loosely. To paraphrase Bill Clinton, it all depends on what “real” is. There are two kinds of people in the world: those who categorize things into two kinds, and those who don't. Most engineers categorize real-time systems into the “hard” and “soft” varieties. A soft real-time system is one in which tasks need to be done promptly, but not at a precise time or on a precise schedule. When some folks talk about a system being “real time,” they mean that it must complete its work in a time that's short compared to whatever the user considers a reasonable wait. By that definition, any OS is an RTOS. Except perhaps the old batch processing OSes, which still exist in some business mainframes. Airline reservation and point-of-sale systems are “real time” in the sense that we'd like a response the same day. But that hardly qualifies as a real-time system in the usual sense, and most of us don't accept such a definition. Soft real-time embedded systems might include printers, cellphones, PDAs, and digital video recorders. Hard real-time systems must perform their duties either at regular, specified time intervals, or respond within time intervals that are tightly controlled, both in response time and in jitter -the variation in response from step to step. Such systems tend to control things and have digital filters inside. If the system doesn't do things on schedule, data gets missed and the filters don't work properly. Since most of the systems I work on end up flying, or otherwise calculating based on streaming data, they tend to be very, very hard. I'll leave it to the academics to find definitions that are completely unambiguous. As one Supreme Court justice said concerning pornography, “I may not be able to define it, but I know it when I see it.” Why an RTOS? The first question we should ask ourselves is do we even need an RTOS? The first microcomputers didn't. When Intel first introduced their 8080 microprocessor (a real piece of work, by the way, which I've praised before as a breakthrough of biblical proportions), they wrote up one of those ads masquerading as an application note, showing an 8080 controlling a traffic light. Four pressure pads, six sets of light bulbs. That was the extent of their imagination. You don't need much of an RTOS for a system to run a traffic light. (Good as Intel was in their design of the chip, I don't think they ever, in their wildest dreams, envisioned personal computers, CP/M and 64KB Altairs, Osbornes, Morrows and Kaypros. It was the hobbyists that saw the potential of the 8080 and the later chips like Z80, 6800, and 6502. Guys like Gary Kildall, Bob Albrecht, Dennis Allison, Carl Helmers, and the two Steves: Jobs and Wozniak.) A few years ago, I attended one of Robert Ward's conference presentations on the taxonomy of real-time, embedded systems. I won't begin to try to fill Robert's shoes, but the “RTOS” for that traffic light controller was right there at the beginning of his taxonomy. In pseudocode, it is: do something do it again Or, if you prefer something more structured: loop forever do somethingend loop That's it. No multitasking; no interrupt handler; no TCP/IP stack. No priority queues. No classes. No persistent objects, except in the sense that everything was persistent, until you turned the power off. The first program I wrote for the 8080, back in 1976, worked this way. It was a controller for a satellite-tracking antenna and it worked by performing a Kalman filter on the tracking data. If I'd known enough in those days to write pseudocode at all, it would have been: loop forever take a measurement update the state point the antennaend loop Maybe I should have thrown in some initialization, but really, that was about it. Timing was not an issue. You took a measurement as soon as you could, which meant as soon as you finished processing the last one. What else did the CPU have to do? About the same time, I also wrote a program for an even earlier chip, the 4-bit 4040, to control a cold forge machine. It looked like this: while !E-stop loop drop a blank into place move the anvil to its position whang it with the forge retract them both (finished part falls out)end loop Oh, there were a bunch of safety related tests, like checking the Emergency-stop button before every operation, and also readings of the control buttons and sensors, to tell if we were doing the right thing. But basically, that was the program. Again, we had no need for even a timer, because as soon as the CPU was finished with one loop, we wanted it to do it all again. You'd be surprised how often this simple structure works. Back before the Internet, there were bulletin board systems (BBSes) and CompuServe. Ward Christensen developed both a program and a protocol for transferring data over phone lines, called Modem7. It worked pretty doggone well, better in some ways than AOL. (Think of it: no flashy pop-ups, no self-regenerating porn ads, no computer viruses attached to e-mail.) The problem with Modem7 was the classic one of chicken vs. egg. Though not at all large by modern standards, Modem7 was not the kind of thing you wanted to type into your assembler. You needed to download it, and before you could do that, you needed Modem7. Catch-22. To get around that little problem, they came up with a bootstrap program called Boot7, whose only function was to download Modem7. I had a listing for Boot7, but somehow I made a mistake typing it in, so I couldn't get online. To at least get me talking, I wrote my own modem program. Here it is, in all its glory: char key = ' ';while key != ^Q loop if (char in serial buffer) c = getchar write c to CRT endif if (key == inkey) send key to modem endifend loop Incidentally, thanks to heavy use of well-done BIOS functions by Teletek, the manufacturer, that little gem used exactly 19 bytes of Z80 assembler language. Not bad, eh? No megabyte RAMs needed, thank you very much. Later, I added a rudimentary-not even circular-text buffer. Acting on a control key (probably ^D), I'd start capturing downloaded text into the buffer (being careful not to overflow it-a caution modern programmers seem to sometimes forget). I'd run the program under DDT, the CP/M debugger, and after I was done, I'd write the buffer to a file. The program kept a character count to tell me how many sectors to write. Not elegant, I admit, but it let me download Boot7, with which I downloaded Modem7, with which, well, you get the picture. Is there a message in all these war stories? You bet there is: “Don't use an RTOS when you don't need one.” On being on time In the aerospace industry, most gadgets are just a smidge more complicated than the simple, loop-as-fast-as-you-can systems I've described. The main issue that characterizes control systems in the aerospace world is time. Typically, an aerospace system reads things like gyro and accelerometer outputs, and uses them as the source for steering commands. As I hinted earlier, such systems tend to use digital filters of a fairly high order. What characterizes such filters is that the measurements must be made as on-time as one can manage. The filters depend on measurements taken at regularly spaced intervals. To accommodate that requirement, we added a real-time clock (RTC), nothing more than an interrupt that fired on a regular schedule. In pidgin Ada, the structure looks something like this: loop accept RTC read sensors update filters correct steeringend loop In this Ada-ese, the program may seem as though it's in a continuous loop. I mean, that's what loop…end loop usually means, right? But appearances can be deceiving. See that accept RTC in the first line? That identifies the code as an interrupt handler. In Ada task-speak, the handler task is blocked until the interrupt is received. When it's received, the handler runs to completion, then sits and does nothing until the next clock tick. There is no real loop there, only an interrupt handler that runs each time it's pulsed by the interrupt. If the CPU is idle until an interrupt comes in from the RTC, what does it do in the meantime, twiddle its thumbs? No, it executes the background task, whatever that is. The pseudocode for that task often reads: loop perform self-test report errorsend loop This one is a true loop, in that it runs as fast as it can. In short, it uses all the CPU resources not absorbed by the interrupt handler. In addition to the self-test, we often give it slow tasks to run, and these can, and often do, include a Kalman filter. Kalman filters tend to be slow and require floating-point arithmetic. In the old days, floating point usually meant software floating point, so it was slow. Unlike other digital filters, the Kalman filter doesn't much care about timing; it takes the time of the measurements into account. So we let the Kalman filter take as long as it needed, chugging along in the background whenever the fast stuff wasn't running. For the record, the code for that multivariate function minimizer ran in the background. It took as long as it took, which wasn't very long on a fast 486 with its hardware math processor. Multitasking I chuckle every time I see Unix/Linux described as a multitasking OS. Ditto for Windows. I suppose they are, in the lingo of computer science, but they certainly aren't what we embedded folks thought of as multitasking. To an aerospace software engineer, multiple tasks typically mean cyclic tasks, each running at different, but fixed, rates. This approach was taken simply for expediency and out of an acceptance of the facts of life; our CPUs weren't fast enough to do everything at the fastest interrupt rate. So we ran only what needed to be done fast at the interrupt rate. Everything else was counted down from that fastest rate. In a typical multitasking system, we had cyclic tasks at rates like 1,000Hz, 500Hz, 100Hz, 20Hz, 10Hz, and 1Hz. The RTC ran, of course, at the 1,000Hz rate. Its pseudocode might be: loop accept RTC do 1000Hz if divide_by(2) do 500Hz endifend loop The 500Hz task, of course, would divide by five to call the 100Hz task, and so on. Simple. Whenever you have multiple tasks, you confront the issue of sharing data between them. Because tasks can be interrupted by higher priority tasks, we must make sure that the data doesn't change between accesses. Typical solutions include semaphores and mutual exclusion mechanisms. The beauty of a cyclic scheduler is that it rarely needs such stuff. If the 1,000Hz task is running, it can be certain that slower tasks aren't. Therefore, their data is not going to change during the life of the faster task. If it needs data from them, it simply goes and reads it. The converse is not true, but easily handled. When I'm running in, say, the 1Hz task, I have to be sure that the data I'm operating from doesn't change during my computation. So the faster task is not allowed to stuff data into the slower one; the slower one must go get it, and make a local copy. Better yet, the faster task simply passes the data as parameters: do 500Hz(data1, data2, …) The compiler takes care of making the local copy. Computer science purists will cringe at this one, but, in a pinch, I've been known to suspend interrupts during the time I'm fetching or stuffing data. It's hardly elegant, but it's fast and simple. Build or buy? Whenever I start a new project involving real-time systems, my first task is deciding whether to buy a commercial RTOS or “roll my own.” It's not a decision to be sloughed over. But whatever decision I make, I find that its consequences are similar to deciding whether to use someone else's code: whichever way I go, I find myself wishing, at some time or another, that I'd gone the other way. If I roll my own RTOS, I'm faced, not only with writing my own RTOS, with all its potential for hidden and subtle bugs, but with justifying my decision to management when the task takes longer than I expected. If I use an off-the-shelf RTOS, I find myself with a system that's much more complex than I usually need, and I'm spending time climbing the learning curve when I could have been writing code. Neither approach is completely satisfying or free of risk, so we should weigh the pros and cons carefully. Leaf through the pages of this magazine, and you'll find ads from a number of RTOS vendors. They're our advertisers, bless 'em, and all of their products are good; some are great. Most of them do their job as advertised. Even so, I find that almost all give me more gizmos than I really need. They include all the features that users clamor for, such as the ability to dynamically create and remove tasks; sophisticated priority mechanisms, including dynamic adjustment of priorities; multiple methods for passing messages and/or data between tasks, and so on. Much of that power is wasted for many of the problems I deal with. Not only am I carrying extra baggage around in the form of mechanisms I don't really need, but even the things I do need take longer, thanks to mechanisms that are more general than I require. Then there's the frustration factor. Only last year, I was listening to my colleagues bemoan their fate, because the device driver that came with a commercial RTOS was broken. They couldn't get the vendor to even admit that it was broken, much less get a commitment as to when it would be fixed. A word to the wise: if you use an off-the-shelf RTOS, get source. Until the last few years, my build/buy decision always came down on the build side. It's true that cyclic executives are a special case, and perhaps deserve special treatments. Today, the trend is much more towards asynchronous tasks, and especially to tasks that are dynamically created. I know that. Even so, I'm not completely convinced that complex solutions are necessary. As recently as 1994, I was working on a contract to develop real-time software for yet another antenna controller. The customer had chosen the Motorola 68332 chip-an excellent choice, in my opinion-and I was helping him make the build/buy decision. I looked at the features the 68332 provides: - Built-in RTC with programmable rate - Built-in watchdog timer - Built-in serial and parallel ports, with programmable interrupt behavior - Built-in background debug capability - Built-in counter-timer chip with ability to run independently of CPU I looked at all those features, and thought, “Dang, all I have to do is write the interrupt handlers, and the RTOS is mostly done.” That particular build/buy decision was easy. During that decision-making process, I talked to a number of RTOS vendors. Most were extremely helpful and savvy, and I wouldn't have had a problem using any of their products. One company (sorry, can't recall the name) offered the RTOS complete with source code in C, for $1,000. That's cheap. It's much less than what our four-man team burned in a single day. It's hard to justify building your own RTOS when good stuff is available so economically. Even so, we felt that the time spent learning how to use someone else's OS would probably cost more than the time to write our own. So we did. And we never regretted it. That was then, this is now Fast forward to 2002. Now we have super-fast processors with tons of RAM and, often, hard drives for mass storage. The problems tend to be more complex than they used to, and involve a lot more asynchronous tasks. We have RTOS vendors out the gazoo, and virtually all the systems can be counted on to come with free GNU C/C++ development tools. So do we build or buy? I think I'm suggesting that the issue is still not clear, and that you should make the decision with considerable thought. Some problems are so complex that the notion of writing one's own OS is too horrible to contemplate. Others are simple enough that 90% of the RTOS's capabilities are going to go unused (and you're going to take both a performance and a memory hit). Each decision needs to be made based on the individual situation. In our family room, we have one of three TiVo video recorders. If you haven't tried one, you should. We almost never watch live TV anymore, except to check on the news. By recording everything, we can fast forward through the commercials and slow-mo through the fast action. Watch Lara Croft: Tomb Raider in slo-mo and I guarantee you, you will see things that went right past you the first time. Like those slick, super-fast reload clips for her guns. Angelina rules. But the TiVo is most definitely not real time, even in the airline reservation system sense. It records TV onto a hard disk in real time, and pulls it back off, too. But the user interface is positively glacial. How long can it take to bring up a pop-up menu? Try a TiVo, and you'll find it's a lot longer than you thought. What's the “RTOS” in the TiVo? It's Linux. Not real-time Linux (RTLinux). Just plain Linux. I think maybe each time I press a button on the remote, it's logging me in as a new user. Or something like that. Why use Linux for a set-top box? Well, for one thing, it comes complete with the GNU tools, which surely must be a big plus to the developers. Then there are the databases, which contain a list of channels and their program lineup. An SQL engine to access them must surely be nice. Still and all, I think the choice was a lot more pleasant for the developers than for us poor users. RTLinux? In previous columns, you've heard me say that I was looking forward to working with RTLinux. I've seen it working in the demos at the Embedded Systems Conferences, and it seems pretty nice. However, especially after seeing some of the uglier sides of Linux, I'm beginning to have second thoughts. Recall that the developers solved the problem of RTLinux by first writing a real-time kernel. Then they overlay Linux, including the Linux kernel, on top of it. In effect, Linux and its kernel are running as an application on top of the real-time kernel. It seems like a neat enough solution, and it certainly works well in demos. But think about it from a structural point of view. When one is developing a kernel for an OS, surely there has to be something better than developing two kernels, one on top of the other. Surely some efficiency has to be lost in the process. You know me: I'm a big fan of Linux, just as I was of Unix, if only because it's a better way to go than the obvious alternatives. But I'm not a fanatic about it. I tend to judge systems by how well they work, not whether I like their heritage. And face it. The heritage of Linux is not exactly earth shaking. It's a clone of an old time-share system that was, itself, a clone of an even older time-share system-Multics. That Linux has come so far, and done so well, is a testament to the ingenuity and efforts of the people involved in its development. But the Multics/Unix heritage can be burdensome. Backward compatibility constrains solutions that might otherwise be solved in other ways. Aside from its open-source nature, which is certainly a big plus, Linux has a lot going for it in that wonderful, GNU toolset. I'm just not convinced that RTLinux is going to be efficient enough. Hey, if I'm questioning the need for even an RTOS of any kind, I can't exactly recommend the heck out of RTLinux, can I? Is there an alternative? Perhaps so. We have a couple of true RTOSes that are also in the open-source tradition. One is C/OS, from Jean Labrosse. The other is Red Hat's eCOS. How's this for an alternative to RTLinux? Start with either of the true RTOSes. Add a compatibility layer that will emulate not the Linux kernel itself, but the view that the applications see of it. That way, Linux applications, including the GNU toolset, will work on the system, but we can also access the RTOS kernel for our real-time apps. Think about it. At least one variant of real-time Linux () does not use the kernel-atop-kernel approach. I just learned about this one and will be looking into it in the near future. If any readers know of other alternatives, please e-mail me. Interrupt-driven I'm going to close this month's offering with a really slick little real-time, cyclic exec that I ran across years ago, and fell in love with. It's just like one I mentioned earlier, with one important exception: loop accept RTC do 1000Hz enable interrupts if divide_by(2) do 500Hz endifend loop See that third line, enable interrupts ? That's the key to the whole thing. Once the high-speed stuff is done, the interrupts are enabled so that another one can come in. Then the slower tasks can go about their business, happily taking their time, while the high-speed task gets interrupted again. In general, when we write an interrupt handler, we want to leave the interrupts disabled for the minimum amount of time. This reduces jitter to a minimum. But in this case, the entire program is, in effect, an interrupt handler! It works because interrupts are enabled again, as soon as possible. The tasks are themselves reentrant, so all of them, except the bottom level, can be called more than once. Customers of mine have often had trouble understanding this architecture, and I've tried to think of simple diagrams to show the behavior of the design. The best I've been able to come up with is the analogy of a pinball machine, with countdown counters in place, as in Figure 1. Figure 1 Real-time pinball Imagine that a ball enters the pinball machine at the top, passing through the 1,000Hz task. It then drops through to the bottom. But every second time, it's diverted to the 500Hz task. Similarly, in the 500Hz task, most of the balls fall through, but one out of five is diverted to the 100Hz task, and so on. Once the system gets going, we can conceivably have different balls in various stages, passing through every single one of the tasks. If the code is truly reentrant, as it should be, we could even have multiple balls in a given path. Our only requirement is that the balls (interrupts) don't come in so fast that the tasks choke on them. The entire program is, in effect, an interrupt handler, and every bit of it is reentrant. Remember how an interrupt works. When an interrupt comes in, the current context is saved on the stack, and the handler starts to work. In this case, if another interrupt comes in, that context is also stacked, and yet another instance of the handler is started. We only require that the average rate of interrupts is low enough to keep the stack from overflowing. It's a slick solution, and I've seen it used in more than one life-critical system. In those systems, we used switches to make sure that task overruns didn't occur. Each was a simple semaphore that got tested and set at the top of each task, and cleared at the end. But even that mechanism is more stringent than it needs to be. As you can see from the discussion of stacked contexts above, multiple instances of a given task can be working, as long as the stack doesn't grow forever by too many interrupts. A simple test on the stack depth can detect that condition. The design has the features I mentioned earlier: passing data between tasks takes a minimum of handshaking, and, in the low-to-high speed direction, takes none at all. Keep it simple So what's my point? Am I trying to put all our RTOS-vending advertisers out of business? Am I anti-Linux? No, not at all. I'm simply saying, let the punishment fit the crime; let the solutionfit the problem. I'm a big believer in the KISS principle(Keep It Simple, Sam). Albert Einstein said, “Things should be made as simple as possible, but not any simpler.” Or, as the great race car designer Harry Miller put it, “Simplify, and add lightness.” Maybe that new embedded gimcrack you're building doesn't really need full-up Linux, or RTLinux. Maybe a good, solid RTOS will do. Maybe it doesn't need an RTOSa cyclic exec will do. Maybe even a simple while-loop and some interrupt handlers will do. Don't be in such a hurry to start, that you forget to do your build/buy study. You get no points for building the world's most complex alarmclock. Jack W. Crenshaw is a senior software engineer at Spectrum-Astro in Gilbert, AZ. He is also the author of Math Toolkit for Real-Time Programming , from CMP Books. He holds a PhD in physics from Auburn University. Jack enjoys contact and can be reached via e-mail at .
https://www.embedded.com/mea-culpa/
CC-MAIN-2021-43
refinedweb
5,733
72.16
Contact * Donate * Jokes * LMNOP aka dONW7 * Poetry * Store * T-Shirts * Review Policies August 2012 Comics, Poems, Jokes, and Reviews from aka Additional Items Received * * * * * * * * * * * * * * * * * * * * JOKES (August 2012). * * * * * * * * * * * * * * * * * * * * * * * * * Rosie Abbott - Rosie Abbott (Independently released CD-R, Pop) The debut album from Nottingham's Rosie Abbott. The simple unassuming package we received gave no indication of the quality of the music on this album. Abbott sent out very reserved promotional packages...apparently leaving it up to listeners to decide for themselves how they feel about her music. It only took a few seconds for this wonderfully talented lady to win us over. Rosie writes upbeat mid-tempo classic songs with pleasant melodies and cool breezy arrangementss...and she has a FANTASTIC voice (!!!). We can't help but think this ultra-creative musician will quickly be picked up by some company or label that realizes the potential inherent in her music. We totally dig warm fuzzy cuts like "If Everything Was Up To Me," "Hard To Sleep," "Unfathomable (First Light)," "If You're Happy And You Don't Know It," and "A Year To Remember." Groovy stuff that occasionally veers into unexpected territories. Totally...NIFTY STUFF. The Abe Lincoln Story - What Time Is It? It's Story Time With The Abe Lincoln Story (Independently released CD-R, Pop) The third full-length release from California's The Abe Lincoln Story. This band began way back in 1994 as the solo project created by vocalist/songwriter Steve Moramarco. The band is apparently a vehicle for Steve and his friends to have some fun whenever the mood strikes...which may explain the long delay between audio releases. From the sounds we're hearing on What Time Is It? It's Story Time With The Abe Lincoln Story it sounds like these folks really enjoy what they do. This is one big band. Joining Moramarco on this album are Kristian R. Hoffman, The Millionaire, Pat Hoed, Charlie Woodburn, Paul Litteral, Damon Zick, Rial Gallagher, Deena Rubinson, and Karen Zumsteg (whew!). These soul-infused danceable pop/rockers are peppered with plenty of cool rhythms, zesty horns, and a determined sense of humor. Twelve cuts here including "Marco Polo!", "The Scum Always Rises," "The Problem of Time," and "Given Circumstances." Provocative thought provoking stuff... ABNORMAL Normal is Abnormal. Adrian Benavides - Same Time Next Life (CD, Unsung, Pop) Dark industrial progressive rock with unusual twists and curves. This is the debut full-length release from Adrian Benavides which was written and produced in conjunction with Markus Reuter. The compositions on Same Time Next Life immediately reminded us of the strange warped industrial rock bands that were treading around the United States in the mid to late 1990s. Adrian's songs have a strange slightly atonal sound...and they features some rather unorthodox arrangements. We rarely hear artists delving into this genre lately...not sure why...? In any case, these deep driving thunderous tracks are seriously intense. And the louder you turn 'em up the better they sound. We also dig those way groovy detached vocals. Six odd cuts here including "Impulse Response," "Exterior of a Heart," and "Same Time Next Life." Intense stuff that packs a punch. Bend Sinister - Small Fame (CD, FU:M (File Under Music), Progressive pop/rock) If you dig the groovy sounds of staccato keyboard-driven 1970s pop...there's a good chance you'll go ape over Vancouver's Bend Sinister. We were impressed with these guys' last EP (On My Mind) that was released earlier this year. But Small Fame seems to be the album where the band has really found their sound. The smart and inventive songs on this disc sound something like a cross between the poppy side of Queen mixed with the more soulful side of The Guess Who. The trademark here are keyboardist Dan Moxon's effervescent vocals. The man can really sing circles around the common everyday twenty-first century vocalist. Small Fame packs quite a punch...and will probably be the album that puts these guys on the map. They've got a sound and style that is likely to catch on with millions of listeners. The more we spin this the more we dig it. Twelve clever inspired cuts including "She Don't Give It Up," "Don't You Know," "My Lady," "Give It A Rest," and "Quest For Love." Top pick. Bona Head - The Path (Independently released Italian import CD, Progressive pop) Impeccably produced smooth modern pop with orchestral arrangements. Italy's Bona Head isn't creating artsy difficult music for the underground crowd. The Path offers smooth melodic smart modern pop tracks that could easily be appreciated by millions of listeners. The band is the one man project created by Roberto Bonazzoli...a young fellow with a wonderfully resonant sexy voice that should drive girls W-I-L-D. We reviewed the last Bona Head album Colours Doors Planet back in May 2011 and we were impressed by what we heard at that time. Roberto has apparently been fine tuning his sound since then and his music seems to have even greater depth and focus. The Path features fifteen well-crafted tracks that should please even the most discriminating listeners. Cool reflective cuts include "The Path," "Dead Zone," "Relax," and "The Perfection (Epilogue)." BREATH DEATH Shimmering wafers Of predictive breath Willow and wander Toward ultimate Death. Brian Boggess Group - Debut EP (Vinyl EP & Digital download, Midnight Snack, Pop) Brian Boggess Group start off with a bang here with the release of their oddly titled Debut EP. This smart disc features four hard-hitting pop tracks played with guts and conviction. None of that limp-wristed stuff here, these guys play like they mean it. These songs were recorded on analog equipment by engineer Geoff Sanoff at Stratosphere sound, so you know the sound quality is superb. Boggess and his associates play classic pop tunes that are amped up and inspired. We find it particularly interesting that the second song ("Jack Knife") sounds very much like the 1970s British progressive band Greenslade (particularly the vocals). These guys are doing everything right here. After hearing this, we can't wait to hear the follow-up... Brave - Original Soundtrack: Original Score Composed by Patrick Doyle (CD, Disney, Film soundtrack/score) This soundtrack consists mainly of instrumental compositions but also features five tracks with vocals. Patrick Doyle is a classically trained Scottish composer who is best known for the music he has composed for several Kenneth Branagh films. We've not yet seen the Disney / Pixar film Brave so we can't comment on the story and animation. Doyle has created some wonderfully magical music here. The soundtrack is flavored with a definite Scottish influence...and peppered with some very intricate and classy arrangements. Some of the tracks are pensive and subtle while others are rather busy and exciting. The sound quality is superb throughout...so you can bet your booties this one's gonna have a super huge sound when you see the film. Twenty perfectly executed cuts here including "Touch The Sky," "Remember To Smile," "In Her Heart," and "Merida's Home." BUTTER BIRD LUNG Put butter on a bird And place it on your tongue. Inhale the bird and butter Into your left lung. The Cynics - Spinning Wheel Motel (CD, Get Hip, Rock/pop) We begin this one being a bit biased...because we're longtime fans of The Cynics and we also generally dig stuff on the Get Hip label. What we love most about the guys in The Cynics is that they play music as if time is standing still. They don't change their sound and style to suit the changing tastes of the general public. They don't adapt to the latest technology and they don't use the latest samples. And they never ever sell out or churn out lame shit. Spinning Wheel Motel is another great addition to the band's catalog. The album features the cool melodic catchy guitar-driven pop/rock songs that the band's fans have come to expect. By keeping things simple and giving the listeners what they want, these guys have developed die-hard fans who will follow them indefinitely. There are absolutely no signs of burn out or regression here. This album is a fresh kick in the boo-boo. Some of the lyrics will definitely cause easily-offended folks to squirm (haw haw--you go guys!). We just love "I Need More," "All Good Women," "Rock Club," "Circles Arcs and Swirls," and "Zombie Walk." We could never get enough of the Cynics. They remain G-R-E-A-T. Top pick. Ryan Darton - I Am A Moth (Independently released CD, Pop) Ryan Darton has apparently had a life-long love affair with music. He grew up in rural Utah but now resides in Los Angeles, California. Prior to embarking on his solo career Darton was in the band Kid Theodore. Now a multi-instrumentalist, Ryan seems obviously poised to reap the rewards of his many years playing music. I Am A Moth may very well be the album that propels this guy's career to the next level. The songs are direct and instantaneously familiar...and they have that nice warm spark driven by a man who has truly found his sound, voice, and style. Some of these songs are powerful loud pop tracks while others are more somber and subdued. But whatever he does Ryan always comes across sounding inspired and convincing. He writes genuinely memorable melodies and he's got a voice that gives his songs an extra kick. A dozen well crafted tracks here. Our favorites include "Summer & Snow," "Camel's Back," "Shadows," and "Falling In Love." Top pick. Don Dilego - Western & Atlantic (CD EP, Velvet Elk, Pop) This is the fourth release from Don Dilego whose past three full-length albums have created quite a buzz. The Western & Atlantic EP is being released in advance of his next album Magnificent Ram A. This disc features seven smooth pensive tracks played with appropriate restraint and obvious style. Dilego's smooth mid-tempo pop should appeal to a wide cross-section of listeners. His songs have obvious commercial appeal...but they are a far cry from the crap pop that always seems to top the charts nowadays. After hearing this...we'll be waiting eagerly to hear the next album. Cool catchy tracks include "Midnight Train," "Chicago," a cover of The Replacements' "Here Comes A Regular," and "Carry On." Good solid stuff. Dirty Panties - I Am A Robot (CD, SquidHat, Pop/rock) As regular readers are always aware, we're just as interested in the motivation behind music as the music itself. As such, the girls in Dirty Panties immediately caught our attention for the plain and simple fact that they're obviously so immersed in what they do. I Am A Robot is a charged up album chock full of razor sharp power pop tunes that remind us of many of the cool alternative underground rock bands treading around the country in the mid-1990s. These fabulous ladies keep thing simple and direct, focusing on getting the point across rather than trying to bury their tunes in tidal waves of overdubs. Pure feelgood tracks include "Cheers," "Figment Of A Girl," and "I Am A Robot." Totally upbeat and fun. Down By Law - Champions At Heart (CD, DC Jam, Rock/hard pop) One of the leading pop/power rock bands from the 1990s returns with one HELLUVA kickass collection of hard rockin' tracks. Down By Law was originally formed by All's Dave Smalley in 1989. The band enjoyed a great deal of success but in the early twenty-first century they decided to throw in the towel. We're not usually too excited about reunions...but in this case we're totally won over and blown away. From the sound of the tunes on Champions At Heart you'd never know these guys had put their careers on hold. These songs have that great super fast loud pummeling sound that the fans loved...and are fortunately now loving all over again. We loved the excitement and energy of the 1990s so this one immediately knocked our brains out. Hard, fast, furious, and nervous...these sixteen tracks are a pure dose of infectious rebellion. Kickass cuts include "Bullets," "Knock This Town," "Crystals," and "Champions At Heart." This stuff totally rocks... TOP PICK. ECO GROAN Eco, eco, Eco groan. Leave the world Alone. Elk - Daydreams (CD, Indoor Shoes, Pop/rock) While we're always hearing hundreds of new pop, rock, and Americana bands from up north, rarely do we hear Canadian garage rockers. Not sure why, but for some reason most garage rock bands still tend to be located in the United States. The guys in Elk may be on a mission to change all of that, as Daydreams contains simple straight-from-the-hip guitar driven rock tunes without all the fussy frills that get in the way of the music. If we hadn't looked at the CD cover and press release, we would have guessed this was put out by Pittsburgh, Pennsylvania's Get Hip label. But that's not the case, this album was released by the folks at Indoor Shoes. Daydreams features twelve tracks with cool driving rhythms, guitars drenched in reverb, and vocals that have a decidedly retrospective sound. Nice genuine tunes here...and every single one of 'em has something tangible to offer. Our favorites include "Before The Sun," "If Not For You," "Memories," and "You Know." Cool sounding stuff with heart. Eric Erdman - My Brother's Keeper (Independently released CD, Pop) The debut solo album from Mobile, Alabama's Eric Erdman. Up to this point in time Erdman has been best known as a member of the band The Ugli Stick (great band name) who have been playing together since 1999. This year Erdman decided to present another side of his personality with the release of My Brother's Keeper. This guy has a smooth cool voice and he writes songs that could be appreciated by just about anyone. The songs are relatively simple and straightforward and feature lyrics that seem genuine and inspired. If you dig singer/songwriter stuff, there's a good chance you will dig Eric's music. Twelve solid tracks here including "Feel," "Bird On A Powerline," "Saltwater," and "Peanut Butter and Jealousy." FAKE BREAST There are no Real breasts. They are all Artificial. Every breast is a Fake breast. M. Fallan - Contagious (CD, Kicking, Pop) Nice progressive moody pop with a difference. Although we're not exactly sure why, we rarely receive music submission from French artists...and that's a shame because when we do hear them we're usually impressed. France's M. Fallan's music is not what you might expect it would be. This intriguing fellow doesn't write cute snappy dance tunes and he sings in English. His influences include Shanon Wright, Eliott Smith, Pedro The Lion, Chokebore, and Cat Power...so this should give you a general idea of where he's coming from. This is not a commercial-oriented album (i.e., there are no potential hits or possible singles). Fallan is obviously creating music as a pure form of self expression. He writes songs that don't follow traditional paths and formulas and he's got an interesting voice that covers a wide range of emotions. Ten impressive cuts here including "Withered Skin," "The Road To Ruin," and "Oblivion." Neat stuff from the fertile French underground... Federal Lights - Carbon (Independently released CD EP, Pop) Truly great pop songs from Canada's Federal Lights...the band created by Winnipeg's Jean-Guy Roy. We were immediately drawn to these songs because they remind us a great deal of babysue favorite Johnny Society. Roy's voice sounds a great deal like Kenny Siegal...and some of his songs sound remarkably similar. Only six tunes here...but they're all smart and focused and they all hit the bull's eye dead on. Our favorites include "Wake Up," "Camera," and "Carbon." We'll be eagerly awaiting the opportunity to hear what this guy does on his next full-length release... Jonathon Grasse - Phantomwise (CD, Acoustic Levitation, Modern classical/improvisation) Peculiar modern improvational music from guitarist/composer Jonathon Grasse. On Phantomwise, Grasse is joined by cutting edge experimental musicians Gustavo Aguilar, Cristian Amigo, Emily Hay, and Tom Steck. Anyone who is even slightly familiar with any of these artists has an idea of what to expect here which is...the unexpected. Jonathon and his associates play music that is spontaneous, experimental, and most likely unscripted for the most part. The compositions on Phantomwise are abstract, peculiar, and almost completely absent of any sort of commercial appeal. Many would find these songs to be difficult to absorb and appreciate...but that would be missing the point entirely. These folks aren't interested in coming up with catchy melodies...nor were they trying to create soothing background music. During these recordings these musicians were obviously feeding off one anothers' creative energies...which may perhaps explain the odd nature of the sounds here. Call it modern jazz...modern classical...or experimental...this album is simply strange and abstruse. Eleven perplexing compositions including "Beat Red," "Thank God It's Dydd Gwener," "Phantomwise," and "Clandestine Rotations." Graveyard Train - Hollow (CD, Spooky, Pop) The third full-length release from Australia's Graveyard Train. These guys have already become quite popular in their home country...and our guess is that Hollow will be the album that transfers some of that fame to the rest of the world. These seven guys play foot stompin' modern pop that incorporates elements of western music and horror films into the mix. These songs feature steady rhythms, reverb drenched guitars with a slight rockabilly sound, offbeat lyrics, and cool slightly loose sounding vocals. We had to spin this one a few times before we 'got' what was going on here. But we found that the more familiar we became with these songs that more depth they had. The first single from the album is "I'm Gone" and it has already received a good bit of attention in Australia. After hearing this, we'd say it's a safe bet that these guys put on quite a show. Eleven strangely effective tracks here including "Get The Gold," "I'm Gone," "One Foot On The Grave," and "The End Of The World." Peter Green Splinter Group - Blues Don't Change (CD, Eagle Rock Entertainment, Blues/rock) Just about everyone is familiar with the band Fleetwood Mac. Even those with only a casual interest in music would have found it hard to miss heaing the band's popular singles as they have been played to death over the past few decades. But even though the band is widely known, relatively few folks are probably familiar with the group's early recordings. When Fleetwood Mac began it was a very, very, very different band. The overall approach to music was so different that most people would not even think it was the same band. And when the group transformed into a pop music machine many of their original fans probably lost all interest. Recorded in 2001, Blues Don't Change features guitarist Peter Green returning to his roots. This album was originally only sold at concerts and through the band's web site. But thanks to the folks at Eagle Rock the album is finally seeing a proper commercial release. What is surprising about these tunes is how direct and clean they are. Instead of overblown overproduced blues (which is what we normally hear in the twenty-first century), you get songs that absolutely sound like the real thing. Joining Green on these tracks are Nigel Watson (guitar), Larry Tolfree (drums), Roger Cotton (keyboards), and Pete Stroud (bass). These guys perform songs made famous by classic artists like BB King, Willie Dixon, Albert King, John Lee Hooker, and more. Authenticity abounds here on cool cuts like "I Believe My Time Ain't Long," "Honey Bee," "Help Me Through The Day," and "Crawlin' King Snake." Good solid stuff. The Grip Weeds - Speed of Live (CD, Ground Up, Pop/rock) We've been big fans of this band since we first heard their brand of intoxicating semi-psychedelic pop many years ago. We generally prefer studio albums to live albums so we approached this one with some reservations. Whatever reservations we had were immediately dispersed as soon as we heard the band launch into "Every Minute"... The Grip Weeds are obviously first and foremost a live band that packs a major power punch. We love these folks' studio sound...but after hearing this, we might actually prefer the rawer live sound. Whatever the case, at this particular concert taped in New Jersey the band was obviously on a roll. These fifteen tracks are forceful and powerful...and fans will most certainly be impressed to find that those wonderful vocal harmonies remain perfectly intact here. If you ever loved bands like The Beatles, The Who, The Nazz, or Redd Kross you owe it to yourself to get your hands on this album. There's also a DVD release of the concert that we haven't seen yet. If you're a Grip Weeds fan, you probably already have this one. If you aren't familiar with the band, this is an excellent starting point. Kickass rockin' cuts include "Salad Days," "Strange Change Machine," "Infinite Soul," and "Astral Man." This band just keeps getting better and better... TOP PICK. Hand To Man Band - You Are Always On Our Minds (CD, Post Consumer, Progressive/experimental/rock/jazz) If you're looking for something familiar and easy, you won't find it here (!). This is one of those albums that was created purely out of inspiration and creativity. The artists don't seem to be motivated by anything except the desire to express themselves. And boy oh boy do they express themselves on You Are Always On Our Minds. Hand To Man Band is the quartet comprised of Mike Watt, Thollem McDonas, John Dieterich, and Tim Barnes. These guys have come up with a real winner of an album...and it certainly does not sound like all the rest. The tracks on this CD combine elements of progressive rock with experimental and jazz...and ultimately have their own peculiar sound and style. Seventeen cuts here that clock in at just over 40 minutes. Some of these compositions are instrumentals while others feature vocals and/or voices. Bizarre stuff that is surprisingly listenable and warm. We love it when folks extend boundaries without alienating everyone in the process. This album is truly inspired and effective. Our favorite cuts include "Forces Conspiring," "We Learned The Unreasoning," "Thinks This," "They Pretty Right" (great song title, that one...), "Semina System," and "Thin Incision Split Decision." Top pick. Peter Hannan - Rethink Forever (CD, Artifact Music, Progressive/choral/electronic) In our many years listening to and reviewing music, we've never heard anything quite like this before. Canada's Peter Hannan combines two worlds that rarely collide: choral music and electronics. The result...is a strangely bewildering and hypnotic experience not to be missed. We have to admit that in some ways this album gives us the feeling that we're listening to two different artists and/or albums at once. Rethink Forever is divided into four main sections: "Rethink Forever," "The City of Granada on the Surface of Mars," "Happiness Index," and "No Brighter Sun: No Darker Night." Our guess is that most listeners won't know what to make of this. When you delve into arenas that have not yet been explored, there are bound to be some strange circumstances. We can't compare this to any other artists because the music stands squarely on its own. Interestingly different and strangely provocative... Heart - Strange Euphoria (CD Box Set, Epic Legacy, Pop) Before we begin here, we have to admit right off the bat that we never cared for the band Heart. The songs never pushed our buttons and we never cared for the vocals. Somehow the music always came across seeming calculated and contrived. That said...Strange Euphoria has a lot to offer the band's fans. The beautifully packaged box set includes three CDs, a DVD, and a booklet featuring tons of photographs and reflections written by Nancy Wilson and Ann Wilson. We expected this to be a mere collection/overview of the band's music...but we were surprised to find the CDs chock full of different versions of songs including plenty of rarely heard demo recordings. So instead of simply re-treading tracks you've heard time and time again...this set offers a great deal of additional insight into Nancy and Ann's musical universe. If you didn't like Heart this set probably won't win you over. But if you ever liked or loved the band...our guess is that this is a treasure trove of audio delights that are a MUST HAVE... Caroline Herring - Camilla (CD, Signature Sounds, Pop) The sixth full-length release from Caroline Herring. This young lady has already carved her own niche in the world of music. She's a critic's favorite and her music has popped up in some rather prestigious places. Camilla will no doubt fans the flames of her career even higher as this ten track album probably captures this captivating young artist at the zenith of her career. Her songs should appeal to anyone who ever appreciated Alison Krauss...although she is by no means a copycat artist. Caroline's songs combine elements from folk, bluegrass, and pop. She wrote nine of the ten songs on this album...and they're all focused and rather brilliant. She has a great voice and comes across sounding like an artist who truly loves making music. This is pure stuff... written from a place in the heart where things actually matter. After hearing this, we can see why so many folks are singing this lady's praises. She's definitely a genuine and real classic artist with a great deal to offer. Our favorite cuts include "Camilla," "Fireflies," "Summer Song," and "Joy Never Ends." Highlands - Singularity (Independently released CD EP, Hard shoegaze pop/rock) The four guys in Highlands aren't releasing too much information about themselves...apparently because they want to focus folks' attention on the music itself. A smart move...because the seven tracks on this self-released CD-R speak for themselves. These fellows play a slow, hard, driving shoegazer type of pop/rock that is heavily infused with effects and psychedelia. It's kinda like mixing acid bands from the 1970s with dazed out shoegazers from the 1990s. Merging these styles works surprisingly well...and playing with that extra muscular punch adds just the right amount of ultra zest to the music. This is a short disc clocking in at just over half an hour...but in that amount of time these guys managed to blow us away. Cool heavy rockers include "Railroad," "Evil," "Sunshine," and "Brain Drain." This band is one helluva heady experience...! Adam Hill - Two Hands, Tulips (Independently released CD, Pop) The last time we heard from Canada's Adam Hill was way back in 2009 when we covered his album Them Dirty Roads. The album did quite well in underground circles and afterward Hill spent many months touring to promote the record. But the urge to create eventually set back in...so he dissolved the band and retreated to a small town on the Pacific ocean to concentrate on writing and recording. The result...is Two Hands, Tulips. Whereas the last album fit into the folks/roots category, Roads is more difficult to pigeonhole. The songs still have folky/roots elements...but they also have pop sensibilities...and there are some unexpected sounds and ideas laced into the arrangements. This time around Adam reminds us of what Ben Folds might sound like if he played folky pop and was more experimental. The emphasis here is on songs...lyrics and melodies...and all of these well-crafted tracks hit the target dead center. Hill is a man who isn't creating music to impress or make money. These songs were created by an artist who is genuinely inspired to make music. There are plenty of cool threads of genuine-ness running through these tunes that are most appealing. Thirteen keepers here including "Sarabande I, "French Films," "I Shall Not Be Moved," and "She Heard A Sound." Cool stuff...REAL. Steve Hillage Band - Live at the Gong Unconvention, Amsterdam 2006 (CD, G-Wave, Pop) We've been Steve Hillage fans for such a long time. While other kids in high school were tripping their brains out listening to Pink Floyd and Jimi Hendrix (both of which were and are completely valid)...we chose to enter other dimensions by listening to the music of British psychedelic artists like Steve Hillage and Gong. We've always felt that this man was one of the greatest psychedelic guitarists of all time. Instead of burning out or becoming an acid casualty, Hillage has continued his career right on into the twenty-first century with his bands System 7, Mirror System, and Steve Hillage Band. No matter what he's done or what he's doing, Steve always creates music that hits the right spot in our brains and hearts. And the reason...is probably the fact that Hillage has always remained true to his vision and creates music for all the right reasons. Live at the Gong Unconvention, Amsterdam 2006 captures Steve playing tunes from the past with his associates Miquette Giraudy, Mike Howlett, Chris Taylor, Basil Brooks, Paul Francis, Andy Anderson, Dave Stewart, Didier Malherbe, Daevid Allen, Lawrie Allen, and Tim Blake (whew!). The first six tracks are from 2006, the remaining four are 1970s recordings from the vaults. The album begins with the ultra-catchy "Hello Dawn" (from the criminally overlooked Motivation Radio album) before quickly switching to a version of The Beatles' "It's All Too Much." The band then ventures into well-known material from the classic Fish Rising album. It's interesting to compare the new live sound with the vintage recordings...but both are absolutely valid and credible. There is also a DVD release of this 2006 concert that we haven't seen yet. We gleaned from the press release that Hillage may be preparing for another voyage into progressive rock in the near future...we sure hope so. This incredibly talented man has never received the mainstream worldwide recognition he so obviously deserves. Steve Hillage was...and is...one of the best. His music and sense of creative energy remain completely intact... TOP PICK. The History of Panic - Fight! Fight! Fight! (Independently released CD, Pop) Unbelievably fresh and vibrant upbeat modern pop from Gerald Roesser...the man who is The History of Panic. If you like danceable pop with a definite positive groove...there's a good chance you'll pop yer top over the tunes on Fight! Fight! Fight!. Gerald recorded the album in his home studio over the course of two years. These tracks are slick and well-produced...but they don't have that overblown sound that ruins so many new commercial releases. Folks adding their vocal talents to the proceedings are Leah Diehl (Lightning Love), Casimer Pascal (Pas/Cal), Trevor Naud (Zoos of Berlin), and Keith Thompson (The Electric Six)...whew! Roesser's influences include Junior Boys, Phoenix, Daft Punk, and The Smiths...and we can hear traces of all of these in his music. If you like cool techno-driven pop with a heavy emphasis on intoxicating beats...these tracks might just send you up to heaven. Eleven kickass cuts including "Out of Control," "The Devil's Boredom," "History," and "Love's Disaster." Totally cool in so many ways... TOP PICK. INSIDE YOU You don't Ever want anything Living inside You. Kasabian - Live at the 02 London 15/12/2011 (DVD + CD, Eagle Rock Entertainment, Pop/rock) Prior to receiving this we had never even heard of the band Kasabian. Can you believe it? And then of all things we have the nerve to call ourselves music reviewers...ha! This band is obviously hugely successful, as is purely evidenced by this concert captured on DVD in 2011. We usually get in on the ground floor with new bands so seeing a band that has already made its way up the ladder and plays for thousands of people is a bit unusual. Our initial reaction is that this band's music sounds something like a cross between Black Rebel Motorcycle Club and Frankie Goes To Hollywood (more the latter than the former). They've got some good songs and they're good performers...but we have some reservations about these guys. Actually the main (and only real) reservation is the lead singer. Not only does he have one of those pointy haircuts...but he performs like a Bono-esque cheerleader...always inviting the crowd to yell and cheer and crap like that. Kinda irritating. Another thing that detracts here is that there is such an emphasis on lasers and lights and backdrops that it really makes it hard to focus on the band. The pluses...are that the lasers and lights and backdrops are really really cool. Another big plus...is that the audience was obviously loving every minute of this show. So we can definitely tell that the fans got what they wanted. There are lots of things to like about this band...some of the songs are neat...the guitarist is great and has a cool voice...the drummer kicks ass...and the bass player really pumps. In addition to the DVD this set also includes an audio CD (although all of the tracks from the live concert are not included). This didn't win us over on this band...but it does make us want to go back in time and experience what they were like before they met with big time success... King of Spain - All I Did Was Tell Them The Truth And They Thought It Was Hell (CD, New Granada, Pop) King of Spain is the Tampa, Florida-based duo comprised of Matt Slate (vocals, guitar, programming, synths, percussion, strings) and Daniel Wainright (bass, programming, vocals). The band was originally the one man project created by Slate...but in 2009 Wainright joined the band. The strangely titled All I Did Was Tell Them The Truth And They Thought It Was Hell is surprisingly accessible and resilient and features ten well-crafted pop songs that are moody, haunting, and rather beautiful. These two guys' voices blend together seamlessly...and they come up with some exotic unorthodox arrangements that give their songs a cool dreamy ambient feel. This album will no doubt be a favorite among fans of the underground in no time flat. Smart provocative cuts include "Basement Fires," "Green Eyes," "Perception," and "Seamless, Spotless Sidewalks (Redux)." Top pick. Ben Levin Group - Invisible Paradise (Independently released CD, Progressive) There are so many bands out there regurgitating the sound of 1970s progressive rock bands...and many of them fail miserably. Probably because they're trying to recreate something that is a thing of the past. Ben Levin and his associates bring the ideas and feeling of progressive rock from the past squarely into focus...by totally reinventing the genre on their own terms. Invisible Paradise is...an overwhelming and intensely complex musical production featuring some bewilderingly difficult stuff for twenty-first century music addicts. Anyone who ever appreciated music by bands like Yes, Emerson, Lake, & Palmer, and King Crimson is likely to become instantly addicted to this album. It's so intense that it comes across more like a state-of-the-art film soundtrack than an audio album. Thirty-eight minutes of music that you won't soon forget...WHEW! Lola Versus - Original Motion Picture Soundtrack: Music by Fall On Your Sword (CD, Lakeshore, Soundtrack) This is a different sort of soundtrack with a different sort of sound and feel. The music for the film Lola Versus was composed and created by the band Fall On Your Sword. The band is actually the duo of Will Bates and Phil Mossman...two guys who have already made quite a name for themselves as master multi-media composers. These instrumental tracks incorporate a wide variety of sounds, styles, and influences. Rather than tie themselves to one tired genre, Bates and Mossman seem to let music take them wherever it may...which makes for some rather inventive compositions. Part of what makes this soundtrack so appealing is the fact that these tracks are not overproduced. These guys had the good sense to keep things simple for the most part...concentrating on the songs and melodies instead of layering everything to death. Pensive, slightly puzzling, and always on target, this is a soundtrack that that can also be enjoyed purely for its own merits. Cool reflective cuts include "Beach Dream," "Encounter at Pratt," "Walk To Water," and "End Titles." Jason Masi - Life Is Wonderful (CD, Reel Works Media, Pop) More slick melodic commercial pop from Jason Masi. Unlike a lot of other artists you see splattered around on these pages, this young fellow isn't playing for a small esoteric audience...he's recording ultra-familiar sounding pop music aimed squarely at a mass audience. Masi used to live in Virginia where he made a name for himself fronting the band Jubeus. After the band split up he moved to Washington, D.C. in 2010...and now seems poised for an ultimately rewarding solo career. This guy writes and records the kind of classic pop that most fans know and love. His songs combine elements from folk, soul, blues, and pop into an instantly satisfying mix. These songs don't require a lot of thought or concentration...just a desire to kick up your boots and have a good ol' time. Eleven clean cuts here including "The Right Kind of Things," "That Summer," "People," and "Life Is Wonderful." Men In Black 3 - Music by Danny Elfman (CD, Sony Classical, Soundtrack/score) If there's anyone out there who has developed their own unique sound and style in the world of music for film, it must surely be Danny Elfman. This man initially blew us away with the music he composed for The Nightmare Before Christmas...and we've been blown away ever since. The score/soundtrack for Men In Black 3 features huge orchestrated compositions that will surely add the needed dynamics to this film. Like previous Elfman projects, these pieces are pure magic. They're pensive...exciting...exotic...thoughtful...and most importantly exceedingly precise. It is Elfman's attention to detail that has probably made him one of the most in-demand composers in the world today. Twenty-two mind bending tracks here including "Spiky Bulba," "Out On A Limb," "True Story," and "Mission Accomplished." Haven't seen the film yet...but the music is an obvious Top Pick... Dan Miraldi - Sugar & Adrenaline (Independently released CD, Pop) A straight shot of instantly catchy guitar-driven pop/rock from Cleveland, Ohio's Dan Miraldi. Many of the songs on Sugar & Adrenaline have a heavy 1970s influence. Dan isn't afraid of writing songs with hit potential. In fact, this album spins like a "best of" collection even though none of these songs are hits...yet. The crystal clear direct approach works, and Maraldi's got a voice and presence that really gives his music extra zest. There's plenty of personality to be found in these tunes...this young man's obviously getting a major kick out of making music. Dan was influenced by several classic artists which may explain the commercial-oriented sound of his own music. We can hear traces of tons of other artists here including (but not limited to) Nick Lowe, Mott The Hoople, The Chainsaw Kittens (?!), Richard Hell, and even Joel Plaskett (!). An entertaining solid album from start to finish. Cool cuts include "Few Rock Harder," "Road Warrior," "Record Collection," and "Revenge." Music From The Film - Vi Kommer Til A Fa Deg (CD, Zero Moon, Experimental) When you consider the state of commercial music today...you may tend to think that music is no longer relevant for listeners with the ability the think. Commercial music has sunk so low that there's hardly anything listenable that ever even sinks into mainstream consciousness. But there are always things happening underneath the surface. And although it takes some effort treading around the internet to find it, there is more creative credible music being made now than ever before in the history of mankind. Music From The Film is the duo of Gary Young and Arthur Harrison...joined on this album by Kevin Buckholdt. We love the introductory statement on the press release that says these guys "...have been basking in obscurity together for over 22 years." It's a statement about modern music. If you make music that is creative and credible...it will almost certainly be appreciated by only a very few. But popularity isn't really the point though...is it? Folks with a conscience and soul aren't making music for money or fame anyway. They're making music out of a pure genuine desire to create and express themselves. As such, the strangely titled Vi Kommer Til A Fa Deg is a complete success. Sure, the average listener would be frightened off by these tracks. But that's not who these guys are playing for anyway. Twenty-two heady peculiar and inventive cuts. Cool. Nervebreakers - Hijack The Radio! (CD, Get Hip, Rock/pop) Bright punchy upbeat pop/rock from the Nervebreakers. These guys were originally out there in the world rocking in the 1970s and 1980s and one of the first bands of their type to be treading around the Southwestern United States. Hijack The Radio! is a compilation of some of the band's original recordings from the 1970s. The songs are simple and direct, the guitars loud and in-your-face, and the rhythms driving and steady. The band has just reunited...which should result in some interesting new recordings in the very near future. Also pending are more Nervebreakers reissues from the folks at Get Hip (!). This classy fifteen cut album features plenty of heartfelt rockers including "My Girlfriend Is A Rock," "So Sorry," "Part Of My Love," and "Strange Movies." Normal Love - Survival Tricks (CD, Public Eyesore, Experimental/rock/progressive) Folks into obtuse nervous jerky experimental music, listen up. Those who don't...should consider immediately covering their ears and running for the exit. Normal Love is a weird band. A very, very, very weird band. According to the press release, this band creates music by "...bypassing existing standards and inhabiting a strobing black-lit world where apocalyptic decadence meets the iron fist of Draconian law." Whoa. That may be a strange descriptive passage but it gives an adequate description of the tunes on Survival Tricks. This isn't atonal noise because there is order here. It's just a different sort of order with lots of oddball curves being thrown at the listener all at once. Imagine Yoko Ono fronting a group of Modern Classical musicians with a rock band playing in the foreground...and you might begin to have an idea of the overall sound. These folks are playing for a very small audience...and they obviously don't give a rat's ass about giving people what they want. In our odd little book of wisdom, that is a very good thing. Eight bizarre cuts here including "Lend Some Treats," "Breathe Through Your Skin," "Cultural Uppercut," and "Cosmetic Rager." This music is a wildly unorthodox trip... The Odd Trio - Birth of the Minotaur (Independently released CD, Jazz/progressive rock/instrumental) In all of our years covering music we can't recall having ever received any jazz from Athens, Georgia. The appropriately titled Odd Trio is just that...three guys with an odd sound and approach to music. The band is comprised of Brian Smith (guitars), Marc Gilley (saxophones), and Todd Mueller (drums). To be more accurate, these guys aren't just playing jazz...they play a fusion of styles that incorporates ideas from jazz, rock, and progressive...all combined into a heady mix that is surprisingly accessible and fluid. The playing here is impeccable...and sure to impress even the most discriminating listeners. The tunes on Birth of the Minotaur run the gamut from smooth and safe...to wildly improvisational. Twelve groove-oriented cuts including "Raucous Bacchus," "Deckard's Dream," and "Whiskey." Peculiar and intriguing. Odetta - 7 Classic Albums Plus Bonus Radio Tracks (CD, Real Gone Folk & Roots, Folk/blues/spiritual) We always wanted to experience the early recordings of Odetta...but it was always somewhat expensive and time consuming trying to obtain all her early albums because she's never been that well known and therefore her discs aren't scattered all over the place. We decided to take a chance and order this four CD set because of the price. Seven albums (?!!) plus bonus tracks...for TEN BUCKS (and that INCLUDED shipping)...??? It seemed too good to be true but...in this case, we are pleased as punch. We didn't have any of Odetta's early albums before...and now we're plowing through a whole stack of 'em. And they still sound great. Man oh man what a voice... Some of the tracks sound kinda dated because of the arrangements... But overall, this is quite a package and it's MORE than worth what you pay for it. Which brings up the question...should these types of sets be allowed to be sold? Apparently in Europe once music is over 50 years old companies can duplicate and sell it without paying any royalties. Or at least that's what we read on the internet (???). If that is the case, it seems sad that the artists (and/or the artists' families) don't receive ANYTHING. There may be challenges to this kind of rampant reissuing...but in the meantime, don't be shocked to see incredibly CHEAP multi-album packages by artists like Elvis Presley, Chet Atkins, Gene Vincent, Les Baxter, Julie London and more popping up all over the place. Apparently some of the packages are better than others so you may want to read some reviews before buying. Some folks have reported albums being in the wrong order chronologically, internet sites not being able to read the tracks, etc. etc. etc. And the packaging is bare bones, of course. But if you want to pick up some great music from the past for almost nothing...just think of all the great albums that were released in the late 1950s and early 1960s...that are just now old enough for folks in Europe to repackage and sell... Hell, there's gonna be an ONSLAUGHT of great stuff hitting the market at BARGAIN BIN prices... Ray Parker - Swingin' Never Hurt Nobody (Independently released CD, Jazz) Smooth clean inviting jazz from Ray Parker. Ray learned a lot from his father Gene Parker and doesn't mind admitting it. Swingin' Never Hurt Nobody is a cool trip into modern minimalistic jazz. Joining Parker on this album are Russell George (violin) and Jon Hart (guitar). Together, the three present some mighty tasty and direct compositions that are sure to please just about anyone who appreciates jazz trios. Parker's bass is inventive and warm and George and Hart provide the perfect accompaniment. We like where Ray seems to be coming from mentally. Instead of preparing set lists for live shows he "reads the audience" to determine what will best suit each situation. How cool is that? Nine fine tracks here including "Guitar Sammich / Now's the Time," "Always," "Zingaro," and "Goodbye." This one hits the spot. People Like Us - Original Motion Picture Soundtrack: Music by A. R. Rahman (CD, Lakeshore, Soundtrack) This is a film about average people and genuinely real situations...so it will probably be a major flop (heh heh heh...). Seriously though...because folks are so enamored with special effects and overblown productions, in all honesty a film like this will probably be overlooked. We haven't seen People Like Us yet...but it sounds like it is probably a credible and effective film that deals with interpersonal issues between friends and family. The music seems to mirror the events of everyday life...well crafted by composer A. R. Rahman. This is not the typical overblown Hollywood soundtrack. These tracks are subtle, poignant, and smooth. You could either pick this up because you dug the film...or just because you wanted some cool reflective relaxing music to put guests at ease. Eighteen attractive cuts here including "People Like Us," "Beat The Living," "Six Rules," and "I Am Your Brother." Philippe Petit - Eugenie (Limited edition CD-R, Alrealon Musique, Experimental/instrumental) France's Philippe Petit is easily one of the most prolific recording artists of the twenty-first century. This creative fellow records and releases two or three times the amount of the average musician. Eugenie is being released as a 10" vinyl release and as a download...so apparently there is no traditional CD release of this particular album. The disc is divided into four sections: "An Air of Intrigue," "Clapoutique," "Pyramid of the Moon," and "Magma From the Aquarium." Petit prefers to be called a 'musical travel agent' rather than a composer...and that particular fact may help to explain and describe his music. Philippe creates music as a pure artistic statement...seeming to have little regard for how the music is perceived after it is created. We love the pure side of creativity...so this man continues to blow us away with each and every release. These four tracks will indeed transport you to another level and/or dimension...which is most likely what it was designed to do. These odd compositions combine so many sounds and styles that it is difficult to try and come up with adequate ways of explaining and/or describing them. But whatever it is...it works. Beautifully strange and abstract. Top pick. Piranha 3DD - Original Motion Picture Score: Music by Elia Cmiral (CD, Lakeshore, Soundtrack) This is a film that instantly caught our attention when we first saw the previews. A movie about people...getting torn apart by piranha...? All RIGHT! We haven't seen it yet but...we certainly will soon. The score for the film was composed by Elia Cmiral who was born in Czechoslovakia but moved to the United States in way back in 1987. Since that time Elia has created music for tons of film and television projects...and there doesn't seem to be any project he won't take on. This score features highly orchestrated compositions with that great big huge theater sound. Some of the music is surprisingly light and playful while others convey the deep dark urgency that the moments require. Eighteen well-crafted tracks here including "Return of the Piranhas," "Kiss of Life," "Bathtub Dream," and "Battle For the Water Park." Prometheus - Original Motion Picture Soundtrack: Music by Marc Streitenfeld (CD, Sony Classical, Soundtrack) The soundtrack to the latest film from director Ridley Scott. This is a remarkable disc full of incredibly effective compositions from musical wonderkid extraordinaire Marc Streitenfeld. Regardless of whether you see the film or not, if you love big orchestrated sound...you will get some major thrills out of the Prometheus soundtrack. This CD features 25 instrumentals executed to absolute perfection. The disc clocks in at just over 57 minutes and covers a wide range of emotions. The music ranges from slow and methodical...to creepy and surreal...to wildly intense and out-of-control...and back again. Streitenfeld has managed to come up with music that is ultimately exciting and pushes multiple emotional buttons all at once. If you think all film soundtracks sound alike, think again. This one's a real standout...and parts of it may just scare the crap out of you. Strangely intimidating cuts include "A Planet," "Not Human," "Hyper Sleep," and "Planting The Seed." We hear a lot of soundtracks lately...this is one of the best. Top pick. Marty Regan - Magic Mirror (CD, Navona, Instrumental/Japanese) This album features compositions by American composer Marty Regan. Judging by the tracks on this album, you'd never guess Regan is from the United States. Marty composes music specifically written for Japanese instruments and...according to the press release he "...explores the cross-cultural exchange between Eastern and Western traditions, blending the two into a dynamic sound that pushes the Japanese instruments to the very brink of their musical boundaries." Well we couldn't have said it better ourselves (!). These intricate well-crafted pieces effectively bridge the gap between the two cultures. Performing on the album are Seizan Sakata, Tetsuya Noazawa, Erina Matsumara, Izumi Fujikawa, Kenji Yamaguchi, Nobuhiro Wakatsuki, Ray Jin, Hitomi Nakamura, Kasue Tajima, Maya Sakai, Yuka Sawade, Shozan Tanabe, Gen Takeuchi, Etsuko Hirano, Saeko Wakiya, Masabumi Sekiguchi, and Akiko Sakura (whew!). Six beautifully executed lengthy tracks...presented in an ultra-cool bilingual package. Beautiful, haunting, tranquil, mesmerizing... TOP PICK. Sean Renner - Sekhmet (Independently released CD, Instrumental) An intriguing collection of well-crafted compositions from St. Louis, Missouri's Sean Renner.. Tommy Roe - Devil's Soul Pile (CD, Airebelle, Pop) We were sure as heck surprised to find this one in our trusty ol' mailbox...(!). A new album from 1960s bubblegum icon Tommy Roe...we would have never guessed or expected it. It's great to hear Roe again. If you're expecting the Tommy from decades past, think again...because as we would and should expect...this fellow has matured and evolved. The smooth reflective mood of the tracks on Devil's Soul Pile are a great fit for Roe at this point in time. His voice sounds as great as ever and he still knows how to write crowd pleasing tunes. Not only has he recorded and released this album, but Tommy is also playing a new series of concerts. His current band is led by well-known guitarist/music director Rick Levy...so you know these will be concerts where quality is key. Nine soft smart cuts here including "Memphis Me," "What If's And Should Have's," "Without Her," and "Devil's Soul Pile." Romper - Sifting Through The Rubble (CD, Rompytown, Pop) We were immediately drawn to this band and album...the image and overall concept caught our attention fast. Apocalyptic pop with various verbal and visual references to kids? Hmmm...interesting. Of course the name Romper is a huge plus as well. To try and describe the basic sound of Sifting Through The Rubble... Imagine mixing some elements from My Dad Is Dead with other elements from The Velvet Underground...then mix them around and add some sedatives...and you might begin to get an idea of what's going on here. This is a true underground album created first and foremost from inspiration. The man behind the music is a fellow in Pacifica, California named Paul Freeman who is also a music journalist and screenwriter. This man's moody slightly obtuse pop will be embraced by fans of the underground...while probably confusing for folks who exist on a lower level of consciousness. There's a lot to take in here...eighteen tracks that clock in at just over 60 minutes. We can't help but dig peculiar songs like "Road To Ruin," "One of the Wanted," "The Neighborhood," and "Contemplating Suicide." Interesting stuff that offers a uniquely different perspective... Ruperts People - 45 RPM (CD, Angel Air, Pop) We receive some of the coolest obscure oddities from the folks at Great Britain's Angel Air label. We've always been nuts about British bands from the 1960s. It was definitely one of the golden decades for the country with so much creativity going on that it naturally bled all over to the rest of the world. Although they were known around their home territories, it's probably a safe bet to say that the guys in Ruperts People didn't enjoy the same success that many other bands did at the time. The band's story is long and winding...and is well detailed in the twelve page booklet that accompanies this CD. The folks in this band were always operating on the fringes...but they had ties to some really amazing artists and companies (too numerous to mention here). Suffice to say, Ruperts People have probably been best known and loved by other folks in the industry. The bulk of this album consists of singles and live tracks from the 1960s although the last six are from the 1970s (the band was then known as Matchbox and/or Swampfox). Includes "All So Long Ago" by an early version of the band called The Sweet Feeling and other cool tracks like "I Can Show You," "Charles Brown," and "Reflections of Charles Brown." Recommended for fans of The Move, The Kinks, The Who, and The Rutles. Saint Saviour - Union (Advance CD, Surface Area, Progressive pop) Wow. This album is a good example of why we continue to review music. Although we try to avoid being what we are (which is...underground music snobs)--we have to admit that we get a kick out of hearing incredible up-and-coming stuff like this before everyone else. But onto the music at hand... This is the debut full-length release from Great Britain's Saint Saviour, the one woman band created by an incredibly gifted young musician named Becky Jones. Union is going to instantly catch on with music fans around the world as Ms. Jones' supporters quickly line up to take in her wonderfully warm and absorbing modern progressive pop. These songs incorporate sounds and elements from a wide range of sources...but some of her ideas are immediately reminiscent of classic artists like Kate Bush, The Cocteau Twins, and St. Vincent. What is perhaps most interesting about these songs...is how they incorporate ideas from the past into music of the present. Saviour has an incredibly expressive voice that will give you chills. Beautifully executed compositions include "Mercy," "Tightrope," "Reasons," "Jennifer," "Fallen Trees," and "Horse." This is an album that will be played and discussed for decades to come. TOP PICK. Jay Shepard - Harsh Mistress (CD, 825, Pop) Heavily produced modern pop featuring smart melodies and kickass guitars. This is the debut full-length release from Jay Shepard. After spinning this one a few times, we can say with certainly that this young man is on a quick path to success. Instead of taking the roundabout artsy approach to music, Shepard offers instantly accessible songs that could easily be appreciated by millions of fans. His songs incorporate elements from classic artists from the past...which are then infused with heady state-of-the-art twenty-first century technology. The result is a smart and infectious batch of tunes featuring thick overdubs and spacey effects. The vocals are the real treat here. Jay's got an acutely fine-tuned voice that works perfectly within the framework of his compositions. And the harmonies are out-of-this-world. No matter whether you like the ultra-artsy stuff or that same old familiar hit sound...you're bound to fine something here to love. Our favorite cuts include "Come Back Home," "Truth," "The City," and "Harsh Mistress." Shoes - Ignition (CD, Black Vinyl, Pop) When we opened up the package and saw this album inside we couldn't believe our eyes... A new album...from Shoes...??!!! We had pretty much come to the conclusion that these cool guys had permanently thrown in the towel because of the world's general hesitation to embrace their heavenly pop music last century. We've always felt that these guys created the cream of the crop in the world of 1970s and 1980s pop music. As such, we were always baffled as to why they didn't attract a larger fan base...? It's probably just a matter of timing and marketing...plus the fact that to truly appreciate Shoes tunes you must hear them at least a dozen times or more before the real meat sinks in. So it's been eighteen years since the last album was released...what has changed? Not a whole lot...thankfully! Gary Klebe, John Murphy, and Jeff Murphy are still writing crystal clear guitar-driven pop tunes and their vocals still sound as mind blowing as they ever did. We were pleased as punch to find that these guys are now keeping their recordings relatively simple and straightforward...since that's what initially drew us to their music in the first place (if you have never heard the beautifully crafted home-recorded album Black Vinyl Shoes do yourself a favor and pick up a copy--it's never too late...). The tunes on the new album rely heavily on those cool trademark Shoes guitars...very reminiscent of some of the band's earlier albums. We were in love with Shoes in the 1970s...and now the love affair continues. Fifteen absolute direct hits here. Our favorite cuts include "Head vs. Heart," "The Joke's On You," "Maybe Now," "Say It Like You Mean It," and "Only We Remain." We can only hope this is the beginning of a brand new series of releases from this criminally overlooked band. Shoes are one of the greatest pop bands of all time. TOP PICK. Silversun Pickups - Neck of the Woods (CD, Dangerbird, Rock/pop) This is Silversun Pickups' third full-length release on the Dangerbird label. These guys really hit the target dead center with this one...it's no wonder their fan base seems to grow exponentially with every passing month. Neck of the Woods is one big charged up intense jolt of modern progressive pop with a difference. These tracks are heavily produced and feature thick arrangements and plenty of heady effects. But the vocal melodies always remain the central focus of the proceedings. The band is comprised of Brian Aubert (guitars, vocals), Nikki Monninger (bass), Joe Lester (keyboards), and Christopher Guanlao (drums). Considering how intelligent this band's songs are, we're actually surprised that so many people like their music [smart stuff usually seems to alienate most listeners]. There's no telling how many countless hours were spent crafting the tunes on Woods. The attention to detail is staggering. What is certain is that this is another one the fans are going to love. Eleven captivating cuts here including "Skin Graph," "Busy Bees," "Simmer," and "Out of Breath." Top pick. The Small Cities - With Fire (CD, Princess, Pop) The debut full-length release from the Twin Cities-based band The Small Cities. These guys previously released an EP that was well received...but after reading the press release our guess is that this is the disc that'll increase their fan base exponentially. With Fire is a nice solid collection of modern pop/rock tracks with a heavy emphasis on melodies and lyrics. These guys tread on the fine line that separates commercial pop from underground rock. Their songs are easy to grasp and appreciate...but they're a far cry from the predictable modern crap that the average twenty-first century listener craves. Some of the songs are more pop oriented while others pack quite a punch. The band has an excellent charged up rhythm section and their loud overdriven fuzzy guitars sound just great. Add in cool focused vocals...and you have a band that is shooting fully loaded. Our favorite cuts include "Home Is Where The Start Is," "Last Winter," and "Sunday After Sunday." Good solid stuff with guts. Sonnet Cottage - Another Time (Independently released CD, Soft dreamy pop) Pensive, soft, melodic music from Sonnet Cottage. This is the debut full-length release from this Virginia-based trio that is comprised of sisters Rachel Russell and Torey Russell and creative mastermind Kent Heckaman. These beautifully crafted tracks feature dreamy melodies, ultra-soothing vocals, and an overall somber relaxing vibe. To give you an idea of what this sounds like... Imagine the girls in Azure Ray playing modern folk pop...and that should give you a fairly good idea of what's happening here. This is an extremely consistent disc. Clocking in at just over 36 minutes, there's not a bad track to be found here...they all work exceptionally well. These folks have done everything right setting the stage for what will undoubtedly be a long and rewarding career. Our favorite cuts include "Letting It All Go," "Another Time," "Little Did I Know," and "The Caretaker." The Soundtrack Of Our Lives - Throw It To The Universe (CD, Yep Roc, Pop) Several years ago we heard a few tracks by this band and really enjoyed 'em...but we never really got a proper dose of their music until now. This, the group's seventh full-length release, will most likely be their last. Kinda sad...particularly so when we realize how quickly we warmed up to the tunes on Throw It To The Universe. These Swedish guys have accomplished a lot over the past couple of decades. In addition to releasing critically acclaimed albums, they've also toured the globe several times, and ended up on a great many "best of" lists along the way. The band members obviously wanted to end things on a high note here. This album features nice smooth melodic tracks with cool effective arrangements and plenty of layered harmonies. This may be too slick and polished for the true underground artsy snobs out there. But to our ears these heavily produced cuts sound very exacting and refined. Thirteen well-crafted tracks including "Throw It To The Universe," "Where's The Rock?", "Busy Land," and "Shine On (There's Another Day After Tomorrow)." Well-realized and focused... Spider Rockets - Bitten (CD, P-Dog, Pop) Loud ballsy pop/rock played with conviction and attitude. Based in Hazlet, New Jersey, the folks in Spider Rockets are playing raunchy in-your-face pop/rock that features totally groovy fuzzed out guitars, charging rhythms, and a vocalist with real presence. The band is comprised of Helena Cos (vocals), Johnny Nap (guitars), Dan Prestup (drums), and Timmy Tobin (bass). These folks are treading in a world where heavy metal meets catchy pop. The songs on Bitten are instantly inviting and familiar...but most of the time they have a truly nasty bite (thus the name...?). These songs have an explosive exciting sound that sets them apart from other bands. These folks play like they mean it and they obviously mean what they play. Ten well-crafted cuts here including "Going Down," "Scream," "Better When It's Loud," and "Bring Me Around." Clinton St. John - Storied Hearts and the Three Assimilations (Independently released CD + book, Progressive) The sixth full-length release from Calgary's Clinton St. John. We have not heard prior releases so we cannot compare and contrast here...only offer observations of what we're hearing at present. St. John was formerly in the bands The Cape May and Pale Air and also played in Nina Nastasia's band. Storied Hearts and the Three Assimilations is an instantly intriguing collection of songs. Clinton composes music that incorporates ideas from folk, pop, progressive, and alternative rock...all the while retaining a unique original edge. This album has an overall laidback feel...but things never get boring. Listening to this, we can't help but wonder who this man is influenced by...(?). There are so many hints of so many other artists that it becomes impossible to come up with any easy reference points. The main constant...are those smart articulate lyrics and cool subdued vocals. Interesting arrangements abound, as the songs don't take the traditional twists and turns. As if the interesting tunes weren't enough, this independent release is attached to a cool 7" lyric booklet complete with nifty surreal drawings. Beautiful, unusual, and hypnotic music that comes from the heart... Top pick. Newton D. Strandberg - Essays and Sketches (CD, Ravello, Classical) If you want to find modern classical composers...you will usually find them teaching in colleges and universities across the country. This is probably because it is usually so difficult to make a living in the twenty-first century composing music. This album features unreleased recordings by longtime Sam Houston State University faculty member Newton D. Strandberg. These recordings were compiled by Strandberg's peers, students, and admirers. As such, it offers an overview of this talented and under recognized man's skills as a composer. The CD is divided into four sections: "Essay For String Orchestra," "Amenhotep III," "String Trio" (divided into four sections), and "Acts For Orchestra." This does not sound like an album of unreleased recordings. The recording quality is slick and polished...and the pieces are by no means throwaways or second-rate discards. These intricate compositions are smooth, heavenly, and exciting...and they should appeal to just about anyone who has enjoys cool credible classical music... Worsel Strauss - Unattention Economy (German import CD, Vicmod, Instrumental/experimental/progressive) The debut solo album from Germany's Worsel Strauss. Up to this point in time Strauss is possibly best known as one half of the band Schleusolz, along with his musical mate Schani Wolf. The idea for Unattention Economy is intriguing to say the least. Instead of playing and programming, Strauss let's computers and machines generate the music...and then he edits and mixes the results into finished compositions. As such, the tracks on Economy are much more listenable than you might expect. Some of these pieces sound very much like the soundtrack to a space flick you've never seen. Others have an industrial feel while some are just pure exercises in experimentation. Some pretty wild stuff here...and each track is not a carbon copy of the last. Trippy heady cuts include "Behind Closed Doors," "I Missed My Boat Because Of You," "Shopping For Antibiotics," and "Discovering The Truth." Unique and inventive stuff that's well worth your time and attention. TEETH Dig teeth in And tear away At warm moist things For pleasure. Billy Tsounis - Music For Your Vacation (Independently released CD-R, Instrumental/improvisation/experimental) We have to admire folks who create music out of a pure desire to create. And that is obviously the case with Billy Tsounis. This fellow recorded and self-released this album most likely knowing full well of its limited audience potential. The appropriately titled Music For Your Vacation is an exercise in unbridled creativity. Tsounis doesn't follow any conventional forms or styles, he simply turns things on and expresses himself. Billy is influenced by free jazz, improvisation, ambient, lo-fi, psychedelia, noise, and drone...and we can hear traces of all of these in his music. This disc is a truly peculiar spin...and there's a lot to take in. Sixteen compositions that clock in at just under 80 minutes. Perplexing tracks include "Barnyard Zombie Code," "Tub Dub," "Twirl," and "Fingerpick Space Dirt" (gotta love those song titles...!). Interesting heady stuff. TWINKLE DAMMIT Twinkle, twinkle Little star, Dammit, dammit, dammit, Dammit. Unknown Component - Blood v. Electricity (Independently released CD, Pop) Unknown Component is the Ames, Iowa-based artist named Keith Lynch. If you go to the web site you won't find much biographical information or personal facts. But if you glimpse at Lynch's social networking sites, you will find that this guy's music has already caught on with lots of folks. And if you listen to Unknown Component, you'll see why. It's amazing that this fellow is able to write and record stuff that sounds this professional on his own without the help or assistance of anyone else. Just goes to show you how far technology has advanced the recording process in the twenty-first century. The tunes on Blood v. Electricity are moody and dark and they have a big thick produced sound. Keith's voice is the focal point of the music...but unlike many vocalists he never sounds like he's forcing himself or pushing things too hard. Beautiful artwork graces the front and back cover of the cool digipak sleeve. Eleven finely-tuned tracks here including "Intuition," "Pendulum," "Moral Vultures," and "The Invisible Line." Solid. USED/ARE Things are Never like they Used to be because Things are always Like they Are. WAY Everything is Boring In its own Way Additional Items Received: A Abraham Lincoln Vampire Hunter - Original motion picture soundtrack Abundance - Manner effect Jason Adamo Band - Bricks & mortar A Dangerous Method - Original motion picture soundtrackime Dashboard Madonna - Neon life Dashboard Madonna - Neon life EP Elevator Art - Tent city Elika - Always the light Annalise Emerick - Starry-eyed Empire Escorts - Empire Escorts Kevn Kinney - A good country mile Jon Lindsay - Summer wilderness program Memorials - Delirium Red Lights - Original motion picture soundtrack Red Jasper - Sting in the tale Red Moon Road - Red Moon Road Red Wanting Blue - From the vanishing point Kate Reid - The love I'm in Suit of Lights - Shine on forever Sunspot - The slingshot effect Dan Susnara - Prison sanctuary open field...prison Sweet Interference - The falling in and out T Zeiton - Form babysue * LMNOP * dONW7 Missing Dog Head ©2012 LMNOP aka dONW7
https://www.babysue.com/2012-Aug-LMNOP-Reviews.html
CC-MAIN-2019-04
refinedweb
12,282
65.62
Syntax::Feature::Io - Provides IO::All version 0.001 use syntax qw( io ); my @own_lines = io(__FILE__)->getlines; This is a syntax feature extension for syntax providing IO::All. Not much additional use is provided, besides much easier access if you are already using syntax in one way or another. $class->install( %arguments ) Used by syntax to install the extension in the requesting namespace. Only the arguments into and options are recognized. use syntax io => { -import => [-strict] }; You can use this option to pass import flags to IO::All. Since this is the option you'll most likely use, if any, you can skip the hash reference and provide the import list directly if you wish: use syntax io => [-strict]; Please see "USAGE" in IO::All for documentation on the import arguments. use syntax io => { -as => 'IO' }; my @own_lines = IO(__FILE)->getlines; Set the name of the import. Please report any bugs or feature requests to [email protected] or through the web interface at: Robert 'phaylon' Sedlacek <[email protected]> This software is copyright (c) 2011 by Robert 'phaylon' Sedlacek. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.
http://search.cpan.org/~phaylon/Syntax-Feature-Io-0.001/lib/Syntax/Feature/Io.pm
CC-MAIN-2017-39
refinedweb
209
64.1
Summary: Microsoft Scripting Guy Ed Wilson shows how to use .NET Framework commands inside Windows PowerShell Hey, Scripting Guy! I understand that Windows PowerShell is built upon the .NET Framework. I also get the idea that some of the Windows PowerShell cmdlets are simply calling things from the .NET Framework in the background. But what good is that to me? Why do I care? -- HB Hello HB, Microsoft Scripting Guy Ed Wilson here. Yesterday we looked at the fundamental concepts of the .NET Framework, and explored the System.Diagnostics.Process .NET Framework class. By using the static GetProcesses method, a listing of all the processes and their top properties is obtained. When you call a .NET Framework class, the System part of the namespace can be left off as it is assumed. Therefore, with the System.Diagnostics.Process class, System.Diagnostics is the namespace, and Process is the actual class name. When I leave the System portion off the namespace name the command morphs to the one shown here. [diagnostics.process]::GetProcesses() The output from this command is seen in the figure below. The output shown in this figure is very extensive and provides a useful overview of the way the system is performing. It is so useful the Windows PowerShell team has incorporated it into the Get-Process cmdlet. To use the Get-Process cmdlet you just have to type the command on Windows PowerShell console command line as seen here. Get-Process The output from the Get-Process command is shown in the following figure. The output from the Get-Process cmdlet and the [diagnostics.process]::GetProcesses() are equivalent. The difference is that Get-Process alphabetizes the output by process name and [diagnostics.process]::GetProcesses() does not. The point of this exercise is to show that Windows PowerShell is a .NET Framework application. You do not have to type [diagnostics.process]::GetProcesses() because that is the essence of Get-Process. In fact, if I pipeline Get-Process to the Get-Member cmdlet, I can see that the System.Diagnostics.Process class is returned by the command. The command to do this is shown here. Get-Process | Get-Member As we saw yesterday, there are an awful lot of methods and properties exposed by System.Diagnostics.Process. The output is shown in the figure below. Using the .NET Framework Process class, from the System.Diagnostics namespace, I can use the static method GetProcessByName. If I am not certain how to use the method, I can pipeline the class to Get-Member, specify I want to see the static method getProcessbyname and output the returned information to the format-list cmdlet. When I do this, the output seen here appears. PS C:\> [diagnostics.process] | gm -s getProcessesbyname | fl * TypeName : System.Diagnostics.Process Name : GetProcessesByName MemberType : Method Definition : static System.Diagnostics.Process[] GetProcessesByName(string processName), static Sys tem.Diagnostics.Process[] GetProcessesByName(string processName, string machineName) PS C:\> The output seen earlier tells me that I have to provide either a process name or a process name and a computer name to the method. By using that information, I can return information about the explorer process. This is seen here. PS C:\> [diagnostics.process]::GetProcessesByName("explorer") Handles NPM(K) PM(K) WS(K) VM(M) CPU(s) Id ProcessName ------- ------ ----- ----- ----- ------ -- ----------- 1084 80 99332 120468 447 13.90 3620 explorer PS C:\> We see the same subset of information that was returned by the Get-Process Windows PowerShell cmdlet. We know now, that additional information would be available ... the information such as seen in the previous figure. Remember back when we were looking at the member information for the GetProcessesByName method? There was an alternative way of calling the method that would accept both a process name and a computer name as parameters. If I wanted to use the process .NET Framework class to retrieve process information from a remote computer back in the Windows PowerShell days, this was the way to do it. (Of course, there was also the Win32_Process WMI class that was available also). To use the process class and specify a computer name, use the syntax seen here (The output is shown under the command). PS C:\> [diagnostics.process]::GetProcessesByName("explorer", "localhost") Handles NPM(K) PM(K) WS(K) VM(M) CPU(s) Id ProcessName ------- ------ ----- ----- ----- ------ -- ----------- 1488 96 141128 153208 589 3620 explorer In Windows PowerShell 2.0 the Get-Process cmdlet was upgraded to accept a computername parameter. This command and associated output is shown here. PS C:\> Get-Process -Name explorer -ComputerName localhost Handles NPM(K) PM(K) WS(K) VM(M) CPU(s) Id ProcessName ------- ------ ----- ----- ----- ------ -- ----------- 1478 95 140644 152968 587 3620 explorer PS C:\> By comparing the output from the two previous commands, you can see that the output is virtually the same. This shows that the .NET Framework class GetProcessByName method is behind the name parameter that is used by the Get-Process cmdlet. Not all the methods and properties exposed through the System.Diagnostics.Process .NET Framework class are exposed through a switch or a parameter of the Get-Process Windows PowerShell cmdlet. This is where knowing a bit about the underlying .NET Framework classes comes into play. For example, if I want to change the processor affinity of a particular process that is running on a computer, all I have to do is to use Get-Process to retrieve the process, and then assign a new value to the ProcessorAffinity property. This is illustrated here where I first launch a new instance of notepad. I then use Get-Process to retrieve the notepad process to make sure it is working correctly. After confirming that it works, I again use the Get-Process Windows PowerShell cmdlet to retrieve the notepad process, and I store it in the $a variable. I then query the ProcessorAffinity property and see that the notepad process has a processor affinity of 3. PS C:\> notepad PS C:\> Get-Process notepad Handles NPM(K) PM(K) WS(K) VM(M) CPU(s) Id ProcessName ------- ------ ----- ----- ----- ------ -- ----------- 67 7 1524 5416 71 0.03 6236 notepad PS C:\> $a = Get-Process notepad PS C:\> $a.ProcessorAffinity 3 PS C:\> $a.ProcessorAffinity = 0x0002 PS C:\> $a.ProcessorAffinity 2 HB, that is all there is to using the .NET Framework Process class. Exploring .NET Framework week will continue tomorrow when we will talk about how .Net Framework Tutorial vb.net-informations.com/…/framework_tutorials.htm Dell
https://blogs.technet.microsoft.com/heyscriptingguy/2010/10/26/learn-how-to-use-net-framework-commands-inside-windows-powershell/
CC-MAIN-2019-04
refinedweb
1,068
59.4
IronPython works with WPF and XAML but it is still underdevelopment so there are parts that are supposed to work but don't. To see this aspect of IronPython in action the best thing to do is to start a new WPF application project. As long as you are using IronPython Tools you will see the familiar drag-and-drop designer and the XAML editor. The drag-and-drop designer is supposed to work but in the version of the Tools used for this article it didn't. However it was possible to enter XAML to create controls and then edit these in the designer by dragging and sizing. To get started enter the following XAML into the editor: <Grid x: <Button Name="Button1" Margin="20,10,150,215"> "Hello World" </Button> <TextBox Name="Textbox1" Margin="20,60,60,170"> </TextBox> </Grid> At this point you might be tempted to try to work with names such as Button1 and TextBox1 but at the moment this doesn't work. IronPython works with uncompiled XAML and as such bindings between names in XAML and names in IronPython aren't made and so you can't simply use XAML assigned names in a IronPython program. It is possible to write some code that adds the names but this would take us too deep into the interaction between IronPython and other .NET languages and facilities. The XAML is converted into a WPF object graph by the use of XamlReader: window=XamlReader.Load( FileStream('WpfApplication5.xaml', FileMode.Open)) XamlReader returns an object which is the root of the object graph i.e. a Window object in the case of the XAML code given above. The controls are dynamically created as child nodes within the tree: tb=LogicalTreeHelper.FindLogicalNode( window,"Textbox1") returns a reference to a child control of the window with the unique name 'Textbox1'. Once you have the reference to the control you can work with it as normal. For example: tb.Text="hello" The only problem with this approach is that you don't get any help from IntelliSense prompting for properties of the WPF controls. As a more complete example consider the task of setting the Textbox's caption to "click" when the button is clicked. First we need to import all of the .NET support we need: import clrclr.AddReference('PresentationFramework')from System.Windows.Markup import XamlReaderfrom System.Windows import *from System.IO import FileStream, FileMode def OnClick(source,event): tb.Text="Clicked" Notice that you don't have to type the parameters - as long as you have the correct number for the event handler then everything just works. Next we load the XAML and customise the controls: window=XamlReader.Load( FileStream('WpfApplication5.xaml', FileMode.Open))tb=LogicalTreeHelper.FindLogicalNode( window,"Textbox1")btn=LogicalTreeHelper.FindLogicalNode( window,"Button1")btn.Click+=OnClick Finally we set the .NET application running and this is done in exactly the same way as in most .NET projects - only in the case of C# and VB it is hidden from the programmer within generated code: app = Application()app.Run(window) If you run the whole program you will be able to click the button and see the text in the Textbox and its integration with XAML isn't perfect. Better integration with Visual Studio or any IDE for that matter would make the whole thing easier to use and progress in this area has been slow and hampered by the need to keep up with .NET technology as it evolves. It is worth knowing that there is a Silverlight project option within the Visual Studio Tools but at the moment creating Silverlight applications is just as ad-hoc as for WPF or Windows Forms. IronPython also suffers from the usual problem associated with most open source projects, the documentation is terrible.There is plenty of documentation telling you about advanced features and even very simple features. What tends to be missing are the crucial pieces of information that you need to get started, such as how to wire up WPF event handlers, how to get at controls generated by Xaml and so on. But even after all of these difficulties are taken into account it's an interesting example of a dynamic language running on the .NET platform - which after all was not designed for dynamic languages. <ASIN:B002EAZX7E@ALL> <ASIN:0672329786> <ASIN:1593271921> <ASIN:0596100469> <ASIN:1435455002> <ASIN:0596007973>
http://www.i-programmer.info/programming/other-languages/1043-ironpython.html?start=4
CC-MAIN-2015-14
refinedweb
731
53.92
Eric van Gyzen wrote this message on Mon, Jun 23, 2014 at 08:57 -0500: > On 06/23/2014 08:44, John-Mark Gurney wrote: > > So, when I try to eject a ESATA card, the machine panics... I am able > > to successfully eject other cards, an ethernet (re) and a serial card > > (uart), and both handle the removal of their device w/o issue and with > > out crashes... > > > > When I try w/ ahci, I get a panic... The panic backtrace is: > > #8 0xffffffff80ced4e2 in calltrap () at > > ../../../amd64/amd64/exception.S:231 > > #9 0xffffffff8093d037 in rman_get_rid (r=0xfffff800064c9380) > > at ../../../kern/subr_rman.c:979 > > #10 0xffffffff8092b888 in resource_list_release_active > > (rl=0xfffff80006d39c08, > > bus=0xfffff80002cd9000, child=0xfffff80006b6d700, type=3) > > at ../../../kern/subr_bus.c:3419 > > #11 0xffffffff8065d7a1 in pci_child_detached (dev=0xfffff80002cd9000, > > child=0xfffff80006b6d700) at ../../../dev/pci/pci.c:4133 > > ---Type <return> to continue, or q <return> to quit--- > > #12 0xffffffff80929708 in device_detach (dev=0xfffff80006b6d700) > > at bus_if.h:181 > > #13 0xffffffff8065f9f7 in pci_delete_child (dev=0xfffff80002cd9000, > > child=0xfffff80006b6d700) at ../../../dev/pci/pci.c:4710 > > > > In frame 9: > > (kgdb) fr 9 > > #9 0xffffffff8093d037 in rman_get_rid (r=0xfffff800064c9380) > > at ../../../kern/subr_rman.c:979 > > 979 return (r->__r_i->r_rid); > > (kgdb) print r > > $1 = (struct resource *) 0xfffff800064c9380 > > (kgdb) print/x *r > > $4 = {__r_i = 0xdeadc0dedeadc0de, r_bustag = 0xdeadc0dedeadc0de, > > r_bushandle = 0xdeadc0dedeadc0de} > > > > So, looks like something is corrupted the resource data... > > The resource data has been freed. Advertising Well, that is a type of corruption.. :) If we free it, why wasn't it removed from the list? or properly NULL'd out? > > Attach dmesg: > > atapci0: <JMicron JMB363 UDMA133 controller> at device 0.0 on pci2 > > ahci1: <JMicron JMB363 AHCI SATA controller> at channel -1 on atapci0 > > ahci1: AHCI v1.00 with 2 3Gbps ports, Port Multiplier supported > > ahci1: quirks=0x1<NOFORCE> > > ahcich6: <AHCI channel> at channel 0 on ahci1 > > ahcich7: <AHCI channel> at channel 1 on ahci1 > > ata2: <ATA channel> at channel 0 on atapci0 > > [eject card] > > ahcich6: stopping AHCI engine failed > > ahcich6: stopping AHCI FR engine failed > > ahcich6: detached > > ahcich7: stopping AHCI engine failed > > ahcich7: stopping AHCI FR engine failed > > ahcich7: detached > > ahci1: detached > > ata2: detached > > atapci0: detached > > > > > > Fatal trap 9: general protection fault while in kernel mode > > > > Also, has anyone thought about adding a case in your trap > > handler that when we hit the deadc0de address, to print up a > > special message or something? At least flag it, or do we not get > > the faulting address? > > > > This is HEAD as of r266429. > > > > Let me know if there is anything else you need to know. > > The full stack trace might be useful. I could give it to you, but it contains code I can't release (at least not yet)... It's basicly an interrupt that calls pci_delete_child, so there isn't anymore useful information there.. I'm just puzzled why uart and re don't have this same problem.. --"
https://www.mail-archive.com/[email protected]/msg155507.html
CC-MAIN-2018-13
refinedweb
465
63.29
Provided by: manpages-dev_5.02-1_all NAME mount - mount filesystem SYNOPSIS #include <sys/mount.h> int mount(const char *source, const char *target, const char *filesystemtype, unsigned long mountflags, const void *data); DESCRIPTION fsync(2), syncfs(2), or and a kernel configured with the CONFIG_MANDATORY_FILE_LOCKING option.,. * Since Linux 2.6.16: MS_NOATIME and MS_NODIRATIME. * Since Linux 2.6.20: MS_RELATIME. The following flags are per-superblock: MS_DIRSYNC, MS_LAZYTIME, MS_MANDLOCK, MS (other than MS_REC, described below) in the mountflags argument are also ignored. other flags that can be specified while changing the propagation type are MS_REC (described below) and MS_SILENT (which is ignored). unbindable. The file system point that has propagation type MS_SHARED. EINVAL A move operation (MS_MOVE) was attempted, but the parent mount of source mount has propagation type MS_SHARED.... EROFS Mounting a read-only filesystem was attempted without giving the MS_RDONLY flag. See EACCES, above.. The /proc/[pid]/mountinfo file exposes even more information about mount points, including the propagation type and mount ID information that makes it possible to discover the parental relationship between mount points. See proc(5) and mount_namespaces(7) for details of these files. SEE ALSO mountpoint(1), chroot(2), ioctl_iflags(2), pivot_root(2), umount(2), mount_namespaces(7), path_resolution(7), findmnt(8), lsblk(8), mount(8), umount(8) COLOPHON This page is part of release 5.02 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
https://manpages.ubuntu.com/manpages/eoan/man2/mount.2.html
CC-MAIN-2021-43
refinedweb
248
50.02
XML documents can have a reference to a DTD or to an XML Schema. A Simple XML Document Look at this simple XML document called "E-mail.xml": The following example is a XML Schema file called "E-mail.xsd" that defines the elements of the XML document above ("E-mail.xml"): We will discuss the building blocks of this schema latter in this section further. Add a reference to the above declared XML document Now this XML document (E-mail.xml) has a reference to above declared XML Schema(E-mail.xsd) Save E-mail.xml and E-mail.xsd in the same location. Open the file E-mail.xml in a web-browser. You will see the following : Let's briefly discuss the concept of XML Namespaces XML Namespaces provide a mechanism to avoid element's name conflicts. Name Conflicts: Since element names in XML are not predefined, chances for frequency to meet name conflict increases when two different documents use the same element names. We solve the Name Conflicts using a Prefix with a element name: By using a prefix, we can create two different types of elements. Instead of using only prefixes, we add an xmlns attribute to the conflict causing tags to give the prefix a qualified name . The XML Namespace (xmlns) Attribute: The XML namespace attribute is placed in the start tag of an element and has the following syntax: Example 1(taken from E-mail.xml ) : Example 2(taken from E-mail.xsd ) : When a namespace is defined in the start tag of an element, all child elements with the same prefix are associated with the same namespace. In E-mail.xsd "xs" is the defined namespace in the start tag. So it prefixes all the child elements with xs eg.... Here a Uniform Resource Identifier (URI) is a string of characters which identifies an Internet Resource. Default Namespaces : Defining a default namespace for an element saves us from using prefixes in all the child elements. It has the following syntax: We have not included prefixes in all the child element tags( To, From, Subject, Body) in our following example : Building blocks of a XML-Schema The <schema> element is the root element of every XML Schema: The <schema> element may contain some attributes like... The following code: indicates that the elements and data types used in the schema come from the "" namespace. It also specifies that the elements and data types that come from the "" namespace should be prefixed with xs: This code segment indicates that the elements defined by this schema (E-mail, To, From, Subject, Body.) come from the "" namespace. This fragment: indicates that the default namespace is "". This fragment: indicates that any elements used by the XML instance document which were declared in this schema must be a namespace qualified. Referencing a Schema in an XML Document This XML document (E-mail.xml) has a reference to an XML Schema (E-mail.xsd).. The first value is the namespace to use. The second value is the location of the XML schema to use for that namespace:: Designing XML Schema View All Comments Post your Comment
http://www.roseindia.net/xml/xml_schema_example.shtml
CC-MAIN-2016-44
refinedweb
526
64.81
I was going through the basics of Python, and testing out some built-in functions in the interpreter. The documentation I was looking at was talking about Python 3... I am using Python 2.7.3. >>>>> x '32456' >>> isalpha(x) Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'isalpha' is not defined isalpha() sin(3.3) import math isalpha() is not a function but a method of the str type. If you have to, you can extract it as an unbound method and give it a name as a function: >>> "hello".isalpha() True >>> "31337".isalpha() False >>> isalpha = str.isalpha >>> isalpha("hello") True >>> isalpha("31337") False Functions in an imported module are members of that module. To pull a function into the main namespace, use the from statement: >>> import math >>> math.sin(3.3) -0.1577456941432482 >>> from math import cos >>> cos(3.3) -0.9874797699088649 Now why does Python work this way? Both the math module and the logging module have a function called log(), but they do very different things. >>> import math, logging >>> help(math.log) log(...) log(x[, base]) Return the logarithm of x to the given base. If the base not specified, returns the natural logarithm (base e) of x. >>> help(logging.log) log(level, msg, *args, **kwargs) Log 'msg % args' with the integer severity 'level' on the root logger. If the logger has no handlers, call basicConfig() to add a console handler with a pre-defined format. If all imported symbols went straight to the main namespace the way they do when you from math import *, a program wouldn't be able to use both modules' log() functions.
https://codedump.io/share/umTwr1u3DIgp/1/interpreter-python-built-in-functions-not-defined
CC-MAIN-2016-50
refinedweb
276
75
: The only fly in the ointment for libtool's dll support is handling the export of data items from a dll. Specifically, an object that will be linked into an executable which will in turn be linked with a handful of libraries (some static and some dll's, for argument's sake) needs to produce different code depending on what particular combinations of dll and static libraries it will ultimately be linked with. That is I need to make sure that any data item that will be imported from a dll has __attribute__((dllimport)), and any data item imported from a static library cannot have this attribute. Obviously, in a makefile driven build environment there is no easy way to know which combination of libraries this object will eventually be linked with... is there a way around this problem? Or do I have to come up with some kind of makefile scanner to figure out which attributes to attach to each symbol? Actually, libtool compounds the problem, because it wants to produce both static and dll libraries for each library object, and thus a given data symbol may be: 1) exported from this object if the object will be part of a dll 2) imported from another dll 3) imported from a static library 4) externed in the tradition way if it will be part of a static lib? My (flawed) best solution so far is: /* foo.h */ #ifdef __CYGWIN__ # ifdef _COMPILING_FOO_DLL_ # define EXTERN __declspec(dllexport) # else # define EXTERN extern __declspec(dllimport) # endif #else # define EXTERN extern #endif EXTERN int foo;
http://www.sourceware.org/ml/cygwin/1999-03/msg00641.html
CC-MAIN-2014-41
refinedweb
261
51.31
#include <linux/kcmp.h> int kcmp(pid_t pid1, pid_t pid2, int type, unsigned long idx1, unsigned long idx2); Note: There is no glibc wrapper for this system call; see NOTES. The type argument specifies which resource is to be compared in the two processes. It has one of the following values: Note the kcmp() is not protected against false positives which may occur if tasks are running. One should stop tasks. This system call is available only if the kernel was configured with CONFIG_CHECKPOINT_RESTORE. The main use of the system call is for the checkpoint/restore in user space (CRIU) feature. The alternative to this system call would have been to expose suitable process information via the proc(5) filesystem; this was deemed to be unsuitable for security reasons. See clone(2) for some background information on the shared resources referred to on this page.
https://www.commandlinux.com/man-page/man2/kcmp.2.html
CC-MAIN-2017-13
refinedweb
145
70.73
This fixes bug 35682. When a template in instantiated with an incomplete typo corrected type an assertion can trigger if the -ferror-limit is used to reduce the number of errors. The issue can be reproduced with the following code: #include <utility> using SetKeyType = String; std::pair<SetKeyType, int> v; and compiled with: clang -stdlib=libc++ -ferror-limit=1 -c bug.cc This requires the stdlib=libc++ option, without it the assertion does not trigger. Neither does it trigger when the -ferror-limit=1 is not used. The test case is based on this sample but no longer requires the -stdlib=libc++ option.
https://reviews.llvm.org/D64644
CC-MAIN-2019-35
refinedweb
103
55.95
WM in 2015: Woefully out of date, but I preserve this post for posterity. generic pandas data alignment is about 10-15x faster than the #rstats zoo package in initial tests. interesting #python— Wes McKinney (@wesmckinn) September 29, 2011 I: library(zoo): > mean(timings) [1] 1.1518 So, 1.15 seconds per iteration. There are a couple things to note here: - The zoo package pre-sorts the objects by the index/label. As you will see below this makes a big performance difference as you can write a faster algorithm for ordered data. - zoo returns an object whose index is the intersection of the indexes. I disagree with this design choice as I feel that it is discarding information. pandas returns the union (the "outer join", if you will) by default. Python benchmark Here's the code doing basically the same thing, except using objects that are not pre-sorted by label: from pandas import *: In [11]: timeit x + y 10 loops, best of 3: 110 ms per loop Now, if I first sort the objects by index, a more specialized algorithm will be used: In [12]: xs = x.sort_index() [12]: import la =)
https://wesmckinney.com/blog/the-pandas-escaped-the-zoo-pythons-pandas-vs-rs-zoo-benchmarks/
CC-MAIN-2021-31
refinedweb
193
64.41
For specific validity questions, you might want to run against Metro and see what it says. Although not perfectly reliable, similiar error messages (or lack thereof) to CXF's might help determine what is not allowed with WSDL in general compared to what just CXF doesn't like. Glen 2008-05-26 Benson Margulies wrote: > However, now we get the question of what me mean by 'validate' in our tool. > > Our validator rejects the wsdl we call hello_world_xml_bare.wsdl with > the below error, because one operation has a single input part of > element type 'x', and another operation has three inputs parts, the > first of which is also 'x'. > > Is this, in fact, unacceptable? Should I change the wsdl to make the > elements distinct? Or is the validator wrong? > > On Mon, May 26, 2008 at 4:39 PM, Benson Margulies <[email protected]> wrote: > > What I'm doing is reflecting the existing command line into maven. > > > > On Mon, May 26, 2008 at 3:54 PM, Glen Mazza <[email protected]> wrote: > >> I believe we have something like this already on the command-line > >> (-validate option[1]). Perhaps it would be better to build on this, so > >> command-line/Maven/Ant users can use it as well. > >> > >> Glen > >> > >> [1] > >> > >> 2008-05-26 Benson Margulies wrote: > >>> Watchers of checkins can see that I'm inventing a WSDL validation maven plugin. > >>> > >>> First catch: > >>> > >>> INFO: Resolve schema > >>> from baseURI: jar:file:/home/benson/.m2/repository/org/apache/cxf/cxf-common-schemas/2.1.1-SNAPSHOT/cxf-common-schemas-2.1.1-SNAPSHOT.jar!/schemas/wsdl/http.xsd, > >>> namespace: > >>> [INFO] ------------------------------------------------------------------------ > >>> [ERROR] BUILD ERROR > >>> [INFO] ------------------------------------------------------------------------ > >>> [INFO] Non unique body parts, operation [ greetMe ] and operation [ > >>> testTriPart ] have the same body block > >>> {}requestType > >> > >> > >
http://mail-archives.apache.org/mod_mbox/cxf-dev/200805.mbox/%3C1211836487.12298.4.camel@gmazza-desktop%3E
CC-MAIN-2018-26
refinedweb
286
56.35
"radmind". The annotated tag, radmind-1.14.0rc1 has been created at 3fe5b4c0f698ad28626296dd8215024ffcb9f332 (tag) tagging ce09d4c6f94f552b971c69affbae4bd62e8d385c (commit) tagged by Andrew Mortensen on Wed Jun 23 16:55:00 2010 -0400 - Log ----------------------------------------------------------------- Radmind 1.14.0 release candidate 1 -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.7 (Darwin) iQEVAwUATCJ0sKQc2zv4dsHeAQJ9OAgApfbQfI5CCvYVOgNSj387wQpeCP52qTLJ /8G4K4HYiD51nHjqO0HsWJrAmqGk1oiIvPRQd3iuW8Pg8x9j6UXzUNo22CgGYDN9 4arI5/dLKl+8ExQDoHVC6EGIBlV2r4EztPrwkDu2prC+r1g8wNTVMpg+WaK9C304 ImjCRMXkqRW7j57vCaKd8z/ufOhB4X+iA38YiwUXUZ8QfGcpFR3C8Qr0Oycf8CcL U9X1p3k+ioRoqfR87OhMGaGrokqukKM9gQ45w+U9GzmqMw01xuQQTAlSwwz77lf4 IFjm3qUCDGMEscO6ddRUXbwIAUeF1RrMFDpAGszXV4xkDgMftZxEQw== =qeu+ -----END PGP SIGNATURE----- Andrew Mortensen (23): Do not track configure script. Exclude leftovers from autoconf and git when making dist. Fix empty prepath check in lapply and lcksum Quick fix for pam_conv struct compiler nagging. Accept 2845279: Updated rash manpage Fix: lcksum crashes when given a minus (-) line with only two fields. [Bug 2887658]: fsdiff prints multiple lines for changing character devices Fix: missing closing quotation mark in lcksum error message. [Patch 2877346] Add a copy mode to lmerge. [Patch 2524867] Add -p option for ra.sh (for port). [Patch 2899332] Fix and document -p option to ra.sh. Add --with-pam. Pull Wes's path repetition dectection patch from CVS. Accidentally placed dns_sd check in the PAM if-block. [Bug 1816150]: Can't replace dir with file. node_create sometimes takes a NULL transcript name. [Patch 2931438]: Change port back to standard on failure to connect. Fix: -r (use randfile) was being ignored. [Bug 2927309]: ktcheck cores with recursive command files Fix: check argument count when encountering a minus line. [Patch 2930172]: Add support for CRLs Updated radmind man page with CRL documentation. Create longest possible path first in mkdirs routine. admorten (124): Added #ifdefs around rsrc_path item of applefileinfo struct Placed #ifdef around applefile size calculation. Added sys/param.h for FreeBSD compatibility. Using string.h instead of strings.h Updated fsdiff manpage to include type 'a' entries. Added manpage for applefile. Updated to include information about applefiles. Minor edits to manpages. Using _PATH_RSRCFORKSPEC instead of hardcoded path ../namedfork/rsrc. Updated radmind.8 manpage to reflect support for raw IPs in config files. Added #include <stdio.h> to wildcard.c for linux compatibility. Fixed misspelling: 'atterlist' -> attrlist. Compiler wasn't catching errors Added #ifndefs in stub function for systems without ENOTSUP defined (e.g. OpenBSD) Changed errno in stub functions to more generalized value EINVAL. Added -i option to turn on line buffering when stdout is not a tty. Added -i option to turn on line buffering when not attached to a terminal. Added -i option to turn line buffering on when not attached to a terminal. Added -i to usage printout. Updated ktcheck man page to include -i option. Updated to include -i option. Zeroconf support. -R option causes usage print out when not running on OS X. Updated to mention Rendezvous support. Only deallocate dsd reference if it was allocated. lcksum -c sha1 now exits with correct status. Added check for zeroconf headers. Check if ZEROCONF is defined for rendezvous. Modified ReadMe to reflect new requirements. Removed obsolete OS X-specific files. Added .info file for package target. Modified package target to reflect new package creation. Added additional check for prefix usage to eliminate erroneous warning. Applied Makefile.in patches from Jim Foraker. Commented extra tokens after #endif to prevent gcc from complaining on Linux. Added instructions for building Mac OS X installer package. Fixed comparison bug: pointer incremented before comparison. Applied Makefile.in patch from Jim Foraker. Removes config.h in distclean target, and adds . to INCPATH. fsdiff doesn't need connect.o. Added basic progress tracking. Added option to display progress in iHook format. Added -v and -I descriptions. Removed unused variable and putc calls. Added fflush for progress output. Fix for increment inaccuracy when displaying progress. Removed reference to -I. Only print out progress line if percentage increased over last line. A wise man once told me never to pass floats on the stack. Don't worry about the monotonic sequence. Use mkfifo instead of mknod when creating named pipes. Added progress output. Bounds checking correction: >= MAXPATHLEN instead of > MAXPATHLEN snprintf bounds checking correction: >= MAXPATHLEN instead of > MAXPATHLEN snprintf bounds checking correction snprintf style changes: return value >= sizeof(buf) instead of > (sizeof(buf) - 1) Replaced %s-only snprintfs with strlen check + strcpy. Progress feedback based on bytes read instead of lines completed. Clear password when done with it. Make sure len is initialized to 0. -v now plays nice with -P. Report an error if the path we're given doesn't exist. Progress tracking for lcreate. Compile progress source. lcreate progress tracking. Progress tracking. Removed unused externs. Removed unused extern. iHook-style progress feedback for lapply. Higher verbose level check. Define sizeof off_t. Backing out change that should have been made in progress.c Make sure to get defines from config.h first. lcksum -A: verify radmind applesingle headers. Create prompts for a transcript name instead of naming all loadsets fsdiff.out Optional pre-apply and post-apply scripts. Merge added a second do_cksum call. twhich again prints out apply-able transcript lines. t_print can't be static if twhich has to use it. ktcheck -n exits when a k entry is out of date. Support for a default settings file. Better variable name. update function. Make the output a valid apply-able transcript line. Rolling back. Added -i option to force line buffering. Updated to include -i option. Real default dir checks. Make it work with defaults. t_print handles writing out the transcript name for us. Fixed handling of k entries in command files. Directories need help in twhich. Make trap on Solaris happy. Fixed mkdir usage for Solaris. Only trap useful signals. Server StartupItem understands start, stop, restart. fsdiff -P prefix: limit output based on prefix matching. Typo. Reverting to 1.4.0 versions pending modifications to fsdiff's behavior. ktcheck -C: remove files not in command.K from /var/radmind/client. Updated usage. Updated usage. update return value checked for socket/door case instead of exiting (bertrama at umich dot edu) Removed unused trailing slash handler. log level options borrowed from cosignd. New log facility and level options Updated usage for new syslog level and facility selection. First pass at cleaning up after failed update. lfdiff uses highest precedence transcript for path if no -T or -S given. Use pathcmp_case everywhere. Objects for lfdiff.o target. Grammar. Updated to mention cleanup of temporary files. -D description -D usage. Style improvements. Restored lfdiff default transcript changes. Restored changes. lsize: display installed size of transcript and command file contents. If TERM is unknown, offer to cat difference transcript for review. Spelling correction. Client dir pruning handles K-in-K correctly. Cleaner cleanup--yeeha!--of unused files and directories. Corrected -D usage. We pass a NULL fs pointer from twhich to t_print. Check to make sure local file exists. If precedent_transcript returns special, make lfdiff handle it. Better special handling. launchd plist for radmind. Added -f (run in foreground) for launchd compatibility. Fix for comparing paths with high character values. canna (84): changed chksum from an int to a string. added -c chksum type added chksum support Initial revision removed redundant arg to transcript() removed redundant arg to transcript() fixed // at root of file sys. bad sort order check needs to be smarter, ifdef notdef for now added pathcmp() added list of supported checksums to -V output an empty command file causes a core dump, fix: always create a null transcript moved the printing of names of transcript to happen only if there if a "+". Initial revision Initial revision added dist and version added decode for comparing transcript & filesystem. extended concept of "negative" to have special meanings for character special files, ie pseudo devices. a "negative" c will only have it's major and minro device numbers checked, as the owner, group and mode change as you log in and out. added comment describing behavior of negative character special files. added better negative explanation. fixed typo in comment about character special files. bug: once FS is exhausted, quit reading TRAN removed pi_dev from pathinfo struct as it was extraneous and buggy. no longer strip the trailing .T call it what you want. changed pathcmp() to take unsigned chars as MacOS X was unhappy. added -N to upload negative transcripts and set default timeval to 120 seconds. made timevals consistent and reset timeval before snet_close Initial revision libsnet and cmessina libsnet cmessina's big phat code chris and libsnet messina, yoomask added do_chksum() and create_directories() prototypes. chaned yoomask to defumask added sha1 chksum support fixed some spacing. very important. changed special to foundspecial added keyword() removed restart code added keyword() new names special_t() and command_k() changed protoypes to match new names special_t() and command_k() STOR no longer takes COMMAND or SPECIAL call tolower() to make sure name we get back from dns matches how special files are stored and conf files etc. fixed create_directories() so it doesn't NULL out our whole path. retr() now gets specials from the right place. changed -N to -T for upload transcript only added calls to connectsn() and closesn() to share code better in radmind changed do_chksum() to use the new syntax. removed do_chksum() to chksum.h for code share. -K now accepts relative or full paths. full and relative paths for -K transcript_init() knows about full paths. oh yeah, special files need paths too. added a definition of a transcript added that sha1 is the only supported chksum. do_chksum() now has it's own .h no perrors or fprintfs since radmind server uses this as well. do_chksum() returns something, and we check that. do_chksum() return values now checked, and handled. read wasn't checking for EOF in f_stor() verbose goes to stdout instead of stderror t_name is now t_fullname ( full path ) and t_shortname ( what's listed as a transcript name in command file ). can't use stat info if the file is not already present. create_directories() now mkdirs() changed create_directories() now mkdirs(). added a bit more logging to log_info applicable transcripts cannot be uploaded. Initial revision added make install, sweda :) typo, dih fixed bug where a change in permissions of a file in a negative transcript would result in the file's mtime being changed. fixed bug where a change in permissions of a file in a negative transcript would result in the file's mtime being changed. added flag to indicate. fixed bad sort order message to print to stderr. changed getopt to take -u for umask. changed DEFAULY_MODE to 0666. oops! doh! better error message if host not in config file. lapply didn't want to apply new special files. ( MAXPATHLEN * 2 ) - 1 check fixed the spelling of special :) typo on error comment. too bad my proof-reading failed. Another misspelled error. with-radminddir is now the help text as well as how you configure an an extra space, the horror! colesse (23): Initial revision changed return values and the loop for deciding which transcript to use for the compare. Initial revision Fixed the struct entries. added comments. added comments added code for creating a dummy transcript if there aren't any. Added some of the buffer checks. *** empty log message *** added buffer overflow check for encode. added a code.h file for function prototypes added prototypes. made a bunch of changes. added function prototypes. added some #defines for the flag. changed the prototype of transcript_init. changed the skip code to be a flag (as well as rpath). the flag is now passed to transcript as well. added a flag in order to determine whether or not to open a command file if you're using the -1 option. added #define's for the flag. made changes for compatibility with linux. changed the free statement to be correct. changed transcript_free to be correct. commented out the major and minor items from the info structure and added a dev_t type. editor (654): Initial revision Read over code. Added to transcript parsing. Got deletes working. Started to work on update. started to work on deletes for updates started to work on recursion Set up recursion ready for code review Ready for more testing Ready for first real check in Added network code. Ready for download. Added download code. Download works with big files ready for some major test Broke code up broke out update.c Initial revision fixed goto bug. cleared out include files cleaned up include files fixed continue bug Initial revision Initial revision Added alloc and free added new version of acav_parse Fixed reentrant bug. Fixed ACAV reentrant bug / updated verbose Initial revision fixed reentrant issue Initial revision made it more generic Corrected comments corrected error messages Initial revision fixed create special Initial revision Initial revision lcksum.c added comp count Made corrections per code review. Changes per code review added linenum to error message Initial revision removed create_directories code Initial revision removed network code and added calls to connnect.h combined getstat.c code Initial revision added chksum.h added prototype for get added copy.h Correct verbose messages Initial revision removed #include "getstat.h" added -s option to force new files that are hardlinked to be renamed removed -n option Initial revision -K option works now with relative paths Added version.c changed get to retr added path to functions for relative paths added external version using new retr to get files added extern version added extern version Initial revision corrected usage changed return value to -1 Using correct error codes for do_chksum correct timeout value to us global value added external timeout values added extern timevalue added remove to data structure add -n option create one fuction that gets file from server and returns a pointer to its added rename after retr removoed download() Using new version of retr added -n to usage able to use absolute paths now transcript now given without -T fixed -n error and verbose added snprintf using mkdirs corrected verbose corrected verbose - d DIR problem fixed using new version of download download now takes an optional location retr now takes optional location using new version of retr with NULL location first attempt at man page Initial revision corrected verbose added lfdiff set system error exit code to -1 cleaned up verbose added -f to filter files that are checked updated lmerge added -f added copy( ); added stdin support using stdin from stdio.h add -u to set umask update set to default Initial revision Initial revision removed -T option Initial revision update Added lfdiff.1 and default host to lfdiff.c updates per wes added default host Added checksumlist string combined socket and door into one case update per wes Added default host ( rsug ) added checksumlist from version.c file added default host added default host added default host added default host Update to lmerge man page perwes. added file size check Added all man pages cleaned up )'s & ('s added pathcmp.o for lapply init global transcript to { 0 } Updated man pages per Bill lapply now updates temp file before renaming it. Removed strdup from path order checking command file is now listed in the Makefile and written into fsdiff and ktcheck Made /var/radmind/client/command.K the default command file. Correct no-network error. Updated error message. added sort order checking lmerge now skips blank lines and comments. Updated .h files. Changed lchown to chown. Removed unused optopt in lfdiff.c Added -h to options list -K option now defaults to ./ if given no path. Added cast so size would get sent to the server when running on darwin. Added encode( path ) so lapply can get files with white spaced names. Download now takes a temppath[ 2 * MAXPATHLEN ] and returns an int. Corrected do_chksum error reporting. Added access check for -f transcript Update -f description to include access call. Added sort order checking. Added TODO and FAQ for centralized notes Changes from 1/3/02 conversation with Wes. Updated man pages to have correct SEE ALSO section. Added twhich. Brought lifo.c code inline. Removed lifo.c and lifo.h from twitch. No longer needed. Added new command file code. Corrected sssize_t error in chksum.c Changed #include "snet.h" to #include <snet.h> ktcheck updated with regular file info. Changed all rr's to ssize_t to match read return value. Added new command file code to fsdiff. Removed recursion. Working version w/o recursion. Added support for names with spaces. Added support for spaces in file names Skip files marked with a -. These lines are just output to new tran. Added support for spaces in file names in lmerge updated TODO Skips comments and blanks lines Added check for min of two trans and a dest tran when run w/o -f. Defined MIN and MAX for solaris only. Removed unused #define MIN()'s. Added -s option to search server command files added -s information. added -s option to run on server Updated twhich in make file to work with -s Added -q for quiet mode Added -q option to ktcheck, lapply, lcreate Correct verbose. Update missing checksum error message. Added -q and new -v options Updated usage message updated usage message Changed sprintf to snprintf added linenum def to support new download code changed sprintf to snprintf Updated verbose messages changed sprintf to snprintf changed sprintf to snprintf Changed sprintf to snprintf changed sprintf to snprintf Added files for HFS+ support Small corrections. Removed bprint.h Added apple file support Added support for apple files. Added support for apple files Removed bprint Added support for apple files Changed sprintf to snprintf Added support for apple files Correct cvs merge error Commented out afile.h and send_afile.h so it will compile on sun Removed OSNAME=. Now using sun and __APPLE__. Updated #define SOLARIS to #define sun Updated #define DARWIN to #define __APPLE__ Changed #define DARWIN to #define __APPLE__ Removed debugging code. Updated #define SOLARIS and DARWIN to #define sun and __APPPLE__ Small fix to get code going on darwin Added do_achksum( ) for apple files File size is now sent to the server in send_afile Removed debug message. Cleaned up #ifdef __APPLE__ code Added #ifdef __APPLE__ around send_afile( ) Added '\m' ( CR ) support Renamed download.c download.h to retr.c retr.h Renamed download.* retr.* Made chksum inline during download. Renamed download.* to retr.* Added support for carriage returns ( ^M ) ( \r ) fixed snprintf bug *** empty log message *** Moved send_afile* to applefile* Fixed snprintf error. Corrected error message when open fails on transcript. Combined lstat into getfsoinfo( ) which now calls t_convert Using new getfsoinfo to get file system object info Removed old version. New version of applefile Added chksum.h Changed buff size to 8192. Using new applefile.c Using new applefile.h Corrected retr_applefile usage Using new applefile.h and store_applefile( ) Using new applefile.h Added case 'a' for applefile getfsoinfo now updates size of 'a' files Updated error messages to show decoded path names. Replaced afile* with applefile.* Added finfo buf Added verbose and chksum to match applefile code Correct file download checking error messages use decoded path name Correct retr_applefile - was not accounting for all byte read from wire Added \n after last set of dots in retr -P now decodes the transcript path before checking prefix Made rsrc_path static Removed _RADMIND_TRANSCRIPT_DIR Removed #ifdef __APPLE__'s Fixed getfsoinfo call -- now passing temppath Cleaning up verbosity Corrected my do_achksum error - was not calling from fsdiff Added support for apple files. Added #ifdef __APPLE__ around apple code with stub function for !__APPLE__ Correct Makefile for sun Moved umask() call into getopt loop. Added cast mode_t to umask() call. Removed unused variable fullpath. removed all #ifdef __APPLE__'s Added store_applefile() stub function for non-apple builds. Added stub functions for retr_applefile() and chk_for_finfo(). Removed all #ifdef __APPLE__'s Added applefile.o to LAPPLY_OBJ list. Removed auth.* per 2/8 code review. Remove location from retr_applefile( ). Removed auth.h. Updated retr( ) call - removed NULL location. Changed convert to radstat. Updated retr( ) call - removed NULL location. Updated retr( ) call - removed location. Changed convert( ) to radstat( ). New t_convert( ) files. Changed t_convert( ) to radstat - removing old files. Moved attrlist local. Now passing finfo_buf into do_achksum. Added local attrlist. Added local attrlist. Using new do_achksum - passing in finfo. Updated do_achksum( )_ stub function to take finfo_buf. Added #ifdef __APPLE__ around apple headers. Changed convert to radstat. Renamed store_applfile.c stor_applefile.c. Added #ifdef __APPLE__ around apple includes. Using new stor_applefile( ) function call. Using ae_ents array for as_entry info. Updated #ifdef __APPLE__ Added applefile.o and connect.o to all needed files. Added sort order checking. Changed all chksum's to cksum. Added -C option for producing createable transcripts. Made dodots global. Corrected dodot's type-o. Updated cksum info. Added -c option. Updates per 2/8 code review: stor() and stor_applefile(). Added new checksum code. Added make package option Correct tab's stor_applefile() correction. Corrected linking problem - was not decoding target path. Moved stor_applefile.c into stor.c. Removed unused file. Updated make package permissions Correct tabs Update per 0.6 code review. Added radstat.o to lcreate. Changed default to -NO- for startup make package now creates the ../radmind.pkg package Update make package Removed -Wstrict-prototypes from default CWARN Correct snprintf to check for LEN - 1 not just LEN Corrected closing of rsrc fork in stor and chksum code Adding default command file. Update now takes a stat struct lmerge does "- d..." properly now. Created meta package. Case size to (int) on server Removed unused TRANSCRIPTDIR in twhich. Removed -c option. Added directory structure creation. Removed double error message when retr failed. Removed exit( 1 ) on errors. Server now starts if VARDIR/config file is present. Updated license info. Updated address. Added section 8 man pages and automatic variable updates for man pages. Changed default directory permissions to 0750. Updates per wes. Removed apple-pos.T to match tutorial. Updated description. Default apple negative transcript. Added man page parsing to install. Autoconf files. autoconf default install script Not ready for autoconf yet. Added #ifdef __APPLE__ around unused variables. Added missing close. Correct cksum on special files. Correct special file errors. Corrected file size error. Corrected bugs. Added version to .info files. Added _VERSION_RADMIND to .info files. Added applefile.5 Corrected make install error. Added MPKGDIR for mpkg creation Updated documentation. Made solaris default OS. Added CVS dirs. Removed ./libsnet/profiled Added bug listing about min. size. Cleadned up formatting. Added list of supported checksums. Files needed for autoconf File needed for autoconf File needed for autoconf configure script from autoconf Updated autoconf files. Removed warnings. Removed -Werror Added snet and ssl check removed unused break Removed Makefile. It is now create by .configure Updated configure via autoconf Updated configure Commented out ./private/var/tmp to match the tutorial Added comment about what '#' are used for. make clean now removes autoconf files. Not using autoheader. No longer need file. Added autoconf files. Removed Makefile from dist. Added make dist clean to remove autoconf files Added missing "#" to match tutorial Added support for IP's in config file If checksum is missing ( "-" ), mtime is not updated. added herror to AC_CHECK_FUNCS() lcreate now skips comments and blank lines. Tutorial on how to us radmind on OS X The server now checks that lines in the config file have only 2 arguments. Removed -n for lcksum on negative transcript Removed perrors from do_acksum to match do_cksum. Changed printf to fprintf for unkown type error Removed duplicate error message for failed update() call. Updated error messages to actually say something useful. Removed -v option - verbose is now the default. Updated retr_afile() error messages. Added missing ; Updated error messages. Renamed apple-neg.T negative.T Moved tutorial.txt into a word file. Added errno for do_acksum stub function. Correct snet_writef call to check for < 0. Changed apple-neg.T to negative.T Changed apple-neg.T to negative.T Set errno for retr_afile() stub function. Error reported on invalid config file lines, but now continues through rest If rsrc fork len = 0, we skip all rsrc related code. Added check for invalid transcript lines with one field. Updated error messages. Added --with-server=SERVER to configure System errors now exit( 2 ). Updated error message for rename. Work around for mkdirs problem on OS 10.1.x. Added 077 umask to startup script. Added <sys/param.h> for HP support Changed /var/radmind permissions to 700 for OS_X installer. Removed perror( "" ) call. Moved -l after -L for sun. Cleaned up formatting. Changed retr*( to take a ssize_t for transize so we can check zero length files. Changed retr*( to use ssize_t fixed bug where lmerge would try to remove dirs that had no files and therefore Removed -Wconversion Corrected some type-o's Corrected error message type-o Added fix for borken mkdir on OS X. Fixed OS X mkdirs bug. Again. Added -v option to support old scripts. Verbose is now the default making -v Updated list. radstat now returns finder info for directories on apple. Changed pi.afinfo to pi.pi_afinfo Added support for directory finder info. Added #ifdef __apple__ around null_buf to hide from non-apple systems. Added support for directory finder info. Moved apple only int's into #ifdef __APPLE__ block Correct memcmp call. Fix for d finder info Update to dir finder info. Updated TODO and HISTORY to reflect finder info. Removed case statements for stor calls. Updated error messages. Corrected type-o's for apple code. Removed. Added call to snet_eof after snet_get_line_multi to test for connection being closed. Added chanes file to list Adding wildcard code Removed unused #include Checking for transcript access. Added wildcard support to server. Added transcript check for stat and retr commands. Sort /private/tmp Fixed bug where missing dirs with finder-info would not be created. Removed unused mkdir from make install. reverting to version 1.18 CHANGES file Added full history for versions 0.6.0+ Correct file access checking fixed lmerge -f problem with new mkdirs fix README file added 0.8.0 changes Corrected type-o Corrected 0.8.0 changes Added access checking changed strings.h to string.h and removed stdio.h Commented out extra #endif tokens Added 0.8.1 changes Added directory finfo Added wildcard info Added upgrade path added int cast for isdigit Added 0.8.1 change Clarified special.T verification process Added -D path option to specify radmind directory Added -D option removed in-line exits updated exit codes Without checksum, special transcripts are always updated. Updated special transcript info. Updated special transcript info. Fixed ( yet another ) lmerge -f/mkdir bug Corrected hard coded config path problem. Removed all hard coded paths. Changed server timeout to 60 minutes. Server now creates _RADMIND_PATH/special on startup Correct error to say "-A..." not "-T..." Updated LOG_INFO to LOG_DEBUG for retr, stor, and stat. Changed LOG_INFO to LOG_ERR for invalid client. Added special file info. Added numbers to generic errors. Updated closesn error messages Changed return values. Cleaned up error cases Updated exit values 0 = okay, 1 = error with change, 2 = error w/o change Removed extra spaces from error message Updated exit values Removed snet check from autoconf along with some other unused checks. Added -L option to usage Updated options to include -L Moved change log to web page removed extra variable Updated upgrade path and added change log link. removed snet.h Added TLS support Added TLS includes SSL Update. TLS level 0, 1 and 2 working Fixed recursive make clean Added TLS support Shared TLS code. Removed #ifdef TLS Fixed TLS autoconf problem on solaris Moved man pages into man/ Changed STARTTLS to STARttls to match other RAP comamnd Added directory structure and RAP info. Added more options to configure Removed rm man Fixed location of cert dir for radmind package Made location of default certs dynamic. Correct -w 1 access Fixed man page CERT info Added TLS info. made default cert location certs/XXX Change client tools to use /var/radmind/certs/XXX as default location. Removed configurable cert location Updated date. Cleaned up format Cleaned up formatting Server returns error message if STAR is given but TLS not offered. authlevel 1 now checks command file on STAR command Moved chdir to before TLS code so it cand find certs fixed CN based special files Server CN must match host string. Updated error message for when TLS is required by server but client did not start. Fixed special dir selection to pick match from config file. Don't print server's CN when it matches host Updated "Now access" to "No access" Corrected spelling mistake. Added info on CN matching in config file. Changed -f to -F Added -F option Removed .Phony Fixed --with-radmind-var= Added pam code Cleaned up PAM code Added -U option to turn on PAM user authorization. Fixed PAM_POMOPT_ECHO problem. Removed some syslogs. Added -L flag for user auth. Changed some error messages. Tarket of links ( 'l' ) listed in negative transcript is ignored. Check to make sure transcript is a regular file. Added comment. Added info about negative symbolic links Corrected usage. Added -U information. Added -L, -P, -U info Checking for TLS before sending LOGI command New version of libsnet that uses libtools EXCLUDE Give error if no transcript given for '+' Setting warnings based on compiler. Changed open to O_WRONLY -l pam is only used for the server. Updated Description Updated description Adding tutorial files tutorial negative files Added server-var.pkg Removed FAQ Moved TODO to web page Changed summary to match usage. Changed usage to use create-able-transcript With -f option, lmerge will continue if a directory is not empty - transcript '-' lines are preserved. added BSD ERRNO for dir not empty re-ran autoconf '-' lines are now parsed. Made PAM optional. Only require transcript name for downloads. Added tutorial-negative.T to server files. -lpam set by ./configure for non-pam builds. Added support for special applefiles '-' lines are printed without '+' Fixed '-' line formatting Added SystemEntropyCache Disply pathdesc along with line number on error. Removed O_EXCL for temp file open so we can overwrite files temp files Made 10.2 required for the server Updated copyright info. Updated copyright info Added copyright info Updated copright info Added reference to radmind(8) for STAT command -n now verifies the checksum listed in the transcript Fixed logic so stat returns correct info on special files and not always the default. Changed -L option to -l. Fixed -L getopt bug. Removed -P option Corrected type-o Removed upgrade information. Added /Library/Logs and /System/Library/Extensions.kextcache Added info about encoded path Autoconfed default authlevel. If special.T is wrong -n now reports update needed. Included sys/time.h for linux 2.2 -N does not check for file Ran autoconf for solaris Removed randfile code. Added comments to randfile code. Added -r to use randfile. Added connection limit. Fixed authlevel bug Removed multiple messages to client. Changed default connection limit to 0 Added meaning of 0 max-connections. Added check for neagtive maxconnections. Changed intermediate "Ready to start TLS" to 320. Added intermediate "320 Ready to start TLS". Removed intermediate "320 Ready to start TLS" message. Making STARTTLS more SMTP like. Removed second #include <sys/time.h> Removed second #include <sys/time.h> Added missing "\r" to STARTTLS command. Added /automount and /private/Network for automount ( Thanks to Nathan. ) Fixed connectionmax bug Fixed --with-authlevel bug. When using keyword FILE, we now check for leading "../" and internal "/../" Added -r option to usage statement. -n implies auth_level = 0. Fixed special file location. Giving error message to client when PAM fails. Cleaning up error messages. removing more files for distclean Moved Apple only int into ifdef __APPLE__ Added support for files with " " in path. Added that path must be unencoded. Added preflight script to backup config file. Cleaned up package Mentioned where the Assistant can be downloaded from. Moved man pages to /usr/local/man Updated retr_applefile on non-apple systems with large file info. Adding config.h for largefiles. Added config.in Listed TLS as a requirerment for -l. Autoconfed SIZE_OFF_T Using u_int's. Added stdint.h for apple compile. Replaced stdint.h with inttypes.h Change stor_applefile to use PRIofft to tell server size. Added green brain to installer. Options audit. Removed -T option. Using pathcmp instead of strcmp. Warning if prefix isn't found. Cleaned up man page install Applefiles can now have only rsrc fork. Added support for negative length applefiles. Creating rsrc path for checksum Creating rsrc path for checksum. Added info on negative applefiles. Always checking transcript size against actual file. Added -F option. fitterhappier (82): Compression level option is -Z, not -z. Fix for pre- and post-apply script checks. Feature request 1421762 - ra.sh prompt changes Support for alternative fsdiff root paths (thanks to Jeremy Reichman). Added -r usage. [Bug 1438290] Fixes for AppleSingle support on i386. Patch #1448910: lcksum multiple transcript (CLI wildcard) support Using pathcasecmp so -I is respected. twhich -r: recursively twhich all path elements (patch #1470196) Added -I option to summary; fixed typo. Added -I for case-insensitive sessions. Fixed exit values broken by patch #1448910. Exit with 1 if verification fails. Committing [ 1488099 ] Additional wildcard patterns Updated manpage for wildcard patch #1488099. Accepted patch 1562455: Use Updated DNSServiceDiscovery APIs. Restored -R option with note of deprecation. Fixes for Gab's ra.sh update infinite loop. Check return value of read to handle EOF. Fix typo processing postapply scripts. EOF means No. Respect TMPDIR environment variable if set. Thanks to Hauke Fath for drawing attention to it. Accepted [ 1443298 ] client status reporting. Accepted [ 1443298 ] client status reporting repo man page Added repo.1 to MAN1TARGETS A function is not an if block. Fix double message reporting if message is only one argument. [Bug 1677170] undefined reference to DNSServiceRegister on Suse Linux 10. -R is deprecated. -B is the new wave. Clarified -B usage, removed -R. Fix: Bug 1728520: ktcheck -C does not work with -K. Fixed manpage rendering issue which hid some of the text. Merged radmind-1-10-0-exclude-branch. Clip trailing slashes like fsdiff. Skeleton code for local installs/pkg creation from a transcript. [ Patch 1833304 ]: Legacy port failover. ra.sh up, shorthand for ra.sh update. Correct parenthesis location to fix -C. Default port is now 6222. Failover in connect.c handles legacy port (6662) servers. Daemon's default port is now IANA-registered 6222. [ Bug 1839610 ]: lcksum fails with -a and missing files. When excluding, only move transcript ahead if we're not at t_eof. Shouldn't call transcript_parse at all after excluding. Improved port handling. Removed unused variables leftover from getservbyname cleanup. Silence warnings about mkprefix argument pointer types. Fix: [ Bug 1856125 ]: Exclude File Names not escaped. Clarify current exclude behavior. Moved out of contrib. Not part of default bin targets. Moved out of contrib. Added t2pkg targets. Not part of default bin build. t2pkg builds successfully without hacking now. Use >= MAXPATHLEN instead of > MAXPATHEN - 1. Fix universal binary builds on Mac OS X 10.5. Fix universal binary builds on Mac OS X 10.5. Added datarootdir variable to make autoconf >= 2.60 happy. Reorganized universal binary setup. (Mac OS X) Feature request 1834497: symlink ownership should be recorded and set. Universal binary builds on OS X 10.5 can now run on 10.4. Accepted patch [ 1919220 ]: Includes in config file. If PAGER is set, use it to display difference transcript. Accepted patch: [ 1716642 ] ra.sh "alternate root" option Standardize snprintf return value checks. Standardize checking of snprintf return values. Centralize sanity checking of events to report. Fix regression causing twhich to print out bad special file lines. -I fully implemented in ktcheck, fixing case-insensitive special.T. ktcheck 1.12 supports -I, so pass it to ktcheck if given. Document ktcheck -I. Eliminate weird casts for string parameters in retr calls. Silence compiler warning by removing unused variable. Removed redundant openssl header check. radmind.org Script to retrieve libsnet from SF.net. Instructions for building from CVS. Added REPO spec. Fix leak. Fix [Bug 2038036]: Infinite loops possible in fsdiff with excludes. Special files are unaffected by exclude patterns. Temporary storage of errno doesn't need to be a global. Proof-of-concept code using Apple's FSEvents API. Can be used with fsdiff. Fix bug 2541171. Patch from bawood at umich dot edu. Eliminate old workaround for broken mkdir on old versions of Mac OS X, since ourcode assumes at least 10.4 to build in the first place. mcneal (172): Original version from mdw@... Removed -L/usr/lib from SSL search path. Moved -D macros from Makefile.in into config.h.in Added -R to usage. Give error message if -R is used when Rendezvous is not supported. Fixed -Lpam Added known issues section and listed "hfs_bwrite" problem. Checking for EOF on snet_read. Dump connection on bad read in f_stor. Setting errno to zero on EEXIST. Removed erroneous else. Corrected syslog type. Added /usr to openSSL lib search path. Added arpa/inet.h for htons. openBSD port. freebsd port. f_noauth exits so client will get error message. f_notls exits so client gets error message. Removed -v from usage. Added -F to usage. Skip blank lines and comments. decode returns NULL on overflow. Using new decode(). Added warning for SSL issue on red hat 9. Fixed type-o remove a directory and all its contents. Added rmdirs and re-ran autoconf Removed mkdirs.h Using dirent. Added support for transcripts in directories. Using rmdir for directory removal. For special files, print listing from special.T Fix for broken readdir() on HFS+ Added support for directory structure. Added support for directories. Needs testing. Transcript root code. Added limits.h for solairs. Thanks Paul Dlug for the patch. Updated usage. Thanks Scott Hannahs for the suggestion. Updated usage and clarified -v option. Thanks Scott Hannahs. Display date of build in man page. Must run ./configure to update date. Removed printf's. Using PackageMaker. Fixed missing fsdiff in package. Saving output of static decode buffer. Removed debugging printf. Using correct path when checking for objects in a negative space. Added version to package name. Fixed package permissions. Files for packaged on 10.3. Display type of transcript matched - positive or negative Removed extra transcript_parse that moved passed unwalked space. Using correct path for unlink when using -f option. Special files are now located in the same directory structure as the command Updated make package info. Only setting man page date when make dist is run. Propper handling of directories on OS X. Moved X case to transcript Added X case on rastat failure and fixed negative dirs on HFS+ Changed 10.2 requirement to 10.3. Removed unchecked decode. Checking return value of encode. Checking for / in path name. Fixed pathing issues. Fixed non -f case. Updated path-description for optional <command-name> argument Cleaned up list API. Using new list API Wait for disks. Thanks Justin Elliott Corrected use of snprintf. Not doing buffer or string copy. Corrected list logic Corrected break logic in f_retr Corrected use of check_list Removed unnecessary assignment. Rolling back to previous version. Freeing memory on error. Removed extra text. Thanks Scott Hannahs Display info on time update and only calling strtol once. Added more exit code checking, and support for -w and -h. Type-o Installing ra.sh Removed decode from stor_applefile. Passed in path is already decoded. Corrected type-o. ( gelle ) sprintf and strcpy audit. domain name matching is case insensitive. Code cleanup CN matching case insensitive Using ANSI C function prototypes Using ANSI C function declarations Search transcripts listed after a special file. Using stat instead of access Fixed special.T searching. Adam Bisaro <adbisaro@...> Added -a option that specifies the address on which the server should chmod after chown to preserve S_ISUID and S_ISGID mode bits Corrected type-o. Using portable signals and added -t option to retain tmp files Added info on -v and -vv Fixed building on solaris. Check for SIZEOF_OFF_T command file in a command file support Using new logo in package. Updated usage. Setting correct verbose value. Added -% added -a to check all lines with -n Using correct verbose value. Support for -% 1.3.2 merge Updated usage. Removed duplicate change variable. Updated with k-in-k info. Moved daemon's tls code in tls.c so it can be shared. Corrected bug that would cause an exit if an included command file changed. mkdirs checks if directory exists before returning error. Lapply warns and continues on sockets. Fixed problem where -f would print '-' lines from the low transcript twice. fixed twhich on the server. Added support for alternative names in certificates. Thanks Maarten Thibaut. Fixed double free. Clip trailing '/'s. type-o Added -A and -a. '/' clipping fix for root path fixed some minor bugs Checking for pre and post apply directories before running scripts Autoconfed mktep and pre and post apply dirs type-o -U requires auth-level > 0 Cleaned up -w usage Display tran name on non f and a lines. Only list a special file once. Include command file name in transcript struct Print name of command file as a comment List transcript change for every filesystem type in apply-able transcripts. Remove .#* on make dist Only list transcript name for PR_TRAN_ONLY, PR_DOWNLOAD, and PR_STATUS_NEG Using PR_STATUS_MINUS to display '-' lines Added -D option to support alternative radmind root dirs. PR_STATUS_MINUS using correct value Added ra.sh checkout/checkin ra.sh -V displays version Added default transcript name on create Certificate Authority Shell Script Cleanup build on pre-3 gcc (e.g. NetBSD 1.6) because of a C90 vs. C99 issue. Send 240 and size in one write for RETR Disable NODELAY on f_stor and portability improvements Fixed bug when using KinK with -n Print the transcript name of lines beginning with a '-' Do not checksum negative files. Added support for case insensitive file systems. Fix for case insensitive file systems Spelling correctiong. Thanks Noah Abrahamson Production version of lsort Fixed lfdiff build problem. Added missing case_sensitive. Added -I for case insensitive compare Checksum optimization fix and small memory fix. Fixed -n -c size checking bug with apple files. Added helpful error messages for switch errors. Checking sort order on all types, not just A and F lines. ra.sh man page Install rash.1 Disabled SASL for LIBSNET build Fixed 10.4 build warnings Type-o corrections Cleaned up pathcmp code. BUG #1289097: Using umask() to set safe umask on temp files. BUG 1314089: Added lsort man page. Added lsort man page. lcreate -n correctly checks files. BUG 1337768: Pre and post apply scripts only run if direcotries have contents. FIX 1352578 - Added CN to special file path description Sean's fixes for blackops use. umeditor (96): Patch 1384555: Added support for distdir. ( Jose Calhariz ) Patch 1384558: Give warning if tmp exists. PATCH #1408441: A patch to the radmind.8 manpage explaining the numeric range. Including config.h Added support for compression from Maarten Thibaut's patch. Added support for compression from Maarten Thibaut's patch. Removed "\n" from User time syslog message. Added information on compression. PATCH 1420980: prompt for username when USER=root ( Sean Sweda ) Follow symlinks for make dist. Add support for inline comments in config ( patch 1420950 ) BUG 1346368: Don't duplicate - lines with -f Closed comment. OS X packages create /var/radmind/client, postapply and preapply [ BUG 1372729 ] twhich can now handle null/empty transcripts. BUG #1429169 ktcheck not sending quit if special file is created. Added <sys/resource.h> for portability. Moved MIN and MAX defs into config.h.in for Solaris 9 compatibility. Pass correct configure args to libsnet for ssl and sasl ( bugs 1435999 ) Adds the -U flag to ra.sh for overriding $USER. ( patch 1435665 ) Using #ifndef for MIN and MAX rather than #ifdef SUN to be more portable. Corrected call to compression stat code. From 1.6.0-branch: Fix for non-apple compile. Removed ru_inblock and ru_oublock as they are not defined by POSIX. [BUG 1476399] ra.sh now displays correct version number Added --enable-universal-binaries ( Patch #1489787 ) Fixed package creation with dated versions. Updated URL. Updated URLs and bug reporting information. Ran autoconf Using exit to be constant. Return correct exit value when checking multiple transcripts. Do not checksum files that are going to be removed Corrected type-o Formatting. Centralize v_logger code Fixed formatting and type-o Including applefile.h. Removed configure.ac from exclude list so it will be included in source Made capability reporting more granular. This will allow for support of Added check_capability to generalize capability checking. Removed extra "\r\n" from zlib capability reporting. Server lists REPO as a capability. fsdiff displays the command file name when reporting command file line errors. [ Feature #1592739 ] Fixes the find argument to something which will work with both older and ra.sh auto now does pre/post apply. Default rap_extensions to 1 so REPO is offered by default. Exit immediately if server returns 5xx to avoid cascading error messages. Exit on all non 2xx server responses to avoid all cascading error messages. syslog() before snet_writef() to avoid SSL_write's errno resetting. Log server response when run in verbose mode. Clarified the usage statement. Removed new verbose log; snet_getline_multi already does it. Moved syslog calls before snet calls so correct error is reported on server. Added details on REPO's log format. Updated copyright info. Report error if server does not support reporting. Verify that event is a single word and has length > 0. Cleaned up formatting of STAT section. Updated copyright information. Exit after displaying version number. Clarify that events are limited to a single word. Corrected format of debugging information. Corrected spelling mistakes and fixes inconsistent use lapply doesn't attempt to report when run with -n ./configure searches for the echo binary for ra.sh. This is to address Added list_remove(), list_size() and no longer printing head and tail info Added support for minus transcripts and special files in command files. Fixed bug where included command files not already on the client Corrected reversed logic of dealing with minus lines. A port to HP/UX. Using correct variable name when substituting path to echo. Added information on minus lines. Removed unset string from error message. Added -C option to lapply that will create missing intermediate directories. Added -C option to lapply that will create missing intermediate directories. Added missing include for mkprefix.h Added -P option that allows you to specifies a directory that contains make dist will now create a compressed tar ball. make package will now create a compressed tar ball. make package now removes temporary file. Allow directories to have 5 or 6 arguments on all platforms. This fixes a bug Set correct paths for TLS related files. [ Bug 1785746 ] No need to call htons() on ports found by getservbyname as they are returned in network byte order. Fixed a bug where ktcheck -n would not report the correct information or Added -r to usage statement. Added -T option to only merge transcripts, not files. [Patch #2014521] If path doesn't contain a directory, canonicalize it by prepending "./". Add -e to ktcheck and lapply to allow changing the event type that is rerorted Automatically convert paths between absolute and relative paths based on the Using correct variable; twhich doesn't have a path_prefix Reordered path_prefix cleanup code to better deal with relative paths. Add -e to ktcheck and lapply to allow changing the event type that is reported Allow Radmind to be installed on non-boot volumes. [FEATURE REQUEST #2025217] Only use $USERNAME if $USERAUTH is enabled. wes (147): Initial revision added hardlink() removed "mode" from links free hardlink data structures added code to free hardlink DS added free routine for hardlink DS use while() in free routine fixed white space hardlink_free() exit on errors to reduce user confusion change head to dev_head fixed white space changed struct info to struct pathinfo changed struct info to struct pathinfo changed struct info to struct pathinfo improved error messages changed -t to -T to match .T much improved format of transcripts removed flag made skip global made skip global added global for chksum added printing flags formatting only call do_chksum() when chksums are on use version.c instead unified makefile consolidated references to do_chksum() temporarily fixed bug calling snprintf() fixed output() function to send verbose output to stdout temporary fix removed t_convert() and pathcmp() which are shared short name for special transcripts should be special.T um, only one rename... added hardlink_[gs]et_changed to detect when the target of a hardlink has changed. merged hardlink_*_changed into hardlink_changed() moved hardlink_changed() code into the 'f' case fixed daemonize code (wasn't creating correct env) change default file permissions to 0666 added flushing of '.' to retr() move path to gnu diff to Makefile made dots consistent and don't do them when not on tty default dodots to off fixed patrick's wanky mistake. dude... heard of "make"? fixed checksumlist. another broken bit of code from "The Editor" out of not normally checked in files covert to CVS-style make dist "the standard {in,out}put" not standard out, stdin, or whatever else add nice comment line wrap issue fixed includes removed reference to lifo.o removed nasty Makefile -DSOLARIS, use -Dsun gcc predefine fixup makefile to move need-to-edit lines to one section fixed null termination bug in base64_e reorg Makefile so it's easier to determine which compiler to use replaced spaces with tabs added a little code so that output of twhich can be passed to lapply move gethostbyname() call to a point when we could reasonably give an made the "continuing" message more clear lapply: initialize next pointer in create_node() fixed -K symantic in fsdiff oops added support for minus lines as input to fsdiff fixed multi-line string warnings spelling error missing include files removed annoying, pointless, confusing message added (temporary) code to work around bug in MacOS X 10.1.4's gethostbyname() changed perror to use path of just stat()ed file, rather than the name of missing includes slight code reorganization, to prevent access of uninit variables forgot about profiled/CVS... added our "complete" todo, along with rough timelines simplied directory finder info code warning reduction need to set finderinfo in transcript to null if there's no 6th arg fixed make install for use with autoconf fixed some typos fixed wildcard prototype removed done stuff first pass at CLI error remoting model better user feedback further explained "sort" once more with feeling typo added -L logfacility option to radmind server clearer formatting when the server closes the connection, it should say that it is don't need that sprintf() moved verbose before potential error better message when cert name is wrong added param.h for FreeBSD, per Jim Zajkowski removed optarg from -d, per Jim Zajkowski blow away file flags, if lapply wants to change something sometimes accept returns bogus errors rearrange line scanning code to get an arg count check reduce most timeouts to 1 minute, to encourage recovery from network failures noted a memory leak and its solution Fix for serious bug: when the FS has a directory, and it matches check the number of args on 'a' & 'f' lines fixed 0 vs NULL removed pam const declarations added sys/time.h for certain (old) Linux flavors added largefile support added support for long long & negative numbers made radmind protocol 64 bit added support for non-largefile systems typo... more largefile changes use the posix types, rather than the bsd types prevent 2nd error is the uploaded file is too large missed a largefile strtoofft() added a minimum to the number of args for transcript parsing added (notdef) code to ignore a trailing '/' added force flag to ignore size & checksum errors fixed "typo" preventing compiles on anything but Mac OS X improved ktcheck's interactivity hosts are found in the config file, not the command file .... added pipelining to lcreate fixed up pipelined store stor now returns so lcreate can retrieve any server errors added debugging for pipeline problem ugly bug in snet_read() here's an even better way exit on local errors only return from stor_*file() routines on error if there's a network error better %-done in fsdiff removed (one) redundant pathcmp() first pass at positives in negatives only use pathinfo in transcript.c Fixed the cases where radstat() fails. Might specifically check made mode changing code common added ra.sh after much harrassment from colleagues don't allow '-' lines to escape transcript_select() break '-' loop typos & compiler warnings moving the chmod & chown code around broke the not-updating 's' files that don't exist missing path run chmod if we chwon an set-uid or -gid file typos in comments & error messages make fix & typos (from Matt & David @ Columbia) Readability improvements to transcript_select. second pass at limited fsdiff ischild() is true when child & parent are equal added TCP_NODELAY to client & server move checksum code to print routine wescraig (11): fixed bogus line continuation first pass at improving speed of fsdiff without checksums fixed typo Fix for [ 1479940 ] Mistaken comparison of function return Accepted [ 1440091 ] cksum.c, retr.c, stor.c: compilation warnings on base64_e() Accepted [ 1440073 ] fsdiff.c compilation warnings on missing const qualifier fixed a loop in auto when ktcheck finds changes but fsdiff doesn't added an optional path to update & create updating sub-make methodology to POSIX added undocumented USE_ASCII define for Mac OS X fix bug in {} matching reported by <larkost@...> ----------------------------------------------------------------------- hooks/post-receive -- radmind This is an automated email from the git hooks/post-receive script. It was generated because a ref change was pushed to the repository containing the project "radmind". The branch, master has been updated via ce09d4c6f94f552b971c69affbae4bd62e8d385c (commit) from e05ef4d5de7344b196617c0cab3f6c345e6f2b3d (commit) Those revisions listed above that are new to this repository have not appeared on any other notification email; so we list those revisions in full, below. - Log ----------------------------------------------------------------- commit ce09d4c6f94f552b971c69affbae4bd62e8d385c Author: Andrew Mortensen <admorten@...> Date: Wed Jun 23 16:51:49 2010 -0400 Create longest possible path first in mkdirs routine. Apply patch from Wes. ----------------------------------------------------------------------- Summary of changes: mkdirs.c | 50 +++++++++++++++++++++++++++++++++++--------------- 1 files changed, 35 insertions(+), 15 deletions(-) hooks/post-receive -- radmind
https://sourceforge.net/p/radmind/mailman/radmind-commit/?viewmonth=201006
CC-MAIN-2017-47
refinedweb
9,016
61.22
Even though the ink is barely dry on the documentation for Active Server Pages 3.0, Microsoft is already hard at work on the next generation of their core server-side programming technology. In this chapter, we introduce this new product, and look at what it is all about. Currently called ASP+ Next Generation Web Services (though this name might yet change) we'll see why we need a new version of ASP, and explore the concepts behind its design and implementation. While this book is aimed predominantly at experienced developers who have used ASP before, we start out in this chapter by examining some of the core issues involved when you decide to migrate to ASP+. ASP+ is designed to be backwards-compatible with earlier versions of ASP, with only minor changes required in some circumstances (we explore these further in the appendices). However, more to the point, you can install ASP+ on an existing Windows 2000 server alongside ASP 3.0. This allows you to experiment with the new version, without requiring a separate 'test bed' server. You can continue using existing ASP applications, and migrate them ASP+ when you are ready, so your investment in ASP is not lost. But simply porting your applications to ASP+ will only give you a few of the benefits the new version offers. ASP+ has many new features that provide far greater ease of use, more power and better runtime efficiency, but to take advantage of them you will need to understand more about the way that ASP+ works. As we are writing this book using a preview version of ASP+, we can't be exactly sure of all the features of the final release. But thanks to the information and assistance provided by the ASP+ team at Microsoft, we can be pretty sure that the content of the book will be reliable and useful with the final version. We'll also be maintaining a special Web site that is accessible from, where we'll document changes as the beta and final release versions appear, and provide some detailed information as well. So, in this first chapter, we'll cover: We start with a look at the way that ASP and ASP+ have evolved, as this will help to set the background for understanding and working with the new product. For more information about working with COM+ and previous versions of ASP, check out Professional ASP 3.0, (ISBN 1-861002-61-0) from Wrox. Although it seems to have been around forever, Active Server Pages is only some three-and-a-bit years old. Since its inception in late 1996, it has grown rapidly to become the major technique for server-side Web programming in the Windows environment (and on some other platforms using other implementations that accept the same or similar syntax, such as ChilliASP). But it didn't come from nowhere – the foundations lie much further back than that. Traditionally, dynamic Web pages have been created using server-side executable programs. A standardized Web server interface specification called the Common Gateway Interface (CGI) allows an executable program to access all the information within incoming requests from clients. The program can then generate all the output required to make up the return page (the HTML, script code, text, etc.), and send it back to the client via the Web server. To make the programmer's life easier, and save having to create executable programs, languages such as Perl use an application that accepts text-based script files. The programmer simply writes the script, and the Web server executes it using a Perl interpreter. Microsoft introduced another Web server interface with their Web server, Internet Information Server. This is the Internet Server Application Programming Interface (ISAPI), and differs from the CGI in that it allows compiled code within a dynamic link library (DLL) to be executed directly by the Web server. As with the CGI, the code can access all the information in the client request, and it generates the entire output for the returned page. Most developments in Microsoft's Web arena have been based on the ISAPI interface. One early and short-lived product was dbWeb, a data access technology that provided a range of searching, filtering and formatting capabilities for accessing data stored on the server, and for interacting with the client. A second development was the Internet Database Connector (IDC). This proved a big hit with developers – not only because it was fast and efficient (unlike dbWeb), but also because it was a lot more generic and easier to program. IDC introduced the concept of templates, allowing programmers to easily adapt existing HTML pages to use its features and quickly build new applications around it. IDC uses two text files for each 'page'. The first is a simple script that defines the way that the data should be collected from the server-based database. In essence, it is just a SQL statement plus some configuration information: {this is the query file named getuserlist.idc} Datasource: GlobalExampleData Username: examples Password: secret Template: getuserlist.htx SQLStatement: + SELECT DISTINCT UserName + FROM Person ORDER BY UserName; The server executes this file to obtain the results recordset, then loads a template file: {this is an extract from the template file named getuserlist.htx} ... <TABLE> <TR> <TD>User name:</TD> <TD> <SELECT NAME=selUserName> <%BeginDetail%> <OPTION VALUE="<%UserName%>"><%UserName%> <%EndDetail%> </TD> </TR> </TABLE> ... The template is just an ordinary Web page, including HTML, text and other objects, but with one or more specially delimited placeholders inserted. And the syntax for these placeholders, and the other simple program code constructs that are supported, is eerily like ASP. Of course, it was from this that ASP actually evolved: So, it was in early 1996 that Denali (the codename for ASP) was released as a beta version 0.9 product, and it took the Web-development world by storm. The ability to execute code inline within a Web page was so simple and yet so powerful. With the provision of a series of components that could perform advanced features, most notably ActiveX Data Objects (ADO), it was almost child's play to create all kinds of dynamic pages. The final release version of Active Server Pages 1.0, available as an add-on for IIS 3.0, was soon in use on Windows platforms all over the place. The combination of ASP with ADO enabled developers to easily create and open recordsets from a database. There's no doubt that this was one of the main factors for its rapid acceptance, because now you could create and open recordsets from a database within the script, and manipulate and output any values, in any order, almost any way you wanted. In 1998, Microsoft introduced ASP 2.0 as part of the free Windows NT4 Option Pack. The major difference between this release of ASP and version 1.0 was in the way that external components could be instantiated. With ASP 2.0 and IIS 4.0, it is possible to create an ASP application, and within it run components in their own separate memory space (i.e. out of process). The provision of MicrosoftTransactionServer (MTS) also made it easy to build components that can partake in transactions. Early this year (2000), Windows 2000 arrived. This contains version 5.0 of IIS, and version 3.0 of ASP. Other than some minor additions to ASP, the core difference here is actually more to do with COM+. In Windows 2000, Microsoft combined MTS with the core COM runtime to create COM+. This provides a host of new features that make the use of components easier, as well as giving a much more stable, scalable and efficient execution platform. Other than a few minor changes to the management interface, IIS has not changed a great deal on the surface. However, underneath, it now uses COM+ Component Services to provide a better environment for components to be executed within, including out of process execution as the default and the option to run each component in its own isolated process if required. All this brings us to the present, with ASP+. The underlying structure of ASP+ is very different to that of previous versions, although from the 'outside' (as far as the developer is concerned) it does appear to offer a very similar interface. ASP+ is almost entirely component-based and modularized, and every page, object, and HTML element you use can be a runtime component object. For this to perform efficiently, and provide a scalable solution, the management of these objects is a very necessary prerequisite. The new runtime environment carries out this management automatically, allowing ASP+ to become far more object-oriented in nature. This lets developers build more powerful applications by accessing these component objects in a far more granular and controlled manner. On top of that, the object orientation of ASP+ provides extensibility for the environment as a whole. Developers can add to and extend the environment, both by creating new components or inheriting from the base classes that create them, and by over-riding selected behavior as required. Under the hood, the COM+ runtime manages the instantiation, pooling, and allocation of the objects automatically. So, COM+ provides a framework of operating system services. But that's not the whole story. ASP+ is actually a part of a brand new runtime framework that provides support for all kinds of applications in Windows. The framework is a key part of of Microsoft's Next Generation Web Services or NGWS. When you install this, you get ASP+ as part of the package. The NGWS framework supports all other server-side programming techniques as well, such as a new managed component service, support for building executable applications and Windows Services, access to performance counter APIs and Event Log APIs, etc. The NGWS framework extends the Component Object Model (COM) architecture that we use to create re-usable and interoperable software components by adding new and enhanced services for scalable distributed applications: We'll look at how it does all these things next. The integration of ASP into the operating system differs remarkably from earlier versions of ASP, which were basically just add-ons to the operating system. Up until now, ASP has been implemented through an ISAPI DLL named asp.dll, plus a few new system files and the ASP user components that came as part of the package (such as the Browser Capabilities component). The NGWS framework reflects the information technology industry's changing view of the needs for creating, deploying, and maintaining Web services of all types – ranging from simple client applications to the most complex distributed architectures. The overall concept and strategy is part of the Windows Distributed Internet Applications (DNA) architecture. However, the important part to recognize is that the framework is not just there for ASP+. It acts as a base for all kinds of applications to be built on Windows. The following diagram shows how the runtime framework supports ASP+ Applications: The NGWS framework provides an execution engine to run code, and a family of object oriented classes/components that can be used to build applications. It also acts as an interface between applications and the core operating system. You might ask why we need such a layer, when existing applications can talk to the core operating system and services quite easily. The reason is that it allows applications to use the operating system to best advantage, in a standard way that permits faster and simpler development – something that is increasingly necessary in today's competitive commercial environment. To achieve these aims, the runtime framework implements many of the features that the programmer, or the specific programming language environment, had to provide themselves. This includes things like automatic garbage collection, rich libraries of reusable objects to meet the needs of the most common tasks, and improved security for applications. This last point, of course, is becoming more important with the spread of networked applications – especially those that run over the Internet. However, one of the biggest advantages that the runtime framework provides is a language-neutral execution environment. All code, irrespective of the source language, is compiled automatically into a standard intermediate language (IL) – either on command or when first executed (in the case of ASP+). The runtime framework then creates the final binary code that makes up the application and executes it. The compiled IL code is used for each request until the source code is changed, at which point the cached version is invalidated and discarded. So, whether you use Visual Basic, C#, JScript, Perl or any of the other supported languages, the intermediate code that is created is (or should be) identical. And the caching of the final binary object code improves efficiency and scalability at runtime as well. C# is the new language from Microsoft especially designed for use with the Next Generation Web Services framework and ASP+. It combines the power and efficiency of C++ with the simplicity of Visual Basic and JScript. One thing that this achieves is the ability to call from one language to another, and even inherit from objects created in one language and modify them within another language. For example, you can inherit an object that is written in C# in your VB program and then add methods or properties, or over-ride existing methods and properties. In fact, parts of the framework, and the entire ASP+ object model, are now implemented internally using C# rather than C++. So, the new runtime framework introduces a true multi-language platform for programming any kind of application. As most of our current development is in the area of distributed applications, especially Internet- and Intranet-based applications, many the new features are directly aimed at this type of development. The three sections shown highlighted in the previous diagram (and repeated in the next diagram) are those that implement ASP+ itself, and which we're interested in for this book: Together, these three sections implement the Web Application Infrastructure – the topic that we are concerned with in this book. Together with the new runtime framework, it provides a range of exciting new features: As part of the ASP+ libraries, there is a host of intelligent server-based rich controls for building Web-based user interfaces quickly and simply. They can output HTML 3.2 code for down-level browsers, while taking advantage of the runtime libraries for enhanced interactivity on richer clients such as Internet Explorer 4 and above. These server-based controls can be also be reused to build controls composed of other controls, inheriting implementation and logic from these constituent controls. The NGWS framework provides a new version of ADO, called ADO+, which offers integrated services for accessing data – regardless of the format or location of that data. ADO+ presents an object-oriented view of relational data, giving developers quick and easy access to data derived from distributed sources. ADO+ also improves support for, and to some extent relies on, XML. ADO+ can automatically persist and restore recordsets (or datasets as they are now called) to and from XML. As we'll see, this is particularly useful when passing data around the Web using ASP+ Web Services. Two of the major requirements for any Web-based application are a robust operating platform, and scalability to allow large numbers of multiple concurrent requests to be handled. The NGWS runtime provides these features by allowing automatic error and overload detection to restart and manage the applications and components that are in use at any one time. This prevents errant code or memory leaks from soaking up resources and bringing the server to a halt. There are also new and updated system and infrastructure services, including automatic memory management and garbage collection, automatic persistence and marshaling, and evidence based security. Together these features provide for more scalable and reliable resource allocation and application processing. Despite all the changes to the core operating system and runtimes, great care has been taken to maintain backward compatibility with earlier versions of Windows, COM and ASP. In most cases, existing applications, COM and COM+ components, ASP pages, and other scripts and executables work under the NGWS runtime. Alternatively, you can update them in your own time as your business requirements demand. All ASP+ pages have the .aspx file extension, and this is mapped to the ASP+ runtime framework. This allows pages that have the .asp file extension to run unchanged under the existing ASP runtime. Having seen in outline how ASP+ is now an integral part of the operating system, we need to look at the other aspect. How, and why, is ASP+ different to earlier version of ASP? And just how different is it? Well, if you just want to run existing pages and applications, you probably won't notice the differences much at all. However, once you open up the ASP+ SDK or Help files, you'll see a whole new vista of stuff that doesn't look the least bit familiar. Don't panic! We'll work through the main differences next. We'll start with a look at why Microsoft has decided that we need a new version of ASP, and how it will help you, as a developer, to meet the needs of the future when creating Web sites and applications. We'll follow this with a checklist of the major new features of ASP+, then examine each one in a little more detail. The remainder of the book then covers the new features one by one, explaining how you can use them. In the Introduction to this book, we listed the main motivations that Microsoft had when designing and developing ASP+. After all, considering that ASP has been so successful, why do we need a new version? There are really four main issues to consider: Besides all of this, the rapidly changing nature of distributed applications requires faster development, more componentization and re-usability, easier deployment, and wider platform support. New standards such as the Simple Object Access Protocol (SOAP), and new commercial requirements such as business-to-business (B2B) data interchange, require new techniques to be used to generate output and communicate with other systems. Web applications and Web sites also need to provide a more robust and scalable service, which ASP+ provides through proactive monitoring and automatic restarting of applications when failures, memory leaks, etc. are discovered. So, to attempt to meet all these needs, ASP has been totally revamped from the ground up into a whole new programming environment. While there are few tools available to work with it just yet, Visual Studio 7.0 will be providing full support to make building ASP+ applications easy (both ASP+ Pages and ASP+ Services). The rich, component based, event driven programming model is specifically designed to be 'tools friendly', and this support will be available for all Visual Studio languages – including VB, C++, and C#. And you can be sure that third party tool manufacturers will not be far behind. The biggest challenges facing the Web developer today must be the continued issues of browser compatibility, and the increasing complexity of the pages that they have to create. Trying to build more interactive pages that use the latest features of each browser, whilst still making sure that the pages will work on all the popular browsers, is a nightmare that refuses to go away. And, of course, it will only get worse with the new types of Internet device that are on the way, or here already. In particular, trying to build pages to offer the same user-level capability to cellular phones as to traditional browser clients is just about impossible. The text-only 12-character by 3-line display of many cellular phones does tend to limit creativity and user interaction. One obvious solution is to create output that is targeted at each specific client dynamically – or create multiple versions of the same site, one for each type of client. The second option is not attractive, and most developers would prefer the first one. However, this implies that every hit from every user will require some server-side processing to figure out what output to create. If this is the case, why not automate much of the process? To this end, ASP+ introduces the concept of server controls that encapsulate common tasks and provide a clean programming model. They also help to manage the targeting of all the different types of client. ASP has always provided the opportunity to execute components on the server, and these components can generate sections of the page that is returned to the user. ASP+ extends this concept through server controls. All that's required to turn any HTML element into a server control is the addition of an extra attribute: runat="server". Any HTML element in a page can be marked this way, and ASP+ will then process the element on the server and can generate output that suits this specific client. And, as a by-product, we can do extra tricks – in particular with HTML <form> and the associated form control elements, where we can create the code to manage state during round trips to the server. This makes the programming experience less monotonous and dramatically more productive. While the concept of having HTML elements that execute on the server may at first seem a little strange, as you'll see it adds a whole new layer of functionality to the pages, and makes them easier to write at the same time. What more could a programmer want? One of the most cumbersome tasks when creating interactive Web sites and applications is managing the values passed to the server from HTML form controls, and maintaining the values in these controls between page requests. So one of the core aims of ASP+ is to simplify this programming task. This involves no extra effort on the part of the programmer, and works fine on all browsers that support basic HTML and above. Take a look at the following section of code. This creates a simple form using HTML controls where the user can enter the name of a computer and select an operating system. OK, so this isn't a terribly exciting example in itself, but it illustrates a pretty common scenario used by almost every web application out there today. When the form page is submitted to the server, the values the user selected are extracted from the Request.Form collection and displayed with the Response.Write method. The important parts of the page are highlighted in the code listing: "> <p /> Operating System: <select name="selOpSys" size="1"> <option>Windows 95</option> <option>Windows 98</option> <option>Windows NT4</option> <option>Windows 2000</option> <p /> <input type="submit" value="Submit"> </form> </body> </html> Although this is an ASP page (the file extension is .asp rather than .aspx), it will work just the same under ASP+ if we changed the extension to .aspx. Remember that the two systems can quite freely co-exist on the same machine, and the file extension just determines whether ASP or ASP+ processes it. This screenshot shows what it looks like in Internet Explorer 5. When the user clicks the Submit button to send the values to the server, the page is reloaded showing the selected values. Of course, in a real application, some the values would probably be stored in a database, or be used to perform some application-specific processing – for this example we're just displaying them in the page: One problem is that the page does not maintain its state, in other words the controls return to their default values. The user has to re-enter them to use the form again. You can see this is the next screenshot: To get round this situation, we have to add extra ASP code to the page to insert the values into the controls when the page is reloaded. For the text box, this is just a matter of setting the value attribute with some inline ASP code, using the HTMLEncode method to ensure that any non-legal HTML characters are properly encoded. However, for the <select> list, we have to do some work to figure out which value was selected, and add the selected attribute to that particular <option> element. The changes required are highlighted below: " value="<% = Server.HTMLEncode(Request("txtName")) %>"> <p /> Operating System: <select name="selOpSys" size="1"> <option <% If strOpSys = "Windows 95" Then Response.Write " selected" %> >Windows 95</option> <option <% If strOpSys = "Windows 98" Then Response.Write " selected" %> >Windows 98</option> <option <% If strOpSys = "Windows NT4" Then Response.Write " selected" %> >Windows NT4</option> <option <% If strOpSys = "Windows 2000" Then Response.Write " selected" %> >Windows 2000</option> <p /> <input type="submit" value="Submit"> </form> </body> </html> Now, when the page is reloaded, the controls maintain their state and show the values the user selected: This page, named pageone.asp, is in the Chapter01 directory of the samples available for the book. You can download all the sample files from our Web site at. So, how does ASP+ help us in this commonly met situation? The next listing shows the changes required for taking advantage of ASP+ server controls that automatically preserve their state. We still use the Response.Write method to display the selected values. However, this time some of the elements on the page have the special runat="server" attribute added to them. When ASP+ sees these elements, it processes them on the server and creates the appropriate HTML output for the client: <html> <body> <% If Len(Request.Form("selOpSys")) > 0 Then strOpSys = Request.Form("selOpSys") strName = Request.Form("txtName") Response.Write("You selected '" & strOpSys _ & "' for machine '" & strName & "'.") End If %> "> </form> </body> </html> You can clearly see how much simpler this ASP+ page is than the last example. When loaded into Internet Explorer 5 and the values submitted to the server, the result appears to be just the same: This page, named pageone.aspx, is in the Chapter01 directory of the samples available for the book. You can download all the sample files from our Web site at. How is this achieved? The key is the runat="server" attribute. To get an idea of what's going on, take a look at the source of the page from within the browser. It looks like this: <html> <body> You selected 'Windows 98' for machine 'tizzy'. <FORM name="ctrl0" method="post" action="pageone.aspx" id="ctrl0"> <INPUT type="hidden" name="__VIEWSTATE" value="a0z1741688109__x"> Machine Name: <INPUT type="text" id="txtName" name="txtName" value="tizzy"> <p /> Operating System: <SELECT id="selOpSys" size="1" name="selOpSys"> <OPTION value="Windows 95">Windows 95</OPTION> <OPTION selectedWindows 98</OPTION> <OPTION value="Windows NT4">Windows NT4</OPTION> <OPTION value="Windows 2000">Windows 2000</OPTION> <p /> <input type="submit" value="Submit"> </FORM> </body> </html> We wrote this ASP+ code to create the <form> in the page: <form runat="server"> ... </form> When the page is executed by ASP+, the output to the browser is: <FORM name="ctrl0" method="post" action="pageone.aspx" id="ctrl0"> ... </FORM> You can see that the action and method attributes are automatically created by ASP+ so that the values of the controls in the form will be POSTed back to the same page. ASP+ also adds a unique id and name attribute to the form as we didn't provide one. However, if you do specify these, the values you specify will be used instead. If you include the method="GET" attribute, the form contents are sent to the server as part of the query string instead, as in previous versions of ASP, and the automatic state management will no longer work. Inside the form, we wrote this ASP+ code to create the text box: <input type="text" id="txtName" runat="server"><P></P> The result in the browser is this: <INPUT type="text" id="txtName" name="txtName" value="tizzy"> You can see that ASP+ has automatically added the value attribute with the text value that was in the control when the form was submitted. It has also preserved the id attribute we provided, and added a name attribute with the same value. For the <select> list, we wrote this code: <select id="selOpSys" size="1" runat="server"> <option>Windows 95</option> <option>Windows 98</option> <option>Windows NT4</option> <option>Windows 2000</option> ASP+ obliged by outputting this HTML, which has a selected attribute in the appropriate <option> element: <SELECT name="selOpSys" id="selOpSys" size="1"> <OPTION value="Windows 95">Windows 95</OPTION> <OPTION selectedWindows 98</OPTION> <OPTION value="Windows NT4">Windows NT4</OPTION> <OPTION value="Windows 2000">Windows 2000</OPTION> Again, a unique id attribute has been created, and the <option> elements have matching value attributes added automatically. (If we had provided our own value attributes in the page, however, these would have been preserved.) The other change is that ASP+ has automatically added a HIDDEN-type control to the form: <INPUT type="hidden" name="__VIEWSTATE" value="a0z1741688109__x"> This is how ASP+ can store ambient state changes of a page across multiple requests – i.e. things that don't automatically get sent back and forth between the browser and server between Web requests. For example, if the background color of a server control had been modified it would use the VIEWSTATE hidden field to remember this between requests. The VIEWSTATE field is used whenever you post back to the originating page. In Chapter 2, we discuss this topic in more detail. So, as you can see, there really aren't any 'magic tricks' being played. It's all standard HTML, with no client-side script libraries, and no ActiveX controls or Java applets. An equally important point is that absolutely no state is being stored on the server. Instead, values are simply posted to the server using standard methods. Values are preserved and maintained across requests simply by the server controls modifying the HTML before the pages are sent to the client. To display the values in the page, we used code that is very similar to that we used in the ASP example earlier on: ... If Len(Request.Form("selOpSys")) > 0 Then strOpSys = Request.Form("selOpSys") strName = Request.Form("txtName") Response.Write("You selected '" & strOpSys _ & "' for machine '" & strName & "'.") End If ... However, one of the other great features of ASP+ and server controls is that they are available to the code running on the server that is creating the page output. The ASP+ interpreter insists that each one has a unique id attribute, and therefore all the server controls (i.e. the elements that have the runat="server" attribute) will be available to code against. This means that we no longer have to access the Request collection to get the values that were posted back to the server from our form controls – we can instead refer to them directly using their unique id: ... If Len(selOpSys.value) > 0 Then Response.Write("You selected '" & selOpSys.value _ & "' for machine '" & txtName.value & "'.") End If ... In the ASP page we've just seen, the script was assumed to be VBScript (we didn't specify this, and VBScript is the default unless you change the server settings). In ASP+, there is no support for VBScript. Instead, the default language is Visual Basic ("VB"), which is a superset of VBScript. So, our code is being compiled into IL and executed by the runtime. The compiler and runtime for Visual Basic that is included with ASP+ is the new version 7.0 (good news – you don't need to buy a separate copy!). There are a few implications in this, which we summarize in Appendix B of this book. The most important thing to note straight away is that all method calls in VB7 must have the parameter list enclosed in parentheses (much like JScript and JavaScript). In VBScript and earlier versions of VB, this was not required – and in some cases produced an error. You can see that we've enclosed the parameter to the Response.Write method in parentheses in our example. Secondly, VB7 has no concept of 'default' methods or 'default' properties, so we now must provide the method or property name. You'll probably come across this first when working with ADO recordsets where the syntax must be: fieldvalue = objRecordset.Fields("fieldname").value Of course, if we are going to have HTML elements that execute on the server, why not extend the concept even more? ASP+ changes each page into a server-side object, and exposes more properties, methods and events that can be used within your code to create the content dynamically. Each page becomes a tree of COM+ objects that can be accessed and programmed individually as required. To see how we can take advantage of this to structure our pages more elegantly, take a look at the following code. It shows the ASP+ page we used in our previous example, but with a couple of changes. This version of the page is named pagetwo.aspx: <html> <body> <script language="VB" runat="server"> Sub ShowValues(Sender As Object, Args As EventArgs) divResult.innerText = "You selected '" _ & selOpSys.value & "' for machine '" _ & txtName.value & "'." End Sub </script> <div id="divResult" runat="server"></div> " runat="server" onserverclick="ShowValues"> </form> </body> </html> Firstly, notice that we've replaced the inline ASP+ code with a <script> section that specifies VB as the language, and includes the runat="server" attribute. Inside it, we've written a Visual Basic function named ShowValues. In ASP+, functions and subroutines must be placed inside a server-side <script> element, and not in the usual <%...%> script delimiters – we'll look at this in more detail in the next chapter. We've also added an HTML <div> element to the page, including the runat="server" attribute. So this element will be created on the server, and is therefore available to code running there. When the VB subroutine is executed, it sets the innerText property of this <div> element. Notice also how it gets at the values required (e.g. those submitted by the user). Because the text box and <select> list also run on the server, our code can extract the values directly by accessing the value properties of these controls. When the page is executed and rendered on the client, the <div> element that is created looks like this: <div id="divResult">You selected 'Windows NT4' for machine 'lewis'.</div> By now you should be asking how the VB subroutine actually gets executed. Easy – in the <input> element that creates the Submit button, we added two new attributes: <input type="submit" value="Submit" runat="server" onserverclick="ShowValues"> The runat="server" attribute converts the HTML element into a server-side control that is 'visible' and therefore programmable within ASP+ on the server. The onserverclick="ShowValues" attribute then tells the runtime that it should execute the ShowValues subroutine when the button is clicked. Notice that the server-side event names for HTML controls include the word "server" to differentiate them from the client-side equivalents (i.e. onclick, which causes a client-side event hander to be invoked). The result is a page that works just the same as the previous example, but the ASP+ source now has a much more structured and 'clean' format. It makes the code more readable, and still provides the same result – without any client-side script or other special support from the browser. If you view the source code in the browser, you'll see that it's just the same: This page, named pagetwo.asp, is in the Chapter01 directory of the samples available for the book. You can download all the sample files from our Web site at. You'll see how we can improve the structure even more, by separating out the code altogether, in Chapter 2. And, even better (as with earlier versions of ASP) we can add our own custom components to a page or take advantage of a range of server-side components that are provided with the NGWS framework. Many of these can be tailored to create output specific to the client type, and controlled by the contents of a template within the page. As a by-product of the modularization of ASP+, developers can also access the underlying runtime framework if they need to work at a lower level than the ASP+ page itself. As well as the information made available through the traditional ASP objects such as Form, QueryString, Cookies and ServerVariables, developers can also access the underlying objects that perform the runtime processing. These objects include the entire page context, the HTTPModules that process the requests, and the RequestHTTPHandler objects. It is also possible to access the raw data streams, which are useful for managing file uploads and other similar specific tasks. We look at this whole topic in detail in Chapter 6. When a page or Web service is first activated by the client, ASP+ dynamically compiles the code, caches it, and then reuses this cached code for all subsequent requests until the original page is changed – at which point the compiled code version is invalidated and removed from the cache. You can see this as a delay the first time that an ASP+ page is executed, while the response to subsequent requests is almost instant. Because the compilation is to the intermediate language (rather than the processor-level binary code), any language can be used as long as the compiler outputs code in this intermediate language. In fact, a number of independent vendors are already working on different languages (including Cobol). And, because the intermediate language code is common across languages, each language can inherit from all others and call routines that were originally written in all the other languages. The efficient component management services provided by the runtime also ensure that the compiled code in the page executes much more efficiently than would be possible using the earlier versions of ASP. The major features that ASP+ provides over earlier versions of ASP are: Features such as the intrinsic Request and Response objects (and the Form, QueryString, Cookies, and ServerVariables collections that they implement) are compatible with earlier versions of ASP. However, they have gained a lot of new properties and methods that make it easier to build applications. There is also access to the ObjectContext object for use by any existing ASP components. However, there are some new methods and properties available for the intrinsic ASP objects, as well as other issues that affect how existing pages, components and applications perform under ASP+. See Appendix A for more details. ASP+ Pages are described in detail in Chapters 2, 3 and 4. The four big advantages that ASP+ Pages provide are: ASP+ provides a series of new server controls that can be instantiated within an ASP+ page. From the developer's point of view, the advantage of using these controls is that server-side processing can be carried out on events raised by client-side controls. The server controls provided with ASP+ fall into four broad categories or 'families': All of these controls are designed to produce output that can run on any Web browser (you'll see this demonstrated in several places within the book). There are no client-side ActiveX controls or Java applets required. We'll look at each of these control types in more detail next. In the example we looked at earlier in this chapter, we saw how ASP+ provides a series of Intrinsic Controls that are intelligent. In other words, they can be executed on the server to create output that includes event handling and the maintenance of state (the values the controls display). In Chapters 2, 3, and 4, we look at how we can use these controls in more detail, and explore their various capabilities. However, to overview the aims of the new ASP+ Intrinsic Controls, we can say that they serve three main purposes: The basic intrinsic controls are used by simply inserting the equivalent HTML into the page, just as you would in earlier versions of ASP, but adding the runat="server" attribute. The elements that are implemented as specific objects in the preview version of ASP+ are: <table> <tr> <th> <td> <form> <input> <textarea> <button> <a> <img> As in HTML, the <input> server control depends on the value of the type attribute. The output that the control creates is, of course, different for each value. All other HTML elements in an ASP+ page that are marked with the runat="server" attribute are handled by a single generic HTML server control. It creates output based simply on the element itself and any attributes you provide or set server-side when the page is being created. There is also a set of new ASP+ controls that can be defined within the page, and which are prefixed with the namespace 'asp'. These controls expose properties that correspond to the standard attributes that are available for the equivalent HTML element. As with all server controls, you can set these properties during the server-side Load events of the page, or add them as attributes in the usual way, but using the special property names. When rendered to the client, the properties are converted into the equivalent HTML syntax. For example, to create an instance of a ListBox control, we can use: <asp:ListBox <asp:ListItem>Windows 98</asp:ListItem> <asp:ListItem>Windows NT4</asp:ListItem> <asp:ListItem>Windows 2000</asp:ListItem> </asp:ListBox> At runtime (in the preview version) the ASP+ code above creates the following HTML, and sends it to the client: <SELECT name="ListBox0" size="3"> <OPTION value="Windows 98">Windows 98</OPTION> <OPTION value="Windows NT4">Windows NT4</OPTION> <OPTION value="Windows 2000">Windows 2000</OPTION> A summary of the 'asp'-prefixed intrinsic controls looks like this: ASP+ Intrinsic Control HTML Output Element <asp:Button> <input type="submit"> <asp:LinkButton> <a href="jscript:__doPostBack(...)">...<a> <asp:ImageButton> <input type="image"> Table continued on following page <asp:HyperLink> <a href="...">...</a> <asp:TextBox> <input type="text" value="..."> <asp:CheckBox> <input type="checkbox"> <asp:RadioButton> <input type="radio"> <asp:DropDownList> <select>...</select> <asp:ListBox> <select size="...">...</select> <asp:Image> <img src="..."> <asp:Label> <span>...</span> <asp:Panel> <div>...</div> <asp:Table> <table>...</table> <asp:TableRow> <tr>...</tr> <asp:TableCell> <td>...</td> These controls provide more standardized property sets than the HTML controls, and make it easier to implement tools that can be used for designing and building ASP+ pages and applications. Many day-to-day tasks involve the listing of data in a Web page. In general, this data will be drawn from a data store of some type, perhaps a relational database. The aim of the ASP+ List Controls is to make building these kinds of pages easier. This involves encapsulating the functionality required within a single control or set of controls, saving development time and making the task easier. The server-side ASP+ controls that generate the user interface can also manage tasks such as paging the list, sorting the contents, filtering the list, and selecting individual items. Finally, they can use the new server-side data binding features in ASP+ to automatically populate the lists with data. The three standard controls that are used to create lists are the Repeater, DataList, and DataGrid controls. The Repeater control is the simplest, and is used simply to render the output a repeated number of times. The developer defines templates that are used to apply styling information to the header, item, footer and separator parts of the output that is created by the control. To create a table, for example, the header and footer information is used (as you would expect) for the header and footer of the table (the <thead> and <tfoot> parts in HTML terms). The item template content is applied to every row of the table that is generated from the repeated values – probably the records from a data store of some type. There is also the facility to use an alternatingItem template to apply different styles to alternate rows. The separator information defines the HTML output that will be generated after each row and before the next one. The DataList control differs from the Repeater control in that it provides some intrinsic formatting of the repeated data as well as accepting the same kinds of template values as the Repeater control. The DataList control renders additional HTML (outside of that defined in its templates) to better control the layout and format of the rendered list, providing features such as vertical/horizontal flow and style support. However, unlike the Repeater control, it can also be used to edit the values in the elements on demand, and detect changes made by the user. The richest of the list controls is the DataGrid control. The output is an HTML table, and the developer can define templates that are used to apply styling information to the various parts of the table. As well as being able to edit the values in the table, the user can also sort and filter the contents, and the appearance is similar to that of using a spreadsheet. However, as the output from the control is simply HTML, it will work in all the popular browsers. There are two other more specialized types of list control included with ASP+. These are the RadioButtonList and CheckboxList controls. In effect, they simply render on the client a list of HTML radio button or checkbox elements, with captions applied using span elements. However, you can easily specify the layout, i.e. if the items should appear listed horizontally across the page or vertically down it, or within an HTML table. You can also control the alignment of the text labels, and arrange for the list to automatically post back the selected values. The preview version of ASP+ ships with three Rich Controls that provide specific functions not usually available in plain HTML. Examples of these are the Calendar and AdRotator controls. Also due in later releases are TreeView, ImageGenerator, and other controls. To give you some idea of what these custom controls can do, take a look at the screenshot below: This was generated using this simple code: <form runat="server"> <asp:Calendar </form> One common task when working with HTML forms in a Web application is the validation of values that the user enters. They may have to fall within a prescribed range, be non-empty, or even have the values from several controls cross-referenced to check that the inputs are valid. Traditionally, this has been done using either client-side or server-side scripts, often specially written for each form page. In ASP+, a range of Validation Controls is included. These make it easy to perform validation checks – both client-side and server-side. Five types of validation control are provided. The RequiredFieldValidator control, CompareValidator control, RangeValidatorcontrol, and RegularExpressionValidator control perform various types of checks to ensure that that a control contains a specific value, or matches the value in another control. The CustomValidator control passes the value that a user enters into a control to a specified client-side or server-side function for custom validation. The ValidationSummary control collects all the validation errors and places them in a list within the page. Each one of these controls (except the ValidationSummary control) is linked to one or more HTML controls through the attributes you set for the validation control. They can automatically output the text or character string you specify when the validation fails. You can also use the IsValid method of the Pageobject to see if any of the validation controls detected an error, and provide custom error messages. Some of the controls also detect the browser type, and can output code that performs validation client-side without requiring a round-trip to the server. If this is not possible, code to do the validation during the submission of the values to the server is output instead. We look in detail at how we can use these controls in Chapter 4. You'll see most of the controls we've discussed described in more detail in Chapter 2. The more complex topics associated with them, such as templates, server-side data binding and ADO+ are covered in Chapter 3. Chapter 4 looks at the validation controls, and other advanced techniques in ASP+ pages. We also look at how you can build your own custom server-side controls in Chapter 7. As the march of XML through traditional computing territory continues unabated, more and more of the ways that we are used to doing things are changing. One area is the provision of programmatic services that can be consumed by remote clients, especially where the server and client are running on different operating system platforms. The Simple Object Access Protocol (SOAP) is an XML grammar that allows clients to take advantage of the services provided by a remote server or application, by providing the requests in a standard format. ASP+ includes support for the creation of suitable server or application objects that accept SOAP requests, and return the results in SOAP format. The technology is called Web Services, and allows developers to create these objects quickly and easily within the .NET framework. These objects are also automatically discoverable and described to clients using the XML-based Service Description Language (SDL). You simply create the object using normal server-side code as a publicclass, in any of the supported languages, and include one or more methods that the client can access marked with the special [WebMethod] indicator. These are known as custom attributes. Other methods that are not marked as such are not exposed, and cannot be accessed by clients. No knowledge of COM+ or HTTP itself is required, and the source for these service objects is just text files. Web Services allows custom business service objects to be created quickly and easily by ASP+ developers. The client can access them synchronously or asynchronously, using the HTTP-GET, HTTP-POST or HTTP-SOAP methods that provide extra flexibility. As with other objects in ASP+, the source is compiled, cached and executed under the runtime. We explore the whole concept of Web Services in Chapter 5. In earlier versions of ASP, a file named global.asa could exist in the root directory of a virtual application. This file defined global variables and event handlers for the application. However, all other configuration details for the entire Web site were made in the Internet Services Managersnap‑in to the MMC. The settings made in Internet Services Manager are stored in the IIS metabase, which is a server-based machine-readable file that specifies the whole structure of the Web services for that machine. This has at least one major disadvantage. It requires the administrator or developer to access the metabase using either Internet Services Manager (it can be used remotely to access another machine on the LAN), the equivalent HTML pages, or custom pages that access the metabase through the Active Directory Services Interface (ADSI). In ASP+, all the configuration details for all Web applications are kept in human-readable files named config.web. The default config.web file is in the ProgramFiles\ComPlus\v2000.14.1812\ directory and this specifies the settings that apply to any applications or directories that do not over-ride the defaults. The standard format for the configuration files is XML, and each application inherits the settings in the default config.web file. The config.web file specifies a whole range of settings for the application, including the HTTP Modules and Request Handlers that are to be used to handle each request. This provides a completely extensible and flexible architecture, allowing non-standard HTTP protocol handling to be carried out if required. We examine configuration files and their use in Chapter 6. As in ASP 2.0 and 3.0, it is also possible to use a definition file that specifies the actions to take when an application starts and ends, and when individual user sessions start and end. This file is named global.asax (note the .asax file extension), and is stored in the root directory for each application. The existing ASP event handlers Application_OnStart, Application_OnEnd, Session_OnStart, and Session_OnEnd are supported in global.asax, as well as several new events such as Application_BeginRequest, Security_OnAuthenticate, and others. And, as before, the global.asax file can be used to set the values of global or session-level variables and instantiate objects. We look at the use of global.asax files in Chapter 6. One of the useful features in previous versions of ASP that developers were quick to take advantage of was the provision of global and user-level scope for storing values and object instances. This uses the Application and Session objects in ASP, and these objects are still present in ASP+. Although backwards compatible, however, the new Application and Session objects offer a host of extra features. As in previous versions, each ASP application running on a machine has a single instance of the Application object, which can be accessed by any pages within that application. Any values or object references remain valid as long as the application is 'alive'. However, when the global.asax file is edited, or when the last client Session object is destroyed, the Application object is also destroyed. Using the Application object for storing simple values is useful, and there are the usual Lock and Unlock methods available to prevent users from corrupting values within it through concurrent updates to these values. This can, however, cause blocking to occur while one page waits to access the Application object while it is locked by another page. Note that when a page finishes executing or times out, or when an un-handled error occurs, the Application object is automatically unlocked for that page. Bear in mind the impact of storing large volumes of data in an Application object, as this can absorb resources that are required elsewhere. You also need to ensure that any objects you instantiate at Application-level scope are thread safe and can handle multiple concurrent accesses. Another limitation in the use of the Application object is that it is not maintained across a Web farm where multiple servers handle user requests for the same application, or in a 'Web garden' where the same application runs in multiple processes within a single, multi-processor machine. While the Application object is little changed from earlier versions of ASP, the Session object in ASP+ has undergone some quite dramatic updating. It is still compatible with code from earlier versions of ASP, but has several new features that make it even more useful. The biggest change is that the contents of the Session object can now (optionally) be stored externally from the ASP+ process, in a new object called a Session State Store. It is managed by a new Windows service called the State Server Process, and this persists the content of all users Session objects – even if the ASP+ process they are running under fails. It also removes the content of sessions that have terminated through a time-out or after an error. Alternatively, the Session content can be serialized out into a temporary SQL Server database table, where ASP+ talks directly to SQL Server. This means that it can be reloaded after an application or machine failure, providing a far more robust implementation of Session object storage than previous versions. It's therefore ideal for applications that require a session-level state management feature, for example a shopping cart. This new state storage system also has another direct advantage. When using a Web farm to support a large-scale application or Web site it has always been a problem managing sessions, as they are not available across the machines in the Web farm. The new ASP+ Session State Store can be partitioned across multiple servers, so that each client's state can be maintained irrespective of which server in the Web farm they hit first. At last, ASP+ also allows session state to be maintained for clients that don't support cookies. This was proposed in earlier versions of ASP, but never materialized. Now, by using a special configuration setting, you can force ASP+ to 'munge' the session ID into the URL. This avoids all of the previously encountered problems of losing state information, for example when the URLs in hyperlinks do not use the same character case as the page names. Finally, the State Server Process will also be able to expose information about the contents of the Session State Store, which will be useful for performance monitoring and administrative tasks such as setting maximum limits for each client's state. We examine the way that sessions can be managed in Chapter 6. Error handling and debugging has long been an area where ASP trailed behind other development environments like Visual Basic and C++. In ASP+, there are several new features that help to alleviate this situation. It's now possible to specify individual error pages for each ASP+ page, using the new ErrorPage directive at the start of an ASP+ page: <%@Page ErrorPage="/errorpages/thispage.aspx"%> If a 'Not Found', 'Access Forbidden' response is generated, or an 'Internal Server Error' (caused by an ASP code or object error) occurs while loading, parsing, compiling or processing the page, the custom error page you specify is loaded instead. In this page, you can access the error code, the page URL, and the error message. The custom error page, or other error handler, can alternatively be specified in the config.web file for an application. If no error page is specified then ASP+ will load its own error page, which contains far more detail than before about the error and how to rectify it. Settings in the config.web file also allow you to specify that this information should only be presented to a browser running on the Web server, and in this case remote users will just receive a simple error indication. Alternatively, you can create a procedure in the ASP+ page that captures the HandleError event, and doing so also prevents the default error page from being displayed. Another welcome new feature is that Visual Basic now supports the try...catch...finally error handling technique that has long been the mainstay in languages like C++, and more recently in JScript. More details of this can be found in Appendix B. ASP+ also improves debugging techniques by including a new Debugger tool. This is the same debugger that is supplied with Visual Studio, allowing local and remote debugging. Finally, ASP+ now includes comprehensive tracing facilities, so you no longer need to fill your pages with Response.Write statements to figure out what's going on inside the code. Instead, by turning on tracing at the top of the page with the new Trace directive, you can write information to the Trace object and have it rendered automatically as an HTML table at the end of the page. Tracing can also be enabled for ASP+ applications by adding instructions to the config.web file. The trace information automatically includes statistics that show the response time and other useful internal parameters: On top of this, a Web-based viewer is provided that allows you to examine the contents of theTraceContext object's log file. We look at the whole topic of error handling, debugging and tracing in Chapter 4. To finish off this brief tour of the new features in ASP+, we'll look at some other topics that fall under the 'general' heading. These include security management, sending e-mail from ASP+ pages, and server-side caching. In Chapters 4 and 6, you'll see how ASP+ implements several new ways to manage security in your pages and applications. As in ASP 3.0, the Basic, Digest and NTLM (Windows NT) authentication methods can be used. These are implemented in ASP+, using the services provided by IIS in earlier versions of ASP. There is also a new authentication technique called Passport Authentication, which uses the new Managed Passport Profile API. It's also possible to assign users to roles, and then check that each user has the relevant permission to access resources using the IsCallerinRolemethod. An alternative method is to use custom form-based authentication. This technique uses tokens stored in cookies to track users, and allows custom login pages to be used instead of the client seeing the standard Windows Login dialog. This provides a similar user experience to that on amazon.com and yahoo.com. Without this feature, you need to write an ISAPI Filter to do this – with ASP+ it becomes trivially simple. ASP+ uses server-side caching to improve performance in a range of ways. As well as caching the intermediate code for ASP pages and various other objects, ASP has an output cache that allows the entire content of a page to be cached and then reused for other clients (if it is suitable). There is also access to a custom server-side cache, which can be used to hold objects, values, or other content that is required within the application. Your pages and applications can use this cache to improve performance by storing items that are regularly used or which will be required again. The cache is implemented in memory only (it is not durable in the case of a machine failure), and scavenging and management is performed automatically by the operating system. Having seen what ASP+ is all about, and some details of the technologies that support it behind the scenes, it's time to get your hands dirty and build some applications. You can download the sample files for this book to run on your own server, and modify and extend them yourself. But first, if you haven't already done so, you must install ASP+. The latest version of ASP+ can be downloaded from the Microsoft Web site. At the time of writing, the exact location of the download was unknown, but you can reach it via our support website at. It is also available as a CD for a minimal cost, and is part of Visual Studio 7. As for tools, at the time of writing we are using the usual ASP developer's friend, Windows NotePad. Of course, you can continue to use Visual InterDev or any other development tool you wish that supports ASP – it just won't be much help with the new object syntax and server-side controls in ASP+. But as long as it doesn't mangle any code that it does not recognize, it will be fine until better tools become available. And, if like us you're a confirmed 'simple text editor' ASP developer, you might like to try one of the alternatives to Windows NotePad that offers extra features. Our current favorite is TextPad (). Installing ASP+ is just a matter of running the executable setup file. However, you should ensure that you have installed Internet Explorer version 5.5 first. If not, download it or install it directly from. Make sure that you close all other applications before installing IE 5.5, as it updates many of the Windows 2000 operating system files. Once installation is complete, you are ready to run ASP+. No other configuration is required at the moment, as the default configuration will do nicely for our first experimental efforts. In ASP 2.0 and 3.0, it's necessary to take some definite actions to create an ASP application, especially if you want to run any components that the application uses in a separate process. The good news is that, with ASP+, none of this is actually required. And you don't have to register any ASP+ components either. As we saw earlier, a file named config.web controls the configuration of an ASP+ application. It is stored in the root folder of that application. However, there is a default config.web file (automatically installed in your ProgramFiles\COM20SDK\ folder when you install the runtime) that is used for all ASP+ applications. So, all you have to do to get started is create a subdirectory under your InetPub\WWWRoot folder and place your ASP+ pages there. Of course, you can still create a folder outside the WWWRoot directory, and set up a virtual directory to point to it in the Internet Services Manager if required (as in previous versions of ASP). There is no need to set any of the configuration options in the ApplicationSettings section of the Properties dialog for this application, or in the Configuration dialog – the default settings will work fine: Later, you can add a config.web file and a global.asax file to the application's root folder if required to specify the configuration settings and application-level event handlers. Once you've installed the ASP+ runtime framework (and Internet Explorer 5.5 for the preview version of ASP+), you can try it out. An easy way to confirm that it's working is to run one of the sample files we provide. The simple example page named pageone.aspx that we looked at earlier is included in the Chapter01 folder of the samples for this book (available from). Simply copy it to the InetPub\WWWRoot directory on your server and open it from a browser using the URL or. You should get this: We've used Netscape Navigator 6 and Opera 4 here to prove that the page doesn't depend on the unique capabilities of Internet Explorer. If the page doesn't work, check out the 'read me' text file that comes with ASP+ for late-breaking information. Alternatively, have a look at the SDK documentation provided with ASP+, or available at the Microsoft Web site, to see a full description and the remedy for any error message that you get. Once you are up and running, the next step is to take a look at the Quick Start tutorials. There are examples of all kinds of ASP+ pages, Web services, and applications that you can try out and view the source code. Open the samples from or: Obviously, the preview version of ASP+ and the runtime framework that we are using is not absolutely complete. However, it is classed as being 'feature complete', which means that only minor changes and additions are expected between now and the final release. In this last section, we'll examine some of the things that you can expect to see in the final release that are not available, or that aren't yet working properly. The final version of the NGWS framework and ASP+ is aimed at all of the current and recent Windows platforms, including Windows 2000, Windows NT4, Windows 95 and Windows 98. The preview release, however, is only designed for use on Windows 2000 Server and Windows 2000 Professional. The versions for Windows 95 and Windows 98 will be limited-functionality 'personal' versions, but will allow these operating systems to provide a local source for the execution of ASP+ pages. This will be useful for building applications designed for running locally. At the moment, the output generated by the server-side ASP+ controls is basic HTML 3.2, and is not XHTML compliant. Good coding practice suggests that all Web pages should be compliant with the new XHTML recommendations from the World Wide Web Consortium (W3C), so as to allow them to be manipulated if required by an XML parser or other application that expects content to be well-formed in XML terms. A complete specification of XHTML version 1.0 can be obtained from the W3C Web site at, and Microsoft will attempt to generate XHTML-compliant HTML code from server-side components in the final release of ASP+. However, as some popular browsers can behave oddly when confronted with XHTML, the final level of support is difficult to judge at the moment. Most of the intelligent server-side controls supplied in the preview version of ASP+ only output standard HTML 3.2. However, some (such as the validation controls we look at in Chapter 4) do detect Internet Explorer 4 and above, and generate output that takes advantage of the DHTML capabilities of this browser. This provides better performance and a better user experience, as it dramatically reduces the need for round-trips to the server each time the user changes the selected data in the control. In the later beta and release versions of ASP+, there will be more controls of this type. There will also be controls aimed at creating output in different formats entirely, for example Wireless Markup Language (WML). This might be a separate set of controls in some cases; however, due to the extreme incompatibilities between the user interfaces and client capabilities for these types of Internet device. Finally, the release version of ASP+ will include administration tools allowing you to configure and maintain applications more easily. You can expect to see tools to manage the config.web configuration files and global.asax application files. There should also be graphical interfaces for viewing application performance, and examining detailed trace information while debugging complete applications. In this chapter, we've attempted to provide a complete overview of what is new and what has changed in ASP+, compared to earlier versions of ASP. ASP+ is the new generation of Microsoft's successful Active Server Pages technology, and represents a real advance in ease of use and power. ASP+ is designed to remove many of the existing limitations of ASP, such as the dependence on script languages, poor support for object-oriented design and programming techniques, and the need to continuously re-invent techniques for each page or application you build. Instead, ASP+ combines the ability to build more complex and powerful applications, with a reduced requirement for the developer to write repetitive code. For example, the process of maintaining values in HTML form controls and posting these values back to the server ('round tripping') requires quite a lot of code to maintain the state within the page. ASP+ does all this work for you automatically. At the same time, the world out there is changing. The proportion of users that will access your site through an 'Internet device' such as a mobile cellular phone, personal digital assistant (PDA), TV set-top box, games console, or other device will soon be greater that the number using a PC and a traditional Web browser. ASP+ provides solutions that help to reduce the work required for coping with these disparate types of client. The rapidly changing nature of distributed applications requires faster development, more componentization and re-usability, and wider general platform support. New standards such as the Simple Object Access Protocol (SOAP) and new commercial requirements such as business-to-business (B2B) data interchange require new techniques to be used to generate output and communicate with other systems. To meet all these requirements, ASP has been totally revamped from the ground up into a whole new programming environment that includes: In the remainder of this book, we'll examine all thee topics in more detail, and show you how you can use ASP+ to build powerful and interactive Web-based distributed applications more quickly and efficiently than ever before. In the remainder of this book, we'll examine all thee topics in more detail, and show you how you can use ASP+ to build powerful and interactive Web-based distributed applications more quickly and efficiently than ever.
http://www.codeproject.com/Articles/843/A-Preview-of-Active-Server-Pages-Chapter-Introdu?msg=556322
CC-MAIN-2015-14
refinedweb
11,924
51.48
Scala/Import Importation in Scala is a mechanism that enables more direct reference of different entities such as packages, classes, objects, instances, fields and methods. The concept of importing the code is more flexible in scala than Java or C++. - The import statements can be anywhere. It could be at the start of class, with in a class or object or with in method or block. - Members can be renamed or hidden while importing. - Packages, Classes or objects can be imported into the current scope. Contents Import[edit] Any importation is defined using the keyword "import". A simple example of importing an object contained in a package: package p1 { class A } object B { val x = new p1.A import p1.A val y = new A } In lines 1-3, a package named "p1" containing a class named "A" is declared. In lines 4-8, an object named "B" is defined. At line 5, a value named "x" is assigned a new instance of class "A" using the full identifier "p1.A" for class "A". At line 6, the keyword "import" is used, followed by the identifier "p1.A". The "import" declaration makes it possible for all following entities in the scope to refer directly to the last part of the identifier in the declaration, here the class "A". In the seventh line, the class "A" is referred to directly without having to use its full identifier. Entities that can be imported includes members such as fields and methods of objects: import math.Pi import math.round println("Pi is: " + Pi) //Prints "Pi is: 3.141592653589793". println("Pi rounded is: " + round(Pi)) //Prints "Pi rounded is: 3". In the first line, the field named "Pi" of object "math" is imported, making it available to entities following the import declaration. In the second line, the method named "round" of object "math" is likewise imported. In the fourth and fifth line, "Pi" and "round" is referred to directly without the normal required identifiers "math.Pi" and "math.round". The members of instances can also be imported: class Cube(val a:Double) def volume(cube:Cube) = { import cube.a a*a*a } val cube1 = new Cube(4) println("The volume of cube1 is: " + volume(cube1)) //Prints "The volume of cube1 is: 64.0". In the first line a class named "Cube" with a value named "a" indicating side-length is declared. In lines 2-5, a method calculating a cube's volume is defined. At line 3, the field "a" of the argument named "cube" is imported, and in line 4 "a" is referred to directly. A new cube is instantiated in line 6, and its volume is printed in line 7. Packages can also be imported: package p1.p2 { class A } import p1.p2 val a = new p2.A In the first 3 lines, two nested packages named "p1" and "p2" are declared, with "p2" containing a class named "A". At the fifth line, "p2" is imported. At the seventh line, "p2" is referred to directly without having to qualify it with "p1.p2". You can import multiple classes from a package in the following way. import math.{Pi,round} math.Pi and math.round classes are imported from the math package in the single import statement. Wildcard[edit] The _ character is used as the wild character in Scala. It is similar to the * character in Java. In the below example everything is imported from the math package. import math._ println("Pi is: " + Pi) //Prints "Pi is: 3.141592653589793". println("Pi rounded is: " + round(Pi)) //Prints "Pi rounded is: 3". Import selector clause[edit] Renaming[edit] To avoid namespace collisions you may have to rename the members while importing into the scope. In the below example Pi is renamed as myPi and round as myRound while importing from the math package so that it doesn't collide with the local Pi and round members of the object. import math.{Pi => myPi,round => myRound} object myMaths extends App { val Pi = 3.1 val round = TRUE println("Pi is: " + myPi) //Prints "Pi is: 3.141592653589793". println("Pi rounded is: " + myRound(myPi)) //Prints "Pi rounded is: 3". } Hiding[edit] Scala provides an option to hide one or more classes while importing other members from the same package. import math.{Pi => _, _} object myMaths extends App { println("Pi is: " + Pi) // Error, Pi is hidden } In the above example Pi is hidden and every thing else from the math package is imported into the scope. Accessing Pi in the example would give as an error as the Pi is hidden while importing. Limiting the scope[edit] You can use an import statment anywhere and this allows to control the scope of the imported members. You can use the import statement at the top of the package. package importTesting import math.Pi class test1 { def printPi { println("Pi is: " + Pi) //Successful } } class test2 { def printMyPi { println("Pi is :" + Pi) //Successful } } In the above example math.pi is imported at the top of the package and scope of the math.pi is throughout the package. So it can be used in all the classes with in the package. You can use import statement with in the class. package importTesting class test1 { import math.Pi def printPi { println("Pi is: " + Pi) //Successfull } } class test2 { def printMyPi { println("Pi is :" + Pi) //Fail as the scope of Pi is limited to class test1 } } In the above example math.pi is imported within the class test1 and the scope of the Pi is limited within the class test1. and it can not be accessed from the class test2. You can use the import statement within the method of class and limit the scope to that member. package importTesting class test1 { def printPi { import math.Pi println("Pi is: " + Pi) //Successfull } def printMyPi { println("Pi is :" + Pi) //Fail as the scope of Pi is limited to the method } }
https://en.wikibooks.org/wiki/Scala/Import
CC-MAIN-2017-09
refinedweb
986
66.03
timer_delete - delete a per-process timer (REALTIME) #include <time.h> int timer_delete(timer_t timerid); The timer_delete() function deletes the specified timer, timerid, previously created by the timer_create() function. If the timer is armed when timer_delete() is called, the behaviour will be as if the timer is automatically disarmed before removal. The disposition of pending signals for the deleted timer is unspecified. If successful, the function returns a value of zero. Otherwise, the function returns a value of -1 and sets errno to indicate the error. The timer_delete() function will fail if: - [EINVAL] - The timer ID specified by timerid is not a valid timer ID. - [ENOSYS] - The function timer_delete() is not supported by this implementation. None. None. None. timer_create(), <time.h>. Derived from the POSIX Realtime Extension (1003.1b-1993/1003.1i-1995)
http://pubs.opengroup.org/onlinepubs/7908799/xsh/timer_delete.html
CC-MAIN-2018-39
refinedweb
132
66.03
Class::Value - Implements the Value Object Design Pattern version 1.100840 Class::Value::Boolean->new("Y"); # ok Class::Value::Boolean->new("hoge"); # fails This class, and other classes in its namespace, implement the Value Object Design Pattern. A value object encapsulates its value and adds semantic information. For example, an IPv4 address is not just a string of arbitrary characters. You can detect whether a given string is a well-formed IPv4 address and perform certain operations on it. Value objects provide a consistent interface to do this for all kinds of semantically-enhanced values. This is the method to use when getting or setting values. It includes the checks for well-formedness and validity. The values of $SkipChecks and $SkipNormalizations are respected. When skipping checks, we still try to normalize the value, unless told not to by $SkipNormalizations. If we can't normalize it, that is, normalize_value() returns undef, we use the value we were given. We don't try to normalize an undefined value lest some overridden normalize_value() method checks via assert_defined(). Skipping even normalization is useful if you want to purposefully set an denormalized value and later check whether run_checks() properly normalizes it. If you absolutely need to set an value that will not be validated, use set_value() for that. The value stored by set_value() could be anything - a scalar, array, hash, coderef, another object etc. Specific value objects could also accept various input and decode it into the underlying components. Consider, for example, a date value class that internally stores year, month and day components, for example using accessors generated by Class::Accessor::Complex. The public interface for the value object would still be value(); outside code never needs to know what's happening behind the scenes. However, outside code could call year(), month() and day() on the value object and get at the components. The date value class would override set_value() to parse the input and split it into the components. It would also override is_well_formed_value() and/or is_valid_value() to do some checking. And it would override get_value to return the components joined up into a date string. Thus, stringify() would continue to work as expected. This is called by the constructor before setting attributes with the constructor's arguments. If the number of arguments is an odd value, it prepends the word value. This allows you to write: My::Class::Value->new('my_value'); instead of having to write My::Class::Value->new(value => 'my_value'); This method is used as the overload handler for the + operation. It can be overridden in subclasses. In this base class it throws an Class::Value::Exception::UnsupportedOperation exception. Like add() but affects the atan2 operation. Like add() but affects the & operation. Like add() but affects the ~ operation. Like add() but affects the | operation. Like add() but affects the << operation. Like add() but affects the >> operation. Like add() but affects the ^ operation. Takes the argument value, if given, or the object's currently set value and checks whether it is well-formed and valid. As support for Data::Comparable, this method stringifies the value object. If no value is set, it returns the empty string. Like add() but affects the cos operation. Like add() but affects the / operation. Like add() but affects the exp operation. Returns the value object's currently stored value. Like add() but affects the int operation. Returns whether the currently stored value is defined. If there is no value set on the value object, it will return undef. Takes the argument value, if given, or the object's currently set value and checks whether it is valid. Takes a normalized value as an argument and checks whether it is well-formed. Takes a value as an argument, normalizes it and checks whether the normalized value is valid. If normalization fails, that is, if it returns undef, then this method will return 0. If the argument value - before normalization - is not defined, this method returns 1, because when no value has been set on the value object yet we don't want it to report an invalid value. If you need a different behaviour, subclass and override this method. Takes the argument value, if given, or the object's currently set value and checks whether it is well-formed. Takes an argument value and checks whether it is well-formed. In this base class, this method always returns 1. Like add() but affects the <> operation. Like add() but affects the log operation. Like add() but affects the % operation. Like add() but affects the * operation. Takes an argument value and tries to normalize it. If the normalized value is different from the argument value, it sends out a notification by calling send_notify_value_normalized() - this will most likely be just informational. If normalization failed because normalize_value() returned undef, a different notification is being sent using send_notify_value_invalid(). Takes an argument value and normalizes it. In this base class, the value is returned as is, but in subclasses you could customize this behaviour. For example, in a date-related value object you might accept various date input formats but normalize it to one specific format. Also see Class::Value::Boolean and its superclass Class::Value::Enum as an example. This method should just normalize the value and return undef if it can't do so. It should not send any notifications; they are handled by normalize(). Like add() but affects the <=> operation. Like add() but affects the ** operation. Takes the argument value, if given, or the object's currently set value and checks whether it is valid and well-formed. The value object's exception container is cleared before those checks so notification delegates can store exceptions in this container. If only one exception has been recorded and $ThrowSingleException is true then this individual exception will be thrown. Otherwise the whole exception container will be thrown. Like run_checks(), except that it takes as an additional first argument an exception container object. The exceptions accumulated during the checks are stored in this container. The container will not be thrown. This is useful if you have various value objects and you want to accumulate all exceptions in one big exception container. The value object's own exception container will be modified during the checks and cleared afterwards. Calls notify_value_invalid() on the notification delegate. Calls notify_value_normalized() on the notification delegate. Calls notify_value_not_wellformed() on the notification delegate. Directly sets the argument value on the value object without any checks. Like add() but affects the sin operation. You can tell the value object to omit all checks that would be run during value(). If this method is given an argument value, it will set the package-global $SkipChecks. If no argument is given, it will return the current value of $SkipChecks. Note that this is not a per-object setting. Temporarily skipping checks is useful if you want to set a series of value objects to values that might be invalid, for example when reading them from a data source such as a database or a configuration file, and later call run_checks_with_exception_container() on all of them. You can use Class::Value->skip_checks(1); to temporarily skip checks, but you have to remember to call Class::Value->skip_checks(0); afterwards. An alternative way would be to bypass this method: { local $Class::Value::SkipChecks = 1; ... } Like skip_checks(), but it affects the normalizations that would be done during value(). Like add() but affects the sqrt operation. Like add() but affects the cmp operation. Like add() but affects the "" operation. Like add() but affects the - operation. Like skip_checks(), but affects $ThrowSingleException..
http://search.cpan.org/~marcel/Class-Value-1.100840/lib/Class/Value.pm
CC-MAIN-2014-42
refinedweb
1,260
50.33
Creating a Generic Lookup Service In a previous post I talked about how we can manage lookup tables with Entity Framework Code First. In that post I suggested using the primary key directly to check for a specific lookup, rather than introducing an arbitrary column that will serve as an identifier. In this post I will talk about how we can make a generic lookup service to simplify how we show lookup values in, say, dropdown lists. The Background Lookup entities often appear as items in a select list. What I often see in code is that there is one service created for each lookup class. Some examples are: ProvinceService AccountTypeService OrderStatusService Then, each of those services would have a method that gets all the entries for purposes of populating a dropdown list. They all tend to have the same logic and structure, so what we can do is create a single service that would return what we need to populate dropdown lists. Fortunately, it's quite straightforward to do this. The first step is to introduce an interface. Introducing a Common Interface So first we need to have all of our lookup entities ( Province, AccountType, OrderStatus, etc.) share a common interface. That way, we will be able to deal with them in a consistent manner. We can start with an interface like this: public interface ILookup { int Id { get; set; } string Text { get; set; } } The interface is very simple, containing only two properties. Id is the value that would appear in the dropdown and would be the entity's primary key. Text is the text that would eventually show in the dropdown list. We will have all of our lookup entities implement this interface. For example: public class Province : ILookup { public int Id { get; set; } public string Text { get; set; } // Possibly other properties } The next step is to create a service that would be able to access the lookup list entities. Creating the Service So we can introduce a LookupService class that knows how to get a list of any kind of lookup entity. The key is a generic method that works with our DbContext class. It would look something like this: public IEnumerable<ILookup> GetAllLookupItems<T>() where T : class, ILookup { using (var db = new ApplicationContext()) { return db.Set<T>().AsNoTracking().ToList(); } } This short method is all we need. The important pieces here are the generic constraints. The class constraint enables us to use the Set<T> method on our context class. The ILookup constraint enables us to return any type of lookup entity list, as long as the entity implements the ILookup interface. We also used the AsNoTracking method to improve performance as well as the ToList method at the end so that Entity Framework executes the query immediately. We can use the LookupService like this: var lookupService = new LookupService(); var orderStatuses = lookupService.GetAllLookupItems<OrderStatus>(); // orderStatuses would be an enumerable of ILookup items // That is, an enumerable of items containing Id and Text properties var accountTypes = lookupService.GetAllLookupItems<AccountType>(); // accountTypes would also be an enumerable of ILookup items From here, we can do anything we want with our orderStatuses. If we are using the MVC framework's HTML helpers, we can create a SelectList out of it. Or, we can return it directly as a JSON result. The key point is that we have created a centralized service that is responsible for querying lookup items. If ever we need to introduce a new lookup entity, all we need to do is have it implement the ILookup interface. We would not need to create a separate service for it anymore (at least not for the purposes of creating a dropdown list for it). By harnessing the power of interfaces and generics, we have created a mini-framework for lookups that saves us a lot of time. Conclusion In this post we talked about how we can create a generic service that can retrieve lookup entity items that would work across all types of lookup entities. This eliminates the need to create a separate service for each lookup entity.
https://www.ojdevelops.com/2016/04/creating-generic-lookup-service.html
CC-MAIN-2021-10
refinedweb
679
61.26
Member 8 Points Jun 07, 2010 04:53 PM|Gentletouch|LINK I am trying to convert an existing ASP.NET website to use dynamic data. I have created a generic dynamic data site using my existing database file which works fine. I have imported the Global.aspx, site.master, site.css, default.aspx and dynamicdata folders into my existing site, I have set up the global page to use my Linq to SQL datacontext and set global to use scaffold but now encounter an odd error. When I run the default page I get the list of my DB tables as expected and am able to click on them to display the List.aspx table. At this point if I click on details or edit I get the following error: ASPNET: Make sure that the class defined in this code file matches the 'inherits' attribute, and that it extends the correct base class (e.g. Page or UserControl). I have compared all the files in my functioning basic dynamic data site with my modified site and they all seem the same! Can anyone tell me what this means? Regards All-Star 17915 Points MVP Jun 08, 2010 12:04 PM|sjnaughton|LINK Hi Gentletouch, it sounds like a namespace issue to me, I assume this is a Web Application Project not a file based Website? Dynamic Data Member 8 Points Jun 08, 2010 12:53 PM|Gentletouch|LINK Yes you are right. I added: Imports System.ComponentModel.DataAnnotations Imports System.Web.DynamicData Imports System.Web Added to all the code behind files for all the items in the filters and field templates folders under dynamic data. Not sure why I have to do that or whether there is an easier way of adding these namespaces just the once. It is a file based site in development Anyway it now works perfectly. All-Star 17915 Points MVP Jun 08, 2010 01:04 PM|sjnaughton|LINK Excellent [:)] Dynamic Data 3 replies Last post Jun 08, 2010 01:04 PM by sjnaughton
https://forums.asp.net/t/1566454.aspx?Enabling+dymanic+data+in+an+existing+ASP+Net+site+
CC-MAIN-2019-18
refinedweb
339
63.29
Sentiment the sentiment of a piece of writing. Why would you want to do that? There are a lot of uses for sentiment analysis, such as understanding how stock traders feel about a particular company by using social media data or aggregating reviews, which you’ll get to do by the end of this tutorial. In this tutorial, you’ll learn: - How to use natural language processing (NLP) techniques - How to use machine learning to determine the sentiment of text - How to use spaCy to build an NLP pipeline that feeds into a sentiment analysis classifier This tutorial is ideal for beginning machine learning practitioners who want a project-focused guide to building sentiment analysis pipelines with spaCy. You should be familiar with basic machine learning techniques like binary classification as well as the concepts behind them, such as training loops, data batches, and weights and biases. If you’re unfamiliar with machine learning, then you can kickstart your journey by learning about logistic regression. When you’re ready, you can follow along with the examples in this tutorial by downloading the source code from the link below: Get the Source Code: Click here to get the source code you’ll use to learn about sentiment analysis with natural language processing in this tutorial. Using Natural Language Processing to Preprocess and Clean Text Data Any sentiment analysis workflow begins with loading data. But what do you do once the data’s been loaded? You need to process it through a natural language processing pipeline before you can do anything interesting with it. The necessary steps include (but aren’t limited to) the following: - Tokenizing sentences to break text down into sentences, words, or other units - Removing stop words like “if,” “but,” “or,” and so on - Normalizing words by condensing all forms of a word into a single form - Vectorizing text by turning the text into a numerical representation for consumption by your classifier All these steps serve to reduce the noise inherent in any human-readable text and improve the accuracy of your classifier’s results. There are lots of great tools to help with this, such as the Natural Language Toolkit, TextBlob, and spaCy. For this tutorial, you’ll use spaCy. Note: spaCy is a very powerful tool with many features. For a deep dive into many of these features, check out Natural Language Processing With spaCy. Before you go further, make sure you have spaCy and its English model installed: $ pip install spacy $ python -m spacy download en_core_web_sm The first command installs spaCy, and the second uses spaCy to download its English language model. spaCy supports a number of different languages, which are listed on the spaCy website. Next, you’ll learn how to use spaCy to help with the preprocessing steps you learned about earlier, starting with tokenization. Tokenizing Tokenization is the process of breaking down chunks of text into smaller pieces. spaCy comes with a default processing pipeline that begins with tokenization, making this process a snap. In spaCy, you can do either sentence tokenization or word tokenization: - Word tokenization breaks text down into individual words. - Sentence tokenization breaks text down into individual sentences. In this tutorial, you’ll use word tokenization to separate the text into individual words. First, you’ll load the text into spaCy, which does the work of tokenization for you: >>> import spacy >>>>> nlp = spacy.load("en_core_web_sm") >>> doc = nlp(text) >>> token_list = [token for token in doc] >>> token_list [ ,, ., ] In this code, you set up some example text to tokenize, load spaCy’s English model, and then tokenize the text by passing it into the nlp constructor. This model includes a default processing pipeline that you can customize, as you’ll see later in the project section. After that, you generate a list of tokens and print it. As you may have noticed, “word tokenization” is a slightly misleading term, as captured tokens include punctuation and other nonword strings. Tokens are an important container type in spaCy and have a very rich set of features. In the next section, you’ll learn how to use one of those features to filter out stop words. Removing Stop Words Stop words are words that may be important in human communication but are of little value for machines. spaCy comes with a default list of stop words that you can customize. For now, you’ll see how you can use token attributes to remove stop words: >>> filtered_tokens = [token for token in doc if not token.is_stop] >>> filtered_tokens [ , Dave, watched, forest, burned, hill, ,, , miles, house, ., car, , hastily, packed, Marta, inside, trying, round, , pets, ., ", ?, ", wondered, , continued, wait, Marta, appear, pets, ., ] In one line of Python code, you filter out stop words from the tokenized text using the .is_stop token attribute. What differences do you notice between this output and the output you got after tokenizing the text? With the stop words removed, the token list is much shorter, and there’s less context to help you understand the tokens. Normalizing Words Normalization is a little more complex than tokenization. It entails condensing all forms of a word into a single representation of that word. For instance, “watched,” “watching,” and “watches” can all be normalized into “watch.” There are two major normalization methods: - Stemming - Lemmatization With stemming, a word is cut off at its stem, the smallest unit of that word from which you can create the descendant words. You just saw an example of this above with “watch.” Stemming simply truncates the string using common endings, so it will miss the relationship between “feel” and “felt,” for example. Lemmatization seeks to address this issue. This process uses a data structure that relates all forms of a word back to its simplest form, or lemma. Because lemmatization is generally more powerful than stemming, it’s the only normalization strategy offered by spaCy. Luckily, you don’t need any additional code to do this. It happens automatically—along with a number of other activities, such as part of speech tagging and named entity recognition—when you call nlp(). You can inspect the lemma for each token by taking advantage of the .lemma_ attribute: >>> lemmas = [ ... f"Token: {token}, lemma: {token.lemma_}" ... for token in filtered_tokens ... ] >>> lemmas ['Token: \n, lemma: \n', 'Token: Dave, lemma: Dave', 'Token: watched, lemma: watch', 'Token: forest, lemma: forest', # ... ] All you did here was generate a readable list of tokens and lemmas by iterating through the filtered list of tokens, taking advantage of the .lemma_ attribute to inspect the lemmas. This example shows only the first few tokens and lemmas. Your output will be much longer. Note: Notice the underscore on the .lemma_ attribute. That’s not a typo. It’s a convention in spaCy that gets the human-readable version of the attribute. The next step is to represent each token in way that a machine can understand. This is called vectorization. Vectorizing Text Vectorization is a process that transforms a token into a vector, or a numeric array that, in the context of NLP, is unique to and represents various features of a token. Vectors are used under the hood to find word similarities, classify text, and perform other NLP operations. This particular representation is a dense array, one in which there are defined values for every space in the array. This is in opposition to earlier methods that used sparse arrays, in which most spaces are empty. Like the other steps, vectorization is taken care of automatically with the nlp() call. Since you already have a list of token objects, you can get the vector representation of one of the tokens like so: >>> filtered_tokens[1].vector array([ 1.8371646 , 1.4529226 , -1.6147211 , 0.678362 , -0.6594443 , 1.6417935 , 0.5796405 , 2.3021278 , -0.13260496, 0.5750932 , 1.5654886 , -0.6938864 , -0.59607106, -1.5377437 , 1.9425622 , -2.4552505 , 1.2321601 , 1.0434952 , -1.5102385 , -0.5787632 , 0.12055647, 3.6501784 , 2.6160972 , -0.5710199 , -1.5221789 , 0.00629176, 0.22760668, -1.922073 , -1.6252862 , -4.226225 , -3.495663 , -3.312053 , 0.81387717, -0.00677544, -0.11603224, 1.4620426 , 3.0751472 , 0.35958546, -0.22527039, -2.743926 , 1.269633 , 4.606786 , 0.34034157, -2.1272311 , 1.2619178 , -4.209798 , 5.452852 , 1.6940253 , -2.5972986 , 0.95049495, -1.910578 , -2.374927 , -1.4227567 , -2.2528825 , -1.799806 , 1.607501 , 2.9914255 , 2.8065152 , -1.2510269 , -0.54964066, -0.49980402, -1.3882618 , -0.470479 , -2.9670253 , 1.7884955 , 4.5282774 , -1.2602427 , -0.14885521, 1.0419178 , -0.08892632, -1.138275 , 2.242618 , 1.5077229 , -1.5030195 , 2.528098 , -1.6761329 , 0.16694719, 2.123961 , 0.02546412, 0.38754445, 0.8911977 , -0.07678384, -2.0690763 , -1.1211847 , 1.4821006 , 1.1989193 , 2.1933236 , 0.5296372 , 3.0646474 , -1.7223308 , -1.3634219 , -0.47471118, -1.7648507 , 3.565178 , -2.394205 , -1.3800384 ], dtype=float32) Here you use the .vector attribute on the second token in the filtered_tokens list, which in this set of examples is the word Dave. Note: If you get different results for the .vector attribute, don’t worry. This could be because you’re using a different version of the en_core_web_sm model or, potentially, of spaCy itself. Now that you’ve learned about some of the typical text preprocessing steps in spaCy, you’ll learn how to classify text. Using Machine Learning Classifiers to Predict Sentiment Your text is now processed into a form understandable by your computer, so you can start to work on classifying it according to its sentiment. You’ll cover three topics that will give you a general understanding of machine learning classification of text data: - What machine learning tools are available and how they’re used - How classification works - How to use spaCy for text classification First, you’ll learn about some of the available tools for doing machine learning classification. Machine Learning Tools There are a number of tools available in Python for solving classification problems. Here are some of the more popular ones: This list isn’t all-inclusive, but these are the more widely used machine learning frameworks available in Python. They’re large, powerful frameworks that take a lot of time to truly master and understand. TensorFlow is developed by Google and is one of the most popular machine learning frameworks. You use it primarily to implement your own machine learning algorithms as opposed to using existing algorithms. It’s fairly low-level, which gives the user a lot of power, but it comes with a steep learning curve. PyTorch is Facebook’s answer to TensorFlow and accomplishes many of the same goals. However, it’s built to be more familiar to Python programmers and has become a very popular framework in its own right. Because they have similar use cases, comparing TensorFlow and PyTorch is a useful exercise if you’re considering learning a framework. scikit-learn stands in contrast to TensorFlow and PyTorch. It’s higher-level and allows you to use off-the-shelf machine learning algorithms rather than building your own. What it lacks in customizability, it more than makes up for in ease of use, allowing you to quickly train classifiers in just a few lines of code. Luckily, spaCy provides a fairly straightforward built-in text classifier that you’ll learn about a little later. First, however, it’s important to understand the general workflow for any sort of classification problem. How Classification Works Don’t worry—for this section you won’t go deep into linear algebra, vector spaces, or other esoteric concepts that power machine learning in general. Instead, you’ll get a practical introduction to the workflow and constraints common to classification problems. Once you have your vectorized data, a basic workflow for classification looks like this: - Split your data into training and evaluation sets. - Select a model architecture. - Use training data to train your model. - Use test data to evaluate the performance of your model. - Use your trained model on new data to generate predictions, which in this case will be a number between -1.0 and 1.0. This list isn’t exhaustive, and there are a number of additional steps and variations that can be done in an attempt to improve accuracy. For example, machine learning practitioners often split their datasets into three sets: - Training - Validation - Test The training set, as the name implies, is used to train your model. The validation set is used to help tune the hyperparameters of your model, which can lead to better performance. Note: Hyperparameters control the training process and structure of your model and can include things like learning rate and batch size. However, which hyperparameters are available depends very much on the model you choose to use. The test set is a dataset that incorporates a wide variety of data to accurately judge the performance of the model. Test sets are often used to compare multiple models, including the same models at different stages of training. Now that you’ve learned the general flow of classification, it’s time to put it into action with spaCy. How to Use spaCy for Text Classification You’ve already learned how spaCy does much of the text preprocessing work for you with the nlp() constructor. This is really helpful since training a classification model requires many examples to be useful. Additionally, spaCy provides a pipeline functionality that powers much of the magic that happens under the hood when you call nlp(). The default pipeline is defined in a JSON file associated with whichever preexisting model you’re using ( en_core_web_sm for this tutorial), but you can also build one from scratch if you wish. Note: To learn more about creating your own language processing pipelines, check out the spaCy pipeline documentation. What does this have to do with classification? One of the built-in pipeline components that spaCy provides is called textcat (short for TextCategorizer), which enables you to assign categories (or labels) to your text data and use that as training data for a neural network. This process will generate a trained model that you can then use to predict the sentiment of a given piece of text. To take advantage of this tool, you’ll need to do the following steps: - Add the textcatcomponent to the existing pipeline. - Add valid labels to the textcatcomponent. - Load, shuffle, and split your data. - Train the model, evaluating on each training loop. - Use the trained model to predict the sentiment of non-training data. - Optionally, save the trained model. Note: You can see an implementation of these steps in the spaCy documentation examples. This is the main way to classify text in spaCy, so you’ll notice that the project code draws heavily from this example. In the next section, you’ll learn how to put all these pieces together by building your own project: a movie review sentiment analyzer. Building Your Own NLP Sentiment Analyzer From the previous sections, you’ve probably noticed four major stages of building a sentiment analysis pipeline: - Preprocessing - Training the classifier - Classifying data For building a real-life sentiment analyzer, you’ll work through each of the steps that compose these stages. You’ll use the Large Movie Review Dataset compiled by Andrew Maas to train and test your sentiment analyzer. Once you’re ready, proceed to the next section to load your data. If you haven’t already, download and extract the Large Movie Review Dataset. Spend a few minutes poking around, taking a look at its structure, and sampling some of the data. This will inform how you load the data. For this part, you’ll use spaCy’s textcat example as a rough guide. You can (and should) decompose the loading stage into concrete steps to help plan your coding. Here’s an example: - Load text and labels from the file and directory structures. - Shuffle the data. - Split the data into training and test sets. - Return the two sets of data. This process is relatively self-contained, so it should be its own function at least. In thinking about the actions that this function would perform, you may have thought of some possible parameters. Since you’re splitting data, the ability to control the size of those splits may be useful, so split is a good parameter to include. You may also wish to limit the total amount of documents you process with a limit parameter. You can open your favorite editor and add this function signature: def load_training_data( data_directory: str = "aclImdb/train", split: float = 0.8, limit: int = 0 ) -> tuple: With this signature, you take advantage of Python 3’s type annotations to make it absolutely clear which types your function expects and what it will return. The parameters here allow you to define the directory in which your data is stored as well as the ratio of training data to test data. A good ratio to start with is 80 percent of the data for training data and 20 percent for test data. All of this and the following code, unless otherwise specified, should live in the same file. Next, you’ll want to iterate through all the files in this dataset and load them into a list: import os)) While this may seem complicated, what you’re doing is constructing the directory structure of the data, looking for and opening text files, then appending a tuple of the contents and a label dictionary to the reviews list. The label dictionary structure is a format required by the spaCy model during the training loop, which you’ll see soon. Note: Throughout this tutorial and throughout your Python journey, you’ll be reading and writing files. This is a foundational skill to master, so make sure to review it while you work through this tutorial. Since you have each review open at this point, it’s a good idea to replace the <br /> HTML tags in the texts with newlines and to use .strip() to remove all leading and trailing whitespace. For this project, you won’t remove stop words from your training data right away because it could change the meaning of a sentence or phrase, which could reduce the predictive power of your classifier. This is dependent somewhat on the stop word list that you use. After loading the files, you want to shuffle them. This works to eliminate any possible bias from the order in which training data is loaded. Since the random module makes this easy to do in one line, you’ll also see how to split your shuffled data: import os import random)) random.shuffle(reviews) if limit: reviews = reviews[:limit] split = int(len(reviews) * split) return reviews[:split], reviews[split:] Here, you shuffle your data with a call to random.shuffle(). Then you optionally truncate and split the data using some math to convert the split to a number of items that define the split boundary. Finally, you return two parts of the reviews list using list slices. Here’s a sample output, truncated for brevity: ( 'When tradition dictates that an artist must pass (...)', {'cats': {'pos': True, 'neg': False}} ) To learn more about how random works, take a look at Generating Random Data in Python (Guide). Note: The makers of spaCy have also released a package called thinc that, among other features, includes simplified access to large datasets, including the IMDB review dataset you’re using for this project. You can find the project on GitHub. If you investigate it, look at how they handle loading the IMDB dataset and see what overlaps exist between their code and your own. Now that you’ve got your data loader built and have some light preprocessing done, it’s time to build the spaCy pipeline and classifier training loop. Training Your Classifier Putting the spaCy pipeline together allows you to rapidly build and train a convolutional neural network (CNN) for classifying text data. While you’re using it here for sentiment analysis, it’s general enough to work with any kind of text classification task as long as you provide it with the training data and labels. In this part of the project, you’ll take care of three steps: - Modifying the base spaCy pipeline to include the textcatcomponent - Building a training loop to train the textcatcomponent - Evaluating the progress of your model training after a given number of training loops First, you’ll add textcat to the default spaCy pipeline. Modifying the spaCy Pipeline to Include textcat For the first part, you’ll load the same pipeline as you did in the examples at the beginning of this tutorial, then you’ll add the textcat component if it isn’t already present. After that, you’ll add the labels that your data uses ( "pos" for positive and "neg" for negative) to textcat. Once that’s done, you’ll be ready to build the training loop:) If you’ve looked at the spaCy documentation’s textcat example already, then this should look pretty familiar. First, you load the built-in en_core_web_sm pipeline, then you check the .pipe_names attribute to see if the textcat component is already available. If it isn’t, then you create the component (also called a pipe) with .create_pipe(), passing in a configuration dictionary. There are a few options that you can work with described in the TextCategorizer documentation. Finally, you add the component to the pipeline using .add_pipe(), with the last parameter signifying that this component should be added to the end of the pipeline. Next, you’ll handle the case in which the textcat component is present and then add the labels that will serve as the categories for your text:") If the component is present in the loaded pipeline, then you just use .get_pipe() to assign it to a variable so you can work on it. For this project, all that you’ll be doing with it is adding the labels from your data so that textcat knows what to look for. You’ll do that with .add_label(). You’ve created the pipeline and prepared the textcat component for the labels it will use for training. Now it’s time to write the training loop that will allow textcat to categorize movie reviews. Build Your Training Loop to Train textcat To begin the training loop, you’ll first set your pipeline to train only the textcat component, generate batches of data for it with spaCy’s minibatch() and compounding() utilities, and then go through them and update your model. A batch is just a subset of your data. Batching your data allows you to reduce the memory footprint during training and more quickly update your hyperparameters. Note: Compounding batch sizes is a relatively new technique and should help speed up training. You can learn more about compounding batch sizes in spaCy’s training tips. Here’s an implementation of the training loop described above: 1import os 2import random 3import spacy 4from spacy.util import minibatch, compounding 5 6def train_model( 7 training_data: list, 8 test_data: list, 9 iterations: int = 20 10) -> None: 11 # Build pipeline 12 nlp = spacy.load("en_core_web_sm") 13 if "textcat" not in nlp.pipe_names: 14 textcat = nlp.create_pipe( 15 "textcat", config={"architecture": "simple_cnn"} 16 ) 17 nlp.add_pipe(textcat, last=True) 18 else: 19 textcat = nlp.get_pipe("textcat") 20 21 textcat.add_label("pos") 22 textcat.add_label("neg") 23 24 # Train only textcat 25 training_excluded_pipes = [ 26 pipe for pipe in nlp.pipe_names if pipe != "textcat" 27 ] On lines 25 to 27, you create a list of all components in the pipeline that aren’t the textcat component. You then use the nlp.disable() context manager to disable those components for all code within the context manager’s scope. Now you’re ready to add the code to begin training: Here, you call nlp.begin_training(), which returns the initial optimizer function. This is what nlp.update() will use to update the weights of the underlying model. You then use the compounding() utility to create a generator, giving you an infinite series of batch_sizes that will be used later by the minibatch() utility. Now you’ll begin training on batches of data: ) Now, for each iteration that is specified in the train_model() signature, you create an empty dictionary called loss that will be updated and used by nlp.update(). You also shuffle the training data and split it into batches of varying size with minibatch(). For each batch, you separate the text and labels, then fed them, the empty loss dictionary, and the optimizer to nlp.update(). This runs the actual training on each example. The dropout parameter tells nlp.update() what proportion of the training data in that batch to skip over. You do this to make it harder for the model to accidentally just memorize training data without coming up with a generalizable model. This will take some time, so it’s important to periodically evaluate your model. You’ll do that with the data that you held back from the training set, also known as the holdout set. Evaluating the Progress of Model Training Since you’ll be doing a number of evaluations, with many calculations for each one, it makes sense to write a separate evaluate_model() function. In this function, you’ll run the documents in your test set against the unfinished model to get your model’s predictions and then compare them to the correct labels of that data. Using that information, you’ll calculate the following values: True positives are documents that your model correctly predicted as positive. For this project, this maps to the positive sentiment but generalizes in binary classification tasks to the class you’re trying to identify. False positives are documents that your model incorrectly predicted as positive but were in fact negative. True negatives are documents that your model correctly predicted as negative. False negatives are documents that your model incorrectly predicted as negative but were in fact positive. Because your model will return a score between 0 and 1 for each label, you’ll determine a positive or negative result based on that score. From the four statistics described above, you’ll calculate precision and recall, which are common measures of classification model performance: Precision is the ratio of true positives to all items your model marked as positive (true and false positives). A precision of 1.0 means that every review that your model marked as positive belongs to the positive class. Recall is the ratio of true positives to all reviews that are actually positive, or the number of true positives divided by the total number of true positives and false negatives. The F-score is another popular accuracy measure, especially in the world of NLP. Explaining it could take its own article, but you’ll see the calculation in the code. As with precision and recall, the score ranges from 0 to 1, with 1 signifying the highest performance and 0 the lowest. For evaluate_model(), you’ll need to pass in the pipeline’s tokenizer component, the textcat component, and your test dataset: def evaluate_model( tokenizer, textcat, test_data: list ) -> dict: reviews, labels = zip(*test_data) reviews = (tokenizer(review) for review in reviews) true_positives = 0 false_positives = 1e-8 # Can't be 0 because of presence in denominator true_negatives = 0 false_negatives = 1e-8 for i, review in enumerate(textcat.pipe(reviews)): true_label = labels[i] for predicted_label, score in review.cats.items(): # Every cats dictionary includes both labels. You can get all # the info you need with just the pos label. if ( predicted_label == "neg" ): continue if score >= 0.5 and true_label["pos"]: true_positives += 1 elif score >= 0.5 and true_label["neg"]: false_positives += 1 elif score < 0.5 and true_label["neg"]: true_negatives += 1 elif score < 0.5 and true_label["pos"]: false_negatives += 1 precision = true_positives / (true_positives + false_positives) recall = true_positives / (true_positives + false_negatives) if precision + recall == 0: f_score = 0 else: f_score = 2 * (precision * recall) / (precision + recall) return {"precision": precision, "recall": recall, "f-score": f_score} In this function, you separate reviews and their labels and then use a generator expression to tokenize each of your evaluation reviews, preparing them to be passed in to textcat. The generator expression is a nice trick recommended in the spaCy documentation that allows you to iterate through your tokenized reviews without keeping every one of them in memory. You then use the score and true_label to determine true or false positives and true or false negatives. You then use those to calculate precision, recall, and f-score. Now all that’s left is to actually call evaluate_model(): def train_model(training_data: list, test_data: list, iterations: int = 20): # Previously seen code omitted for brevity. # Training loop print("Beginning training") print("Loss\tPrecision\tRecall\tF-score")']}" ) Here you add a print statement to help organize the output from evaluate_model() and then call it with the .use_params() context manager in order to use the model in its current state. You then call evaluate_model() and print the results. Once the training process is complete, it’s a good idea to save the model you just trained so that you can use it again without training a new model. After your training loop, add this code to save the trained model to a directory called model_artifacts located within your working directory: # Save model with nlp.use_params(optimizer.averages): nlp.to_disk("model_artifacts") This snippet saves your model to a directory called model_artifacts so that you can make tweaks without retraining the model. Your final training function should look like this:") print("Loss\tPrecision\tRecall\tF-score") batch_sizes = compounding( 4.0, 32.0, 1.001 ) # A generator that yields infinite series of input numbers for i in range(iterations): print(f"Training iteration {i}")']}" ) # Save model with nlp.use_params(optimizer.averages): nlp.to_disk("model_artifacts") In this section, you learned about training a model and evaluating its performance as you train it. You then built a function that trains a classification model on your input data. Classifying Reviews Now that you have a trained model, it’s time to test it against a real review. For the purposes of this project, you’ll hardcode a review, but you should certainly try extending this project by reading reviews from other sources, such as files or a review aggregator’s API. The first step with this new function will be to load the previously saved model. While you could use the model in memory, loading the saved model artifact allows you to optionally skip training altogether, which you’ll see later. Here’s the test_model() signature along with the code to load your saved model: def test_model(input_data: str=TEST_REVIEW): # Load saved trained model loaded_model = spacy.load("model_artifacts") In this code, you define test_model(), which includes the input_data parameter. You then load your previously saved model. The IMDB data you’re working with includes an unsup directory within the training data directory that contains unlabeled reviews you can use to test your model. Here’s one such review. You should save it (or a different one of your choosing) in a TEST_REVIEW constant at the top of your file: import os import random import spacy from spacy.util import minibatch, compounding TEST.) """ Next, you’ll pass this review into your model to generate a prediction, prepare it for display, and then display it to the user: def test_model(input_data: str = TEST_REVIEW): # Load saved trained model loaded_model = spacy.load("model_artifacts") # Generate prediction parsed_text = loaded_model(input_data) # Determine prediction to return if parsed_text.cats["pos"] > parsed_text.cats["neg"]: prediction = "Positive" score = parsed_text.cats["pos"] else: prediction = "Negative" score = parsed_text.cats["neg"] print( f"Review text: {input_data}\nPredicted sentiment: {prediction}" f"\tScore: {score}" ) In this code, you pass your input_data into your loaded_model, which generates a prediction in the cats attribute of the parsed_text variable. You then check the scores of each sentiment and save the highest one in the prediction variable. You then save that sentiment’s score to the score variable. This will make it easier to create human-readable output, which is the last line of this function. You’ve now written the load_data(), train_model(), evaluate_model(), and test_model() functions. That means it’s time to put them all together and train your first model. Connecting the Pipeline So far, you’ve built a number of independent functions that, taken together, will load data and train, evaluate, save, and test a sentiment analysis classifier in Python. There’s one last step to make these functions usable, and that is to call them when the script is run. You’ll use the if __name__ == "__main__": idiom to accomplish this: if __name__ == "__main__": train, test = load_training_data(limit=2500) train_model(train, test) print("Testing model") test_model() Here you load your training data with the function you wrote in the Loading and Preprocessing Data section and limit the number of reviews used to 2500 total. You then train the model using the train_model() function you wrote in Training Your Classifier and, once that’s done, you call test_model() to test the performance of your model. Note: With this number of training examples, training can take ten minutes or longer, depending on your system. You can reduce the training set size for a shorter training time, but you’ll risk having a less accurate model. What did your model predict? Do you agree with the result? What happens if you increase or decrease the limit parameter when loading the data? Your scores and even your predictions may vary, but here’s what you should expect your output to look like: $ python pipeline.py Training model Beginning training Loss Precision Recall F-score Testing model Review.) Predicted sentiment: Positive Score: 0.8773064017295837 As your model trains, you’ll see the measures of loss, precision, and recall and the F-score for each training iteration. You should see the loss generally decrease. The precision, recall, and F-score will all bounce around, but ideally they’ll increase. Then you’ll see the test review, sentiment prediction, and the score of that prediction—the higher the better. You’ve now trained your first sentiment analysis machine learning model using natural language processing techniques and neural networks with spaCy! Here are two charts showing the model’s performance across twenty training iterations.. The precision, recall, and F-score are pretty stable after the first few training iterations. What could you tinker with to improve these values? Conclusion Congratulations on building your first sentiment analysis model in Python! What did you think of this project? Not only did you build a useful tool for data analysis, but you also picked up on a lot of the fundamental concepts of natural language processing and machine learning. In this tutorial, you learned how to: - Use natural language processing techniques - Use a machine learning classifier to determine the sentiment of processed text data - Build your own NLP pipeline with spaCy You now have the basic toolkit to build more models to answer any research questions you might have. If you’d like to review what you’ve learned, then you can download and experiment with the code used in this tutorial at the link below: Get the Source Code: Click here to get the source code you’ll use to learn about sentiment analysis with natural language processing in this tutorial. What else could you do with this project? See below for some suggestions. Next Steps With Sentiment Analysis and Python This is a core project that, depending on your interests, you can build a lot of functionality around. Here are a few ideas to get you started on extending this project: The data-loading process loads every review into memory during load_data(). Can you make it more memory efficient by using generator functions instead? Rewrite your code to remove stop words during preprocessing or data loading. How does the mode performance change? Can you incorporate this preprocessing into a pipeline component instead? Use a tool like Click to generate an interactive command-line interface. Deploy your model to a cloud platform like AWS and wire an API to it. This can form the basis of a web-based tool. Explore the configuration parameters for the textcatpipeline component and experiment with different configurations. Explore different ways to pass in new reviews to generate predictions. Parametrize options such as where to save and load trained models, whether to skip training or train a new model, and so on. This project uses the Large Movie Review Dataset, which is maintained by Andrew Maas. Thanks to Andrew for making this curated dataset widely available for use.
https://realpython.com/sentiment-analysis-python/
CC-MAIN-2020-50
refinedweb
6,065
53.41
Feature #5582open Allow clone of singleton methods on a BasicObject Description Currently I do not know of a way to implement something like 'clone' on a BasicObject subclass. This is as close as I've gotten but as you can see the singleton methods are not propagated to the clone. Is there a way to do this that I don't see? If not, then I request that a way be added - perhaps by allowing the singleton_class to be set somehow.. Thank you. Updated by kernigh (George Koehler) over 9 years ago =begin My first attempt: module Clone include Kernel (instance_methods - [:clone, :initialize_clone]).each {|m| undef_method m} end b = BasicObject.new class << b include ::Clone def single; "Quack!"; end end c = b.clone puts c.single Output: scratch.rb:3: warning: undefining `object_id' may cause serious problems Quack! Clone inherits from Kernel, but undefines all its instance methods except Clone#clone and Clone#initialize_clone. This technique has some awful side effects: Kernel === b and Kernel === c become true. Clone might inherit metamethods from Kernel (because I only undefined instance methods, not metamethods). =end Updated by mame (Yusuke Endoh) over 9 years ago - Status changed from Open to Assigned - Assignee set to matz (Yukihiro Matsumoto) Updated by mame (Yusuke Endoh) over 8 years ago - Target version changed from 1.9.2 to 2.6 Updated by nobu (Nobuyoshi Nakada) over 8 years ago =begin 2.0 allows `method transplanting'. module Clone %i[clone initialize_copy initialize_dup initialize_clone].each do |m| define_method(m, Kernel.instance_method(m)) end end =end Updated by naruse (Yui NARUSE) over 3 years ago - Target version deleted ( 2.6) Also available in: Atom PDF
https://bugs.ruby-lang.org/issues/5582
CC-MAIN-2021-31
refinedweb
275
68.16
Python for Kids: Python 3 Summary of Changes October 19, 2016 Leave a comment While my Python 3 posts seemed to stretch for pages and pages with differences, there actually aren’t very many changes at all. Most of that space is taken up by the code outputs (which often had only minor changes) and unchanged code (that had to be there for context). In fact, while the book is about 300 pages long, just a handful of changes are needed to get the whole of the code in the book to run in Python 3. Those changes (in alphabetical order by topic) are below. Check them out if you’re having trouble with your other Python 2.7 code: Links to the Python 3 updates for each of the Projects: Project 2, Project 3, Project 4, Project 5, Project 6, Project 7, Project 8, Project 9, Project 10. class In Python 3, classes inherit from object automatically, so you don’t need (object) in the first line of the class definition. It’s not an error, but it is superfluous. # Python 2.7 >>> class AddressEntry(object): """ AddressEntry instances hold and manage details of a person """ pass # Python 3 >>> class AddressEntry: # note: no (object) """ AddressEntry instances hold and manage details of a person """ pass Floating point division Python 2 code in the book will work with Python 3. Some changes to floating point is now automatic in Python 3, so the code to change a number into floating point (eg float(2)) is unnecessary. import cPickle as pickle Python 3 uses cPickle by default, so replace import cPickle as pickle by just import pickle. If you try to import cPickle, you’ll get an error. open In Python 3 open() has the same syntax as in Python 2.7, but uses a different way to get data out of the file and into your hands. As a practical matter this means that some Python 2.7 code will sometimes cause problems when run in Python 3. If you run into such a problem (open code that works in Python 2.7 but fails in Python 3), the first thing to try is to add the binary modifier – you’ll need it when reading and writing pickle files for instance. So, instead of ‘r’ or ‘w’ for read and write use ‘rb’ or ‘wb’. #Python2.7 >>> import pickle >>>>> dummy_list = [x*2 for x in range(10)] >>> with open(FILENAME,'w') as file_object: #now dump it! pickle.dump(dummy_list,file_object) >>> # open the raw file to look at what was written >>> with open(FILENAME,'r') as file_object: # change w to r!!! print(file_object.read()) #Python3 >>> import pickle >>>>> dummy_list = [x*2 for x in range(10)] >>> with open(FILENAME,'wb') as file_object: #### note: 'wb' not 'w' pickle.dump(dummy_list,file_object) >>> # open the raw file to look at what was written >>> with open(FILENAME,'rb') as file_object: ##### note 'rb' not 'r' print(file_object.read()) Print – mostly the same, since I used Python 3 print syntax in the book. There is an issue with the print continuation character (trailing comma). That needs to be replaced by an end parameter: #Python 2.7 code: >>>>> while True: ... print(my_message), #<- notice the comma at the end ... #Python 3 >>>>> while True: ... print(my_message, end="") ... If you’re using Python 2.7 code that’s not in my book, it might look like this: #Python 2.7 code: >>> print "Hello World" To make this work in Python 3, you put brackets around what’s to be printed: #Python 3 >>> print("Hello World") raw_input v input In Python 3 replace raw_input by input wherever you see it. Literally, input is simply a new name for raw_input. Range vs xrange The book uses range in anticipation of upgrading to Python 3, so mostly the code will work without changes! If you have code that uses xrange, just rename it to range and all should be well. In one case the code assumed that the output of range is a list (which is is in Python 2.7, but not in Python 3). The code’s syntax was correct, but led to a logical error. That was corrected by choosing a way to test for the end of the loop that didn’t assume a list was involved.
https://python4kids.brendanscott.com/2016/10/19/python-for-kids-python-3-summary-of-changes/
CC-MAIN-2017-26
refinedweb
714
79.5
Occasionally JUnit Rule that provides such a functionality and explains how to use it. To begin with take a look at the following example. It depicts a test case that produces intermittent failures of testB. The reason for this is that the outcome depends on the execution order of all tests due to side effects1. More precisely Display.getDefault() in principle returns a lazily instantiated singleton, while Display.getCurrent() is a simple accessor of this singleton. As a consequence testB fails if it runs after testA2. public class FooTest { @Test public void testA() { Display actual = Display.getDefault(); assertThat( actual ).isNotNull(); } @Test public void testB() { Display actual = Display.getCurrent(); assertThat( actual ).isNull(); } } To avoid some behind-the-scene-magic, which bears the risk to make code less understandable, we could ensure that an existing display is disposed before the actual test execution takes place3. @Before public void setUp() { if( Display.getCurrent() != null ) { Display.getCurrent().dispose(); } } Unfortunately this approach cannot be used within an integration test suite that runs PDE tests for example. The PDE runtime creates a single Display instance, whose lifetime spans all test runs. So display disposal would not be an option and testB would fail within PDE test suite execution all the time4. At this point it is important to remember that the Display singleton is bound to its creation thread (quasi ThreadLocal)5. Because of this testB should run reliable, if executed in its own thread. However thread handling is usually somewhat cumbersome at best and creates a lot of clutter, reducing the readability of test methods. This gave me the idea to create a TestRule implementation that encapsulates the thread handling and keeps the test code clean: public class FooTest { @Rule public RunInThreadRule runInThread = new RunInThreadRule(); @Test public void testA() { Display actual = Display.getDefault(); assertThat( actual ).isNotNull(); } @Test @RunInThread public void testB() { Display actual = Display.getCurrent(); assertThat( actual ).isNull(); } } The RunInThreadRule class allows to run a single test method in its own thread. It takes care of the demon thread creation, test execution, awaiting of thread termination and forwarding of the test outcome to the main thread. To mark a test to be run in a separate thread the test method has to be annotated with @RunInThread as shown above. With this in place testB is now independent from the execution order of the tests and succeeds reliable. But one should be careful not to overuse RunInThreadRule. Although the @RunInThread annotation signals that a test runs in a separate thread it does not explain why. This may easily obfuscate the real scope of such a test. Hence I use this usually only as a last resort solution. E.g. it may be reasonable in case when a third party library relies on an encapsulated ThreadLocal that cannot be cleared or reset by API functionality. For those who like to check out the RunInThreadRule implementation I have created a GitHub gist: For a real-world usage you might also have a look at the PgmResourceBundlePDETest implementation of our Gonsole project hosted at:. - Note that JUnit sorts the test methods in a deterministic, but not predictable, order by default - Consider also the possibility that testAmight be in a other test case and the problem only occurs when running a large suite - Then again I don’t like this kind of practice either, so for a more sophisitcated solution you may have a look at the post A JUnit Rule to Ease SWT Test Setup - In the meanwhile you have probably recognized that the simplistic example test case is not very useful, but I hope it is sufficient to get the motivation explained. - This makes such a thread the user interface thread in SWT. SWT implements a single-threaded UI model often called apartment threading
http://www.javacodegeeks.com/2014/07/a-junit-rule-to-run-a-test-in-its-own-thread.html
CC-MAIN-2014-52
refinedweb
626
54.73
Hi, We needed to patch Gradle source code and until the patch gets into the official repo, we must used our version. In order to distinguish it from the official one, the fourth digit has been added. Unfortunately, building our project using our Gradle distribution throws the following error: Plugin [id: 'com.github.johnrengelman.shadow', version: '1.2.3'] was not found in any of the following sources: - Gradle Core Plugins (plugin is not in 'org.gradle' namespace) - Gradle Central Plugin Repository (2.14.1.1 is not a valid Gradle version number) The plugin is declared using plugins block. The issue could be worked around by reverting to the old approach using buildscript and apply plugin: 'com.github.johnrengelman.shadow'. Is it really necessary to have this check? Thanks, Predrag
https://discuss.gradle.org/t/2-14-1-1-is-not-a-valid-gradle-version-number/20206
CC-MAIN-2018-51
refinedweb
131
58.08
! After installing code blocks,in global compiler settings I found the [-std=c++11] and also C++14 ISO C++ language standard [-std=c++14]. According to instructions I selected [std=c++14] and clicked ok. Now whenever I restart code blocks, the toolbar item called “Build Target” becomes faid in colour hence can't switch between debug & release. Besides this, on restarting, under global compiler settings the [-std=c++14] is now vanished but [-std=c++11] is still there , so I have to check box C++11. On clicking other compiler options under compiler settings [-std=c++14] is showing. Every time when I opens code block the toolbar item called “Build Target” becomes faid or disabled by itself. What is the solution? When you start code::blocks, are you reopening a project? The build target should only be available once you've done so. Is that means if I edit any previous program one or more time then I can't get access to build and release function at the same time. Do I have to make new project again to get build and release function? And what about [-std=c++14] issue? why it vanished and present only in other compiler options tab(under compiler settings)? Once you create a project you should be able to reopen it and continue where you left off. I can't speak to why you're running into these issues, as it appears to be a program or configuration-specific problem. Perhaps ask on a code::blocks specific forum. Another problem: After making the very first program "hello world" when I press build(ctrl+f9) it shows: === Build file: "no target" in "no project" (compiler: unknown) === === Build finished: 0 error(s), 0 warning(s) (0 minute(s), 10 second(s)) === Why "compiler: unknown" is showing in the first line? Although program executes successfully. It sounds like your IDE hasn't been set up properly somehow. If I were you, I'd try uninstalling code::blocks, rebooting, and then try installing again. If you see the same thing, maybe try a different IDE. But I already did that twice, nothing changed. Hello, Im using WinXP, and when I install CodeBlocks I keep getting message/notification "Cant find compiler executable in your configured search paths for GNU GCC compiler" - thats my first problem. Second is that, when I write a program altho i have this compiler problem, i get this message in CodeBlocks "Debugger" section "Error : You need to specify a debugger program in the debuggers´s setting. <<For MinGW compilers, its "gdb.exe" (without the quotes)>> <<For MSVC compilers, its "cdb.exe" (without the quotes) >> Thank You in advance. Did you install the version of Code::Blocks that contains mingw? I'm new to this site and I really don't know how to comment separately in this section because it's only "Reply" word I can see here. Can anyone help me? Scroll to the bottom. There's a comment box and a "Post comment" button there. Oh I never scrolled comments section till last. Thank you. Im 15, and I'm trying to get a head start on C++ and Java because I want to work at Treyarch(cliche I know lol) but this seems as if it requires previous knowledge of C++. Is this guide good for people that know absolutely nothing(me) about coding or software in general? This tutorial does not assume you have any prior programming experience. Everything you need to know, you will be taught along the way. I find Code::Blocks better because that's the one I find the layout more I guess logical "Learn -. Heh- right after i say it's perfect EDT I dum nvm Hello do you still have instruction for visual studio 2015? I just got a new laptop, but I don't know what components of visual studio 2015 i should install on my laptop For Visual Studio 2015, do a Custom Installation and then under programming languages, choose "C++". See this image and this image. Can I use Netbeans for C++? I have seen there is an option for developing C++ programs in Netbeans but the Netbeans Installed in the PC I am using is used for writing Java Programs. Sounds like you can, but you'd have to figure out how to set it up properly. if you ever start coding try this exact code #include <iostream> int main(){cout << "hello\n"; } It just gives me errors, explain pls Which Workloads/Individual components should i install with the new Visual Studio 2017 to be able to follow this tutorial ? I've updated the tutorial to show what to do for Visual Studio 2017. Which program is the best if you're using a Chromebook provided by the school? I love my own personal Chromebook, but unfortunately it's not suitable for learning C++ (or most other computer languages) as it isn't intended to support applications such as compilers, code editors or IDEs. With some effort, some Chromebooks are capable of running Linux which is ideal for learning C++, but your school probably has your Chromebook configured in such a way that it can't be reconfigured in this manner. To learn C++, you don't really need a powerful computer. Pretty much anything able to run a modern version of Windows (7, 8, or 10) or Linux should be fine. For instance, if you are somewhat adventurous, you can get a Raspberry Pi 3, install Raspbian Linux on an SD card and run the Code::Blocks IDE to learn C++. A Raspberry Pi costs less than $40, but you will need to scrounge up a second hand USB keyboard, USB mouse, HDMI monitor and 5 volt power brick to get a complete system running. With some searching around secondhand stores you may be able to put it all together for less than $80. There are tutorials on YouTube and in the Raspberry Pi forums that will guide you through the process of configuring a Raspberry Pi and getting Code::Blocks running. You can also use a web-based compiler as an option of last resort. They're pretty limited, but if that's all you have access to it's certainly better than nothing. Hey Guys just started with these tutorials and are really looking forward to the rest. I just want to know if anyone recommends QT (my IT teacher at school said he will supply me with a copy cuz internet is really shit where I live). Just keep in mind that I am a complete beginner. I am on a windows machine. If you guys recommend QT, do I need to know anything else to use QT or can I just follow the tutorials? QT isn't a compiler, it's a cross-platform application development framework that's used with other compilers. QT is a fantastic framework, but you won't need it for these tutorials. Thanks for IDE tutorial, Going to use xcode. But cannot we use sublime text with some plugin? If Sublime supports plug-in compilers (such as gcc/mingw) then yes, you could use it. But you'll have to figure out how to configure it on your own. C++ 17 is coming guys!! It might also be worth adding that Visual Studio Code is now available for any os... Visual Studio Code is great, and super flexible, but it's not easy to set it up to compile programs. Therefore, I can't in good conscience recommend it to new programmers. I am downloading XCODE for MAC OS. Any specific instructions or settings i shall have to enable? Greetings Hm, is it okay if I just stick with Vim, GDB and the terminal to write/compile/test/debug programs rather than using an IDE? Kinda want to learn how to work with these rather than use an IDE, at least for now. Sure, you're welcome to do so if you can figure out how. Everything we cover in these tutorials should work just fine. Hi. I use Visual Studios and when i try to debug my project through the Local Windows Debugger, I get the message: Unable to start program, the file was not found. I have tried to locate the file manually, but could not find it. I have also tried to look for something named 'compile' but found nothing. Is this the correct way I have tried to start my project, or is there another? ( I have used both the debug and the release configuration to see if it worked, but it came with the same answer ... it would be strange if the file did not exist at all, because it is saved. ) It should just work. Sounds like something didn't get set up correctly. Try recreating your project again. How to download Visual C++. I have already downloaded Visual Studio Community 2015 for my Window 10 laptop. Visual C++ is part of Visual Studio Community 2015. If you didn't enable that option when you installed, there should be a way to enable it from within the application (from within the create new project dialog). My dear c++ Teacher, Please let me say that, recently I found this "ide" online: It looks to me good. Do you suggest me that? With regards and friendship. For compiling simple programs, it seems sufficient. However, as with all web compilers, there is no way to debug your programs, so I would not recommend it for anything non-trivial. My dear c++ Teacher, Please let me say that, unfortunately, it is not sufficient even for simple programs. It does not execute "std::cin" object. However compiles and executes programs with files, apart object "std::cin". With regards and friendship. You appear to be incorrect. I was able to compile and execute the following program just fine: To enter your input, you need to click on the green console area first to ensure it has focus. My dear c++ Teacher, Please let me express my sincere gratitude for you replied and instructed me properly. For sure I am incorrect. I did not find your instruction in "help" or elsewhere. Did you understand it by your experience? With regards and friendship. Yes, I discovered it just by trial and error. i have ide called turbo c++.Is it an ide and will it work for what we will be doing here. and thanks for the website it is awesome Turbo C++ is an IDE, but it is outdated and no longer supported. You'd be much better off updating to a IDE with modern capabilities. Time to embark on the journey ahead. I feel this guide will be very fun to follow. I'll see all of you at the end :) Hi Alex This guide is very good and easy to understand. Well done. My question is that what are the minimum system requirements to program on visual studio express 2013 ? Actually I'm about to go to college to study computer engineering, that's why I want to learm c but can't afford an expensive laptop. I'm not sure on the min spec for Visual Studio 2013, but I'm sure Google could point you in the right direction. If performance is likely to be an issue, you might want to stick with Code::Blocks. Ok, thanks Hello there. This site has been quite helpful to me since i missed classes and our teacher memorises the code and teaches us instead of understanding .. so so. Anyways it won't be an issue if i use turbo c++ to do these programs right? Thanks again for this wonderful tutorial! Cheers! Turbo C++ is a deprecated compiler that does not comply with modern standards. You really should not be using it. You're better off downloading and installing Visual Studio or Code::Blocks instead (if you can). Isn't c++ usually used to make video games? Most major video games are written in C++, but C++ is used for all kinds of different types of programs. ok I have code::blocks set up on ubuntu 16.04 and I'm having a lot of trouble with it. Everytime I try to compile a project it gives the same error "unterminated quoted string" I can't find any information on this. To rule out all possibilities I copied and pasted the code for "Hello world!" and I copy and pasted several other simple programs (inside and outside of this tutorial) as well as manually typing them. I decided to use Geany as my IDE which was working great until I got to section 1.8 where I could not continue because you can't make a project with multiple files. I think I have some setting wrong in code::blocks I copied and pasted many of the simple programs in this tutorial to both IDE's and geany worked every time and code::blocks gave me the same "unterminated quoted string" error. If anyone can help me it would be greatly appreciated I searched comments on here and I've been googling the problem since it started a week or so ago to no avail. I solved the problem above ^ its so easy that I can't believe I didn't think to try it for so long. I was saving my projects in a folder called "Patrick's stuff" and the apostrophe in "Patrick's" was throwing it off. Maybe I missed this in the tutorial but it might be worth mentioning. The only people that I could find with this problem were ubuntu/code::blocks users so it might only affect users with that combination. Glad you figured it out. Hopefully this comment will help other readers who encounter the same situation. Alex: I see my earlier comment about my difficulties with Codeblocks has disappeared. However, with the help of Carl Herolds' video on You Tube I have finally learned how to manipulate Codeblocks. After the installation process is complete and the program is launched the first few screens take you through some housekeeping tasks. Then a gray screen appears. Go to menu bar above and click on View menu. Click on Manager. Go to Workspace pane on left hand panel. Click on main. "Hello World!" program appears in Editor pane. Go to Build menu above and click on Build/Run. Hello World! appears on output screen. Next, delete what you don't need from "Hello World!" and type your program in its' place. Again go to Build menu and click on Build/Run. If you have not goofed up, your program should perform as expected. If not there's a ton of stuff at the end of Chapter 1 of this tutorial about what to do next. Which is where I am at now. One final comment: deleting a bunch of code to get a clean editor seems like a totally Mickey Mouse way to design an IDE! Your other comment was posted in lesson 0.6. It probably makes more sense here, so I'll move it. XCode is taking forever to install, and as I am 13, I am spending my time typing stupid comments on the Internet! Alex: Cheers to you for this great website! I imagine that others have told you that the user interface for Codeblocks 16.01 is different than earlier versions and thus your screen shots are obsolete. In particular, after the first 3 screens you end up with a gray screen. To get the Management pane you must go to the View menu in the Menu Bar at the top of the screen and click on Management. Then, if you entered "Hello world!" as the title of your program, sure enough after clicking on main.cpp under the projects tab you do indeed get the proper code to generate "Hello world!" on your monitor. I have used CB for C coding practice and find it to be very frustrating. For example, if I choose "Console Application" and enter something other than "Hello world!" I have not been able to discover how to get a pure Editor screen. One ploy I have tried is to generate Hello world code, delete that and then write my own code. After that I can debug and compile. After pressing RUN I get one of 2 results: either "Hello world!" appears on the screen or "It appears that your file has not been built yet. Do you wish to build it now?" Clicking on yes leads absolutely nowhere! Clearly I'm doing something wrong. I have found that a better way is to avoid using Console Application altogether. In the file menu choose New and select the File option in the drop down menu. After entering the file path and file name you get the editor for a source file. The downside to this seems to be that you can run the program only once. After that the build menu shuts down. Finally, C coding in Dev.cpp has been less frustrating for me than CB. However, someone wrote a message in the Codeblocks blog saying that the Dev.cpp developer was no longer maintaining the website and they suggested that new users should be aware of the risks before downloading the program. What you describe sounds rather strange, but I'm not familiar with the latest version of Code::Blocks. I'll have to check it out and update the screenshots once I'm back from vacation. I appear to have successfully installed Visual Studio. I'm at the start page and it feels like I'm about to enter a new world! I am going to install Code::Blocks as well, but I need to free up some more drive space first. I want to do everything in the tutorial, for a while anyway, on both IDEs as I think it will give me a better foundation going forward. If this is a complete waste of time I'm open to input! I encourage you to check out both IDEs. But realistically, once you've compiled a few programs with each, there's probably not much to be gained by continuing to do so beyond that point. Makes sense - thanks Alex. I will post some commentary on my newby experience with the two IDEs. Hi Alex, I am from India and I really love your lesson's. I just wanted to know that in our institute we are using Turbo C++3.20 for writing programs in c++. Can you Please is it fine to use it with your tutoriols or I need to use this visual c++ ?? Turbo C++ is very outdated. You would be better off upgrading to a compiler that supports C++11, such as Visual Studio or Code::Blocks. They are both free. If you can not install new software, then perhaps an online compiler would be a viable choice for you (though they make debugging difficult). Our institute is teaching all codes on turbo c++..as i installed code blocks with borland compiler and its running same as turbo c++ codes now. But in you tutuorials i am facing problem 's,since everything is not explained here. I am beginner just started coding now . Its the time to move on right path and with your guidance i think i can fulfill it. Please help! A few thoughts: 1) Many times I'll come back to things later once I've covered other prerequisite topics first. So if you don't see something, keep reading through the lessons. It may be in a future lesson. 2) Analogy time: If you were learning English, I couldn't teach you every word in English. There's just too many words. So I'd focus on giving you fundamentals: key words, plus rules about how to conjugate verbs, that sort of stuff. Once you learn enough of the basics, suddenly you can start learning more by talking to others, by using a dictionary, etc... This tutorial is similar. We won't cover everything in C++ (that would require a 1000 page book, maybe 2000 if there are lots of examples). But I will teach you lots of fundamentals and context. Then you can go out and learn all the other stuff yourself, using supplemental materials. Hi Alex, Thanks for this great collection of C++ concepts and topics you have on your site here. I fell in love with your site after reading your explanation on the distinction between the 'heap' and the 'stack.' By the way, I am a self-taught C++ programmer and have been using the DevC++ IDE to run my programs. But I am disappointed that, for some reason, the current DevC++ IDE I have does NOT have a means to enable the C++11 option!! That is why I'm trying out the Code::Blocks IDE on here. I have successfully downloaded and installed the IDE and even run the 'Hello World' program. However, I couldn't find this crucial option under Settings->Compiler-> "Have g++ follow the C++11 ISO C++ language standard [-std=C++11]" This is the text enclosed in the red rectangle in your literature. Please how can I get this very much needed C++11 option enabled? You may have downloaded the wrong version of Code::Blocks. Have a read of some of the answers here and see if they help. I downloaded the "codeblocks-16.01mingw-setup.exe" version from the Code::Blocks link in your literature. I would think that's the correct one to download. Right? I checked the link you provided in your reply; the post on there is actually the same problem I encountered. That "Have g++ follow the C++11 ISO C++ language standard [-std=C++11]" is curiously missing in the Settings->Compiler settings->Compiler Flags path!! One entry on the StackOverflow site suggests using the " ...[-std=C++0x]" option. I am yet to try that to see how it works out. Yes, the 16.01mingw version should have a C++11 compatibility mode. I'm not sure what else to suggest. Please respond back if you find a solution, so that other readers who encounter this can be helped. My dear c++ Teacher, Please let me say I had same problem with codeblocks-16.01mingw-setup.exe for Windows XP / Vista / 7 / 8.x / 10. I used " …[-std=C++0x]" option but does not work. Hopefully Dev-C++ 4.9.92 works! With regards and friendship. Name (required) Website
https://www.learncpp.com/cpp-tutorial/installing-an-integrated-development-environment-ide/comment-page-6/
CC-MAIN-2019-13
refinedweb
3,769
74.08
You can add an action to a Snackbar, allowing the user to respond to your message. If you add an action to a Snackbar, the Snackbar puts a button next to the message text. The user can trigger your action by pressing the button. For example, an email app might put an undo button on its "email archived" message; if the user clicks the undo button, the app takes the email back out of the archive. Figure 1. This Snackbar has an Undo button, which restores the item that was just removed. To add an action to a Snackbar message, you need to define a listener object that implements the View.OnClickListener interface. The system calls your listener's onClick() method if the user clicks on the message action. For example, this snippet shows a listener for an undo action: public class MyUndoListener implements View.OnClickListener{ @Override public void onClick(View v) { // Code to undo the user's last action } } Use one of the SetAction() methods to attach the listener to your Snackbar. Be sure to attach the listener before you call show(), as shown in this code sample: Snackbar mySnackbar = Snackbar.make(findViewById(R.id.myCoordinatorLayout), R.string.email_archived, Snackbar.LENGTH_SHORT); mySnackbar.setAction(R.string.undo_string, new MyUndoListener()); mySnackbar.show(); Note: A Snackbar automatically goes away after a short time, so you can't count on the user seeing the message or having a chance to press the button. For this reason, you should consider offering an alternate way to perform any Snackbar action.
https://developer.android.com/training/snackbar/action?hl=ja
CC-MAIN-2018-30
refinedweb
255
53
I have the following code: var r = [ ['Pipe repair', '3 Bravo', 'Household Baker'], ['New connection', '5 Delta', 'Household Griffith'], ['Pipe repair', '3 Bravo', 'Household Baker'], ]; r = r.sort(function(a, b) { return (a[0] > b[0]) ? 1: 0; // EDIT: I mistakingly copied newer code than the original code I was testing. However the answer was still on point. // The original code (that worked in Chrome but not Safari or my Rhino environment): // return a[0] > b[0]; }); console.log(r) The callback you pass to .sort() should return: Your callback is basically giving bad answers to the sort mechanism in Safari, so the sort process gets confused. Specifically, your callback returns 0 when the keys are the same and when the second key is less than the first. For comparing strings, you can use .localeCompare() in modern browsers (basically all of them I know of): r = r.sort(function(a, b) { return a[0].localeCompare(b[0]); });
https://codedump.io/share/IjXMRoKVEjD2/1/javascript-sort-callback-not-working-on-all-browsers
CC-MAIN-2018-09
refinedweb
157
66.94
numpy.polymul(a1, a2) The numpy.polymul function finds the product (multiplication) of two polynomials a1 and a2. As an input, use either poly1d objects or one-dimensional sequences of polynomial coefficients. If you use the latter, arange this polynomial sequence naturally from highest to lowest degree. Examples import numpy as np print(np.polymul([1, 2, 3], [2, 3, 4])) # [ 2 7 16 17 12] You can also use poly1d objects: import numpy as np p1 = np.poly1d([1, 2, 3]) p2 = np.poly1d([2, 3, 4]) print(p1) print(p2) print(np.polymul(p1, p2)) ''' 2 1 x + 2 x + 3 2 2 x + 3 x + 4 4 3 2 2 x + 7 x + 16 x + 17 x + 12 ''' As you see the output looks much like a real polynomial if you use poly1d objects. Any master coder has a “hands-on” mentality with a bias towards action. Try it yourself—play with the function in the following interactive code shell: Exercise: Change the parameters of your polynomials. How does the output change? Guess and check! Master NumPy—and become a data science pro:
https://blog.finxter.com/numpy-polymul/
CC-MAIN-2020-45
refinedweb
186
67.86
This article is for beginners. It describes how to create an FreeRTOS project based on MCUXpresso IDE 10.2.1. In this article, I’m using the following: MCUXpresso IDE 10.2.1 , MCUXpresso SDK 2.4.1 for Frdm-k66f board. With Amazon FreeRTOS v10 .You can get it from FRDM-K66 board Before creating a FreeRTOS project, you have to install SDK first. Download the SDK package SDK_2.4.1_FRDM-K66F.zip, drag and drop it into the “Installed SDKs” view. You will be prompted with a dialog asking you to confirm the import –click OK. The SDK will be automatically installed into MCUXpresso IDE part support repository. Go to the ‘QuickStart’ Panel in the bottom left of the MCUXpresso IDE window, and click new project. On the “Board and/or device selection” page, select board frdmk66f. You will see some description relating to the your selection. Click ‘next’… You will see the basic project creation and setting page. The project will be given a default name based on the MCU name. Name the project, select the right device package. Board files: This field allows the automatic selection of a default set of board support files, else empty board files will be created. Project type: Selecting ‘C’ will automatically select Redlib libraries, selecting c++ will select NewllibNaro librarires. Project option: enable semihost will cause the semihost variant of the chosen library to be selected; CMSIS-Core will cause a CMSIS folder containing a variety of support code to be created. OS: For a FreeRTOS project, make sure FreeRTOS is selected. Please select the drivers and utilities according to your requirements. Click ‘next’, you will go to advanced project settings page. This page will take certain default options based on settings from the first wizard project page. Set library type: Please use Redlib for C projects, and NewlibNarno for SDK C++ projects. Next panel allows options to be set related to Input/Output. Hardware settings: set options such as the type of floating point support available/required. MCU C compiler: Set various compiler options Click ‘finish’ will create a simple ‘hello world’ C project for Freedom K66f . Basically does the initialization of the pins, clocks, debug console and peripherals. int main(void) { /*++ ; } return 0 ; } Click the project settings, we can see some basic information of this project, a right click on these nodes provides direct options to edit the associated setting. #include <stdio.h> #include "board.h" #include "peripherals.h" #include "pin_mux.h" #include "clock_config.h" #include "MK66F18.h" #include "fsl_debug_console.h" /* TODO: insert other include files here. */ /* FreeRTOS kernel includes. */ #include "FreeRTOS.h" #include "task.h" /* TODO: insert other definitions and declarations here. */ /* Task priorities. */ #define my_task_PRIORITY (configMAX_PRIORITIES - 1) /******************************************************************************* * Prototypes ******************************************************************************/ static void my_task(void *pvParameters); /*! * @brief Task responsible for printing of "Hello world." message. */ static void my_task(void *pvParameters) { for (;;) { PRINTF("Hello World!\r\n"); vTaskSuspend(NULL); } } /* * @brief Application entry point. */ int main(void) { /* Init board hardware. */ BOARD_InitBootPins(); BOARD_InitBootClocks(); BOARD_InitBootPeripherals(); /* Init FSL debug console. */ BOARD_InitDebugConsole(); if (xTaskCreate(my_task, "my_task", configMINIMAL_STACK_SIZE + 10, NULL, my_task_PRIORITY, NULL) != pdPASS) { PRINTF("Task creation failed!.\r\n"); while (1) ; } vTaskStartScheduler(); /* Enter an infinite loop, just incrementing a counter. */ while(1) { } return 0 ; } Build your application, go to menu Project > Build Project. Alternatively go to the quick start panel and click the hammer button. Go to menu Run> Debug configurations… Select the ‘Debug configuration’ that matches your connection type, in this example Segger Jlink is used. Then click ‘Apply’ and ‘Debug’ Open a terminal, select the appropriate port and set baudrate to 115200 Run the application, you will see “Hello world” in terminal For information about configuring with MCUXpresso pins tool in an FreeRTOS project, please see the following document For information about configuring with MCUXpresso peripheral tool in an FreeRTOS project, please see the following document. A project that is configured to build a simply blink demo will still build all the source files used by the comprehensive demo, even though the simply blink functionality is contained within the single file MyBKExperience. MCUXpresso is really giving me a headache. If I create a new FreeRTOS project for my board (IMX1050 RT), trying to compile the empty project gives me this error: In file included from ../usb/include/usb.h:15:0, from ../usb/host/usb_host.h:12, from ../usb/host/usb_host_framework.c:10: ../osa/usb_osa.h:77:10: fatal error: usb_osa_bm.h: No such file or directory I traced it down to this ifdef in usb_osa.h: /* Include required header file based on RTOS selection */ #if defined(USB_STACK_BM) #include "usb_osa_bm.h" #elif defined(USB_STACK_FREERTOS) #include "usb_osa_freertos.h" But where is USB_STACK_FREERTOS defined? Looking at an RTOS usb example, I discovered it in the XML .cproject file. However, a new RTOS project doesn't correctly include these preprocessor definitions. In this case, I can find and fix the solution, but chasing down these bugs in the IDE is exhausting. Just a heads up.. The SDK examples are defining these defines in the compiler settings, e.g. I hope this helps, Erich
https://community.nxp.com/t5/Kinetis-Software-Development-Kit/How-to-create-an-FreeRTOS-project-with-MCUXpresso-IDE/ta-p/1107710
CC-MAIN-2021-17
refinedweb
836
59.8
Question: I have 2 nested threads. First thread starts multiple instances of second thread. Each second thread has to sleep for some time (5 seconds). I want to start the first thread and return a message to user immediately, but it seems my first thread waits until all the children of second thread to finish. How can I achieve this? Any help? Solution:1 There are some common mistakes when dealing with java.lang.Thread. - Calling runon the thread instead of start. This is nothing magical about the runmethod. - Calling static methods on thread instances. Unfortunately this compiles. A common example is Thread.sleep. sleepis a static method and will always sleep the current thread, even if the code appears to be calling it on a different thread. Rather than dealing with threads directly it is generally better to use a thread pool from java.util.concurrent. Solution:2 What you should probably do, is create a single thread pool via Executors.newCachedThreadPool(). From your main thread submit Runnables (tasks) to the pool. Return to your main thread a list of Futures. In Java there exists enough framework code that one rarely should need to deal with threads directly. Solution:3 It may be helpful to see code. It depends on where you are putting Thread.sleep();. Solution:4 Like someone else has pointed out with Threads you call start() which is a non-blocking call and actually gets the thread rolling. Calling run() will block until the run() method finishes. See the example code below; public class Application { public static void main(String[] args) { FirstThread firstThread = new FirstThread(); firstThread.start(); System.out.println("Main Method ending"); } } public class FirstThread extends Thread { public void run() { for(int i = 0; i < 3; i++) { SecondThread secondThread = new SecondThread(i); secondThread.start(); } System.out.println("FirstThread is finishing"); } } public class SecondThread extends Thread { private int i; public SecondThread(int i) { this.i = i; } public void run() { while(true) { System.out.println("Second thread number " + i + " doing stuff here..."); // Do stuff here... try { Thread.sleep(5000); } catch(InterruptedException ex){ //ignore for sleeping} } } } } Which produces the output: Main Method ending Second thread number 0 doing stuff here... Second thread number 1 doing stuff here... FirstThread is finishing Second thread number 2 doing stuff here... Second thread number 0 doing stuff here... Second thread number 2 doing stuff here... Second thread number 1 doing stuff here... Solution:5 I replaced 'run' with 'start' in both first thread and second thread. It works fine now. Thanks to all who responded with valueble suggestions. Solution:6 public class FirstThread extends Thread { public synchronized void run() { for(int i = 0; i < 3; i++) { System.out.println("i="+i); } System.out.println("FirstThread is finishing"); } } public class SecondThread extends Thread { public synchronized void run() { for(int j = 0; j < 3; i++) { System.out.println("j="+j); } System.out.println("Second Thread is finishing"); } } public class Application { public static void main(String[] args) { FirstThread firstThread = new FirstThread(); SecondThread a=new SecondThread() firstThread.start(); a.start(); } } Output will be: i=0 i=1 i=2 i=3 FirstThread is finishing j=0 j=1 j=2 j=3 Second Thread is finishing Solution:7 You can use the Synchronized keyword which will use to run one thread completely example public synchronized void run() //which thread to run completely { } Note:If u also have question or solution just comment us below or mail us on [email protected] EmoticonEmoticon
http://www.toontricks.com/2018/02/tutorial-how-do-i-make-one-java-thread.html
CC-MAIN-2018-09
refinedweb
576
58.48
Well, in recent times, at least after the introduction of .NET Framework 3.5, the use of Delegates in a program has increased quite a bit. Now almost all people in .NET language must have at least somehow used delegates on their daily programming activities. It might be because of the fact that the use of delegates has been simplified so much with the introduction of lambda expressions in .NET Framework 3.5 and also the flexibility to pass delegates over other libraries for decoupling and inversion of control in applications. Hence, we can say the way of writing code in .NET environment has been changed considerably in recent times. If you want the most simple and somewhat vague idea about delegates, I would say a delegate is actually a reference to a method so that you might use the reference as you use your object reference in your code, you can send the method anywhere in your library or even pass to another assembly for its execution, so that when the delegate is called, the appropriate method body will get executed. Now, to know a more concrete and real life example, I must consider showing you some code: public delegate int mydelegate(int x); public class A { public mydelegate YourMethod { get; set; } public void ExecuteMe(int param) { Console.WriteLine("Starting execution of Method"); if (this.YourMethod != null) { Console.WriteLine("Result from method : {0}", this.YourMethod(param)); } Console.WriteLine("End of execution of Method"); } } static void Main(string[] args) { //int x = 20; //int y = 50; A a1 = new A(); a1.YourMethod = Program.CallMe; a1.ExecuteMe(20); Console.Read(); } public static int CallMe(int x) { return x += 30; } } In the above code, I have explicitly declared a delegate and named it mydelegate. The name of the delegate will indicate that it is a type that can create a reference to a method which can point to a method which has the same signature as defined in it. Clearly, if you quickly go through the code defined above, the property YourMethod can point to a signature which has the same signature as declared to mydelegate. Hence, I can pass a method CallMe easily to Type A, so that when the object of A calls the delegate, it executes the method CallMe. mydelegate YourMethod CallMe CallMe This is very interesting, and has a lot of benefits. Thus to ensure we have strong decoupling between two assemblies, you sometimes need to run a code which must be declared somewhere to the caller (just like event handlers) but the library could call it whenever required. In such a scenario, you might consider the use of delegates comes very handy. As a matter of fact, if you say class A is declared somewhere in base libraries and CallMe is declared in your code, you can easily pass the method to base class library easily. The above code shows few basic rules of declaring a delegate in your code. But this is not the end of the topic. C# programmers want the use of delegate more easily than this. So instead of rotting the code by introducing a method CallMe, we have the flexibility of using anonymous delegates in my code as well. To declare an anonymous delegate, I just need to change: a1.YourMethod = Program.CallMe to: a1.YourCall = delegate(int e) { return e += 30; }; So here I have just written the whole body of CallMe to inline anonymous method construct. You should remember, the concept of anonymous delegate came from .NET Framework 2.0 and it has been widely used in recent times. This is not the end of the topic as well. With the introduction of Lambda expressions, you can even simplify the declaration of a delegate into an expression. Internally, it works the same way while the code will look like an expression. Let's make the declaration a bit simpler using: a1.YourCall = e => e += 30; Here the lambda looks like: e=> e+= 30 e is basically the integer passed to the delegate and the one that comes after => is the body of the return statement. In this way, the way of writing a short method is very easy and super fast. e => return Even with the introduction of Lambda expressions like this, Microsoft has already introduced few generic delegates which can define most of the simple methods of regular use. Action, Action<t>, Action<t1, t2> ..... defines can refer to a method which returns void but take arguments as T1, T2 .... void Similarly, Func<tresult>, Func<t, tresult>, Func<t1, t2, tresult> ..... is for referring to methods which return something in the form of TResult and arguments as T1, T2 .... TResult So instead of writing my own delegate mydelegate, I might better use Func<int, int> to point the same method body. For that, we have to write: Func<int, int> Func<int, int> yourcall = e => e += 30; Hence practically, you need very less delegate declaration in your code, rather you can instantly use these delegate interfaces instantly while you code. The extension methods introduced with IEnumerable and IQueriable used these delegate to let you pass your own delegate construct on them, such that while it runs over the Enumerable will invoke the logic on each element and based on which it produces the output. IEnumerable IQueriable Enumerable To read more about Linq and Lambda expressions, you can refer to my article here. Now as you might have a rough idea now about the usage of delegates and how to declare it in your code, it's time to delve deeper into its actuals. I would use my favourite tool Reflector to quick pick about what it writes in IL when we write our own custom delegate for our application. By nature, a delegate is declared in IL as a class which it derives from System.MulticastDelegate. Following the IL rule, each delegate is a type with abstract implementation of BeginInvoke, EndInvoke and Invoke. System.MulticastDelegate abstract BeginInvoke EndInvoke Invoke So always keep in mind, delegate is ultimately a Type in IL which has special meaning to hold the reference of method into it. To demonstrate the use of a delegate, let's suppose we strip down the above code: static void Main(string[] args) { Func<int,> myfunc = e => e += 30; int result = myfunc(20); Console.WriteLine("Result : {0}", result); Console.Read(); } </int,> Now if you try to see what it actually writes for you in IL, you would be surprised to see the result. Hence, from the above tree, you can see, there are a few things more that have been created in addition to the Main method that you have declared. Main CS$<>9_CachedAnnonymousMethodDelegate1 static Program Program <Main>b_0(int32) Int32 <Main>b_o(int) Invoke Hence, from IL, we can conclude, anonymous delegate is not a feature in terms of IL, it's an adjustment made in C# compiler itself to enable it to generate a type based on the delegate we define, but in terms of language perspective, we can eliminate some portion of code easily which will be handled later on by the compiler itself. Let's test how smart the compiler is, let me introduce a few local variables into the scope. static void Main(string[] args) { int x = 20; int y = 30; int z = 50; Func<int,> myfunc = e => e += (x + y); int result = myfunc(20); Console.WriteLine("Result : {0}", result); Console.Read(); } </int,> Now here we intentionally passed two local variables x and y to the delegate myfunc to check how smart the compiler is to detect the variables, as we already know the compiler generates a normal method for every anonymous delegates. Now if you see the IL, surprisingly, you will come up with a completely new type. The compiler generates a class for you and puts each backing field into it as Data member. x y myfunc Here the <>c_DisplayClass1 represents the type which has been created by the compiler. As in my code, I have used x, and y variable as a part of the delegate, the value of the variables will be created dynamically into the code and hence passed to the method call. The delegate ultimately points to the <Main>b__0 method created within the Type <>c_DisplayClass1. Similarly, to check how smart the C# compiler is, let me create a reference to another method. <>c_DisplayClass1 <Main>b__0 Type <>c_DisplayClass1 static void Main(string[] args) { int x = 20; int y = 30; int z = 50; Func<int,> myfunc = e => e += (x + y); Func<int,> myfunc2 = e => e += (x + z); int result = myfunc(20); Console.WriteLine("Result : {0}", result); Console.Read(); } </int,></int,> Now if you look into the IL, the CompilerGenerated class <>c_DisplayClass1 will also put z into it and also create another method <Main>b__1 into it for 2nd reference. CompilerGenerated class <>c_DisplayClass1 z <Main>b__1 Smart enough, right. Notice, there is special meaning on the Type Name. The name inside angular braces represents the name of the method in which the delegate is declared (in our case, it is Main). An arbitrary name is assigned with a numeric value which indicates the index of each method and hence for unique identification. Type Main Let's point out few facts that we have learnt from the demonstration: System.MulticastDelegate Delegates are very useful while you are about to develop any client - server application or for event based approaches. But delegates most of the times impose compiler generated Types through C# compiler. Hence it is important to keep in mind how the application will behave during actual execution. I hope this post gave you a solid foundation on the so called delegates..
http://www.codeproject.com/Articles/139775/Internals-of-a-Delegate.aspx
CC-MAIN-2015-22
refinedweb
1,611
60.45
The QLocalSocket class provides a local socket. More... #include <QLocalSocket> Inherits QIODevice. This class was introduced in Qt 4.4. The QLocalSocket class provides a local socket. On Windows this is a named pipe and on Unix this is a local domain socket. If an error occurs, socketError(). Note that this feature is not supported on Window 9x. See also QLocalServer. The LocalServerError enumeration represents the errors that can occur. The most recent error can be retrieved through a call to QLocalSocket::error(). This enum describes the different states in which a socket can be. See also QLocalSocket::state(). Creates a new local socket. The parent argument is passed to QObject's constructor. Destroys the socket, closing the connection if necessary. Aborts the current connection and resets the socket. Unlike disconnectFromServer(), this function immediately closes the socket, clearing any pending data in the write buffer. See also disconnectFromServer() and close(). Attempts to make a connection to name. The socket is opened in the given openMode and first enters ConnectingState. It then attempts to connect to the address or addresses returned by the lookup. Finally, if a connection is established, QLocalSocket enters ConnectedState and emits connected(). At any point, the socket can emit error() to signal that an error occurred. See also state(), serverName(), and waitForConnected(). This signal is emitted after connectToServer() has been called and a connection has been successfully established. See also connectToServer() and disconnected(). Attempts to close the socket. If there is pending data waiting to be written, QLocalSocket will enter ClosingState and wait until all data has been written. Eventually, it will enter UnconnectedState and emit the disconnectedFromServer() signal. See also connectToServer(). This signal is emitted when the socket has been disconnected. See also connectToServer(), disconnectFromServer(), abort(), and connected(). Returns the type of error that last occurred. See also state() and errorString().. See also error() and errorString().(). Returns the server path that the socket is connected to. Note: This is platform specific See also connectToServer() and serverName(). Returns true if the socket is valid and ready for use; otherwise returns false. Note: The socket's state must be ConnectedState before reading and writing can occur. See also state().(). Returns the name of the peer as specified by connectToServer(), or an empty QString if connectToServer() has not been called or it failed. See also connectToServer() and fullServerName(). Sets the size of QLocalSocket's internal read buffer to be size bytes. If the buffer size is limited to a certain size, QLocalSocket. See also readBufferSize() and read(). Initializes QLocalSocket local sockets with the same native socket descriptor. See also socketDescriptor(), state(), and openMode(). Returns the native socket descriptor of the QLocalSocket object if this is available; otherwise returns -1. The socket descriptor is not available when QLocalSocket is in UnconnectedState. See also setSocketDescriptor(). Returns the state of the socket. See also error().. See also state(). Waits until the socket is connected, up to msecoServer("market"); if (socket->waitForConnected(1000)) qDebug("Connected!"); If msecs is -1, this function will not time out. See also connectToSerServer(); if (socket->waitForDisconnected(1000)) qDebug("Disconnected!"); If msecs is -1, this function will not time out. See also disconnectFromServer() and close().). Reimplemented from QIODevice. See also waitForBytesWritten().
http://doc.trolltech.com/4.5-snapshot/qlocalsocket.html
crawl-003
refinedweb
532
54.18
On Sun, Mar 6, 2011 at 9:46 AM, Lennart Regebro <regebro at gmail.com> wrote: > I've started working on a little utility to give a quality rating on > packages, expressed in 0-10 points, and also in cheese types, > according to smellyness. > > It's going to check for things like that it has all meta data it > should have, such as author_email, specifies Python versions via the > trove classifiers (currently works) and that it specifies all > dependencies (still todo). It will support both checking on a package > (works currently) a distribution file and PyPI (still to do). > > It's not a uniqe idea, it overlaps with Andreas Jungs > zopyx.trashfinder in scope, and it will also in the case of checking a > package on PyPI check that there are several people that have owner > access, and hence include the functionality of mr.parker. (In fact > when checking on PyPI it will also check if there are documentation on > packages.python.org, that the distribution files are uploaded to PyPI, > etc, but this is all still todo). But I didn't find anything else, and > I wanted bigger scopes than both these in what to check in and which > cases. Reminds me a bit about "CheeseCake" > > But, before I move this to a public repository and upload it to PyPI, > there is one important thing to be determined: What should it be > called? Currently I'm calling it "pypilib.quality". I don't mind this > kind of boring names, but there is currently not a pypilib namespace, > and I don't want to just create top level namespaces left and right > for no reason. So other names are welcome. It doesn't have to have a > namespace either. Camembert ? :D > > In the long run I would not mind to see this utility integrated into a > general pypi/cheeseshop script with other utility commands, which even > could include installing and removing, thusly giving Perl people what > they think they want a "CPAN" for Python. :-) > > -- > Lennart Regebro: > The Python 3 Porting book is out: > +33 661 58 14 64 > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > > -- Tarek Ziadé |
https://mail.python.org/pipermail/distutils-sig/2011-March/017409.html
CC-MAIN-2014-10
refinedweb
359
68.2
A Hashtable uses a (Key, Value) pair when storing the data, and uses the Key to find the requested data item. The Key value is immutable , and cant be duplicated in a Hashtable Class. In this tutorial we will create a Song Class, this class will be stored in a Hashtable, and we will use the name of the song for the key. Creating The Class For our Song Class, since it is going to be a simple class with but 2 Propeties, the only Namespace we need to Import is the System Namespace, this is the base of all other Namespaces, to import it do: using System; In this class we will have 2 Properties, one for the path to the song, one for the name of the song. As with all properties, we will need 2 private variables which will hold the values of our Properties for us. The variables are private as we don't want the user to be able to change their value directly. Our 2 properties are Read/Write Properties, meaning they have a get and a set. The Get is used to retrieve the value of the property while the Set is used to set the value of the property: #region Property Variables /// <summary> /// path property variable /// </summary> private string _path; /// <summary> /// name property variable /// </summary> private string _name; #endregion #region Class Properties /// <summary> /// property to hold the path to the song /// </summary> public string Path { get { return _path; } set { _path = value; } } /// <summary> /// property to hold the name of the song /// </summary> public string Name { get { return _name; } set { _name = value; } } #endregion NOTE: You will notice my code is seperated into sections with the #region...#endregion specifier. This allows me to divide similar sections of code, making it easier when searching through my code. Next we will use a Constructor. In C# when you create a class as we are it is assumed by the Framework that a default Constructor is present, in fact if you do not add one, the default empty Constructor is assumed if you will. We are going to add a Constructor though, we will be setting the values of our property variables in our constructor, like so: #region Constructor /// <summary> /// class constructor to set the values of our class properties /// </summary> /// <param name="path">path to the song</param> /// <param name="name">name of the song</param> public Song(string path, string name) { //set the value of the path property _path = path; //set the value of the name property _name = name; } #endregion We have one more method in our sample class. With this method we are going to override the default ToString Method. Doing this will allow us to return the full value of the song, path and name, when we get to retrieving the values of our class. In .Net we are allowed to override base methods, as long as we use the override Modifier. Our new ToString method looks like this: #region ToString /// <summary> /// here we will override the ToString method in the .Net /// Framework so when this class is used and called, ToString() /// will return the path & name of the song /// </summary> /// <returns></returns> public override string ToString() { return _path + "\\" + _name; } #endregion Now that we have our Song Class complete, we will next look at how we can use it, along with the Hashtable Class, to make storing and retrieving data easier and more efficient. Using The Class First thing we will be doing is creating an instance of our Song Class, and a Hashtable Object for holding our Song objects. In this tutorial I am referencing a TreeView which I have populated with my music library, but you do not have to use it in this manner, this is just for demonstration purposes. What I did was I created a global variable which is an instance of the Hashtable Class, and an instance of our Song Class: //Hashtable object to hold our Song objects Hashtable songs = new Hashtable(); //Create an instance of our Song class Song song; I then loop through each checked item in my TreeView Control, and if it has been checked, I create a new Song Object, then add it to my Hashtable, like so: /// <summary> /// method for adding all checked items in the TreeView to /// my Hashtable Object /// </summary> private void AddSongs() { //extensions we want to load string[] extensions = { ".mp3", ".wma", ".mp4", ".wav" }; //now we will loop through all the checked //items in our TreeView foreach(TreeNode items in tvCurrentLists.Nodes) { //make sure the node's CheckBox is checked if (items.Checked == true) { //now we will make sure what we're adding is actually a song //to do this we will loop through our extensions list for (int i = 0; i < extensions.Length; i++) { //now check the value of the current node if (items.FullPath.Contains(extensions[i])) { //since it's a song we can now add it to our //Hashtable, only after we create a Song //Object out of it song = new Song("YourPath", items.FullPath); //now add it to our Hashtable songs.Add(song.Name, song); } } } } } Since we now have our Song Object populated, displaying the values is simply. We will loop through each Song in our Hashtable object, pass in our Key to the indexer, which is the name of the song, then display each song name in a MessageBox, like so: private void cmdDisplay_Click(object sender, EventArgs e) { //now we will use the Hashtable indexer to display the //values of our Hashtable Object foreach(Song item in songs) { //display the value of each song MessageBox.Show(songs[item.Name].ToString()); } } Another way we can achieve the same results would be to use the IDictionaryEnumerator Interface to enumerate through each item in our Hashtable object. To do this we will create an instance of the IDictionaryEnumerator Interface, then use the GetEnumerator Method of the Hashtable Class. To enumerate through the objects in our Hashtable, and to ensure we don't cause any Exceptions by attempting to go past the end of our list, we'll use the MoveNext Method, which equates to true or false depending on the position of the enumerator, like this: private void cmdDisplay_Click(object sender, EventArgs e) { //variable to hold the list of songs string list = string.Empty; //create an instance of the IDictionaryEnumerator Interface IDictionaryEnumerator enumerator; //make sure our Hashtable holds items if (songs.Count > 0) { //let the user know what we're doing MessageBox.Show("The following songs are in your list:"); //now set out IDictionaryEnumerator value enumerator = songs.GetEnumerator(); //now use the MoveNext Method to iterate through our list while (enumerator.MoveNext()) { //keep adding song names until we reach the end list += enumerator.Value.ToString() + "\r\n"; } //now show the song names MessageBox.Show(list); } } That is how you can effectively use the Hashtable Collection in C#. This tutorial isn't meant to be an all inclusive tutorial, there are many more methods available to you in the Hashtable Class, but I am hoping this will give you enough information to get started using the Hashtable Class, and the IDictionary and IDictionaryEnumerator Interfaces. As you can see the Hashtable is an efficient way ot storing and retrieving data in your applications, and uses less overhead than other methods because it uses the (Key, Value) pairs to store the data, thus only requiring the Key to retrieve it's value. Using this method doesn't require traditional search methods. I hope you found this tutorial informative and useful, and thank you for reading Happy Coding!
http://www.dreamincode.net/forums/topic/44097-using-the-c%23-hashtable-collection/
CC-MAIN-2017-04
refinedweb
1,254
59.37
So far, all of our examples only did their work on page load. As you probably guessed, that isn't normal. In most apps, especially the kind of UI-heavy ones we will be building, there is going to be a ton of things the app does only as a reaction to something. That something could be triggered by a mouse click, a key press, window resize, or a whole bunch of other gestures and interactions. The glue that makes all of this possible is something known as events. Now, you probably know all about events from your experience using them in the DOM world. (If you don't, then I suggest getting a quick refresher first.) The way React deals with events is a bit different, and these differences can surprise you in various ways if you aren't paying close attention. Don't worry. That's why you have this tutorial. We will start off with a few simple examples and then gradually look at increasingly more bizarre, complex, and (yes!) boring things. Onwards! OMG! A React Book Written by Kirupa?!! To kick your React skills up a few notches, everything you see here and more (with all its casual clarity!) is available in both paperback and digital editions.BUY ON AMAZON Listening and Reacting to Events The easiest way to learn about events in React is to actually use them, and that's exactly what we are going to! To help with this, we have a simple example made up of a counter that increments each time you click on a button. Initially, our example will look like this: Each time you click on the plus button, the counter value will increase by 1. After clicking the plus button a bunch of times, it will look sorta like this: Under the covers, the way this example works is pretty simple. Each time you click on the button, an event gets fired. We listen for this event and do all sorts of React-ey things to get the counter to update when this event gets overheard. Starting Point To save all of us some time, we aren't going to be creating everything in our example from scratch. By now, you probably have a good idea of how to work with components, styles, state, and so on. Instead, we are going to start off with a partially implemented example that contains everything except the event-related functionality that we are here to learn. First, create a new HTML document and ensure your starting point looks as follows: <!DOCTYPE html> <html> <head> <title>React! React! React!</title> <script src=""></script> <script src=""></script> <script src=""></script> <style> #container { padding: 50px; background-color: #FFF; } </style> </head> <body> <div id="container"></div> <script type="text/babel"> </script> </body> </html> Once your new HTML document looks like what you see above, it's time to add our partially implemented counter example. Inside our script tag below the container div, add the following: var destination = document.querySelector("#container"); var Counter = React.createClass({ render: function() { var textStyle = { fontSize: 72, fontFamily: "sans-serif", color: "#333", fontWeight: "bold" }; return ( <div style={textStyle}> {this.props.display} </div> ); } }); var CounterParent = React.createClass({ getInitialState: function() { return { count: 0 }; }, style={buttonStyle}>+</button> </div> ); } }); ReactDOM.render( <div> <CounterParent/> </div>, destination ); Once you have added all of this, preview everything in your browser to make sure things work. You should see the beginning of our counter. Take a few moments to look at what all of this code does. There shouldn't be anything that looks strange. The only odd thing will be that clicking the plus button won't do anything. We'll fix that right up in the next section. Making the Button Click Do Something Each time we click on the plus button, we want the value of our counter to increase by one. What we need to do is going to roughly look like this: - Listen for the click event on the button. - When a click event is overheard, specify the event handler that will deal with it. - Actually implement the event handler where we increase the value of our this.state.count property that our counter relies on. We'll just go straight down the list...starting with listening for the click event. In React, you listen to an event by specifying everything inline in your JSX itself. More specifically, you specify both the event you are listening for and the event handler that will get called all inside your markup. To do this, find the return function inside our CounterParent component, and make the following highlighted change: . . . return ( <div style={backgroundStyle}> <Counter display={this.state.count}/> <button onClick={this.increase} style={buttonStyle}>+</button> </div> ); What we've done is told React to call the increase function when the onClick event is overheard. Next, let's go ahead and implement the increase function - aka our event handler. Inside our CounterParent component, add the following highlighted lines: var CounterParent = React.createClass({ getInitialState: function() { return { count: 0 }; }, increase: function(e) { this.setState({ count: this.state.count + 1 }); }, onClick={this.increase} style={buttonStyle}>+</button> </div> ); } }); All we are doing with these lines is making sure that each call to the increase function increments the value of our this.state.count property by 1. Because we are dealing with events, your increase function (as the designated event handler) will get access to any event arguments. We have set these arguments to be accessed by e, and you can see that by looking at our increase function's signature (aka what its declaration looks like). We'll talk about the various events and their properties in a little bit. Now, go ahead and preview what you have in your browser. Once everything has loaded, click on the plus button to see all of our newly added code in action. Our counter value should increase with each click! Isn't that pretty awesome? Event Properties As you know, our events pass what are known as event arguments to our event handler. These event arguments contain a bunch of properties that are specific to the type of event you are dealing with. In the regular DOM world, each event has its own type. For example, if you are dealing with a mouse event, your event and its event arguments object will be of type MouseEvent. This MouseEvent object will allow you to access mouse-specific information like which button was pressed or the screen position of the mouse click. Event arguments for a keyboard-related event are of type KeyboardEvent. Your KeyboardEvent object contains properties which (among many other things) allow you to figure out which key was actually pressed. I could go on forever for every other Event type, but you get the point. Each Event type contains its own set of properties that you can access via the event handler for that event! Why am I boring you with things you already know? Well.. Meet Synthetic Events In React, when you specify an event in JSX like we did with onClick, you are not directly dealing with regular DOM events. Instead, you are dealing with a React-specific event type known as a SyntheticEvent. Your event handlers don't get native event arguments of type MouseEvent, KeyboardEvent, etc. They always get event arguments of type SyntheticEvent that wrap your browser's native event instead. What is the fallout of this in our code? Surprisingly not a whole lot. Each SyntheticEvent contains the following properties: These properties should seem pretty straightforward...and generic! The non-generic stuff depends on what type of native event our SyntheticEvent is wrapping. This means that a SyntheticEvent that wraps a MouseEvent will have access to mouse-specific properties such as the following: boolean altKey number button number buttons number clientX number clientY boolean ctrlKey boolean getModifierState(key) boolean metaKey number pageX number pageY DOMEventTarget relatedTarget number screenX number screenY boolean shiftKey Similarly, a SyntheticEvent that wraps a KeyboardEvent will have access to these additional keyboard-related properties: boolean altKey number charCode boolean ctrlKey boolean getModifierState(key) string key number keyCode string locale number location boolean metaKey boolean repeat boolean shiftKey number which In the end, all of this means that you still get the same functionality in the SyntheticEvent world that you had in the vanilla DOM world. Now, here is something I learned the hard way. Don't refer to traditional DOM event documentation when using Synthetic events and their properties. Because the SyntheticEvent wraps your native DOM event, events and their properties may not map one-to-one. Some DOM events don't even exist in React. To avoid running into any issues, if you want to know the name of a Synthetic event or any of its properties, refer to the React Event System document instead. Doing Stuff With Event Properties By now, you've probably seen more about the DOM and Synthetic events than you'd probably like. To wash away the taste of all that text, let's write some code and put all of this new found knowledge to good use. Right now, our counter example increments by one each time you click on the plus button. What we want to do is increment our counter by ten when the Shift key on the keyboard is pressed while clicking the plus button with our mouse. The way we are going to do that is by using the shiftKey property that exists on the SyntheticEvent when using the mouse: boolean altKey number button number buttons number clientX number clientY boolean ctrlKey boolean getModifierState(key) boolean metaKey number pageX number pageY DOMEventTarget relatedTarget number screenX number screenY boolean shiftKey The way this property works is simple. If the Shift key is pressed when this mouse event fires, then the shiftKey property value is true. Otherwise, the shiftKey property value is false. To increment our counter by 10 when the Shift key is pressed, go back to our increase function and make the following highlighted changes: increase: function(e) { var currentCount = this.state.count; if (e.shiftKey) { currentCount += 10; } else { currentCount += 1; } this.setState({ count: currentCount }); }, Once you've made the changes, preview our example in the browser. Each time you click on the plus button, your counter will increment by one just like it had always done. If you click on the plus button with your Shift key pressed, notice that our counter increments by 10 instead. The reason that all of this works is because we change our incrementing behavior depending on whether the Shift key is pressed or not. That is primarily handled by the following lines: if (e.shiftKey) { currentCount += 10; } else { currentCount += 1; } If the shiftKey property on our SyntheticEvent event argument is true, we increment our counter by 10. If the shiftKey value is false, we just increment by 1. More Eventing Shenanigans We are not done yet! Up until this point, we've looked at how to work with events in React in a very simplistic way. In the real world, rarely will things be as direct as what we've seen. Your real apps will be more complex, and because React insists on doing things differently, we'll need to learn (or re-learn) some new event-related tricks and techniques to make our apps work. That's where this section comes in. We are going to look at some common situations you'll run into and how to deal with them. You Can't Directly Listen to Events on Components Let's say your component is nothing more than a button or another type of UI element that users will be interacting with. You can't get away with doing something like what we see in the following highlighted line: var CounterParent = React.createClass({ getInitialState: function() { return { count: 0 }; }, increase: function() { this.setState({ count: this.state.count + 1 }); }, render: function() { return ( <div> <Counter display={this.state.count}/> <PlusButton onClick={this.increase}/> </div> ); } }); On the surface, this line of JSX looks totally valid. When somebody clicks on our PlusButton component, the increase function will get called. In case you are curious, this is what our PlusButton component looks like: var PlusButton = React.createClass({ render: function() { return ( <button> + </button> ); } }); Our PlusButton component doesn't do anything crazy. It only returns a single HTML element! No matter how you slice and dice this, none of this matters. It doesn't matter how simple or obvious the HTML we are returning via a component looks like. You simply can't listen for events on them directly. The reason is because components are wrappers for DOM elements. What does it even mean to listen for an event on a component? Once your component gets unwrapped into DOM elements, does the outer HTML element act as the thing you are listening for the event on? Is it some other element? How do you distinguish between listening for an event and declaring a prop you are listening for? There is no clear answer to any of those questions. It's too harsh to say that the solution is to simply not listen to events on components either. Fortunately, there is a workaround where we treat the event handler as a prop and pass it on to the component. Inside the component, we can then assign the event to a DOM element and set the event handler to the the value of the prop we just passed in. I realize that probably makes no sense, so let's walk through an example. Take a look at the following highlighted line: var CounterParent = React.createClass({ . . . render: function() { return ( <div> <Counter display={this.state.count}/> <PlusButton clickHandler={this.increase}/> </div> ); } }); In this example, we create a property called clickHandler whose value is the increase event handler. Inside our PlusButton component, we can then do something like this: var PlusButton = React.createClass({ render: function() { return ( <button onClick={this.props.clickHandler}> + </button> ); } }); On our button element, we specify the onClick event and set its value to the clickHandler prop. At runtime, this prop gets evaluated as our increase function, and clicking the plus button ensures the increase function gets called. This solves our problem while still allowing our component to participate in all this eventing goodness! Listening to Regular DOM Events If you thought the previous section was a doozy, wait till you see what we have here. Not all DOM events have SyntheticEvent equivalents. It may seem like you can just add the on prefix and capitalize the event you are listening for when specifying it inline in your JSX: var Something = React.createClass({ handleMyEvent: function(e) { // do something }, render: function() { return ( <div myWeirdEvent={this.handleMyEvent}>Hello!</div> ); } }); It doesn't work that way! For those events that aren't officially recognized by React, you have to use the traditional approach that uses addEventListener with a few extra hoops to jump through. Take a look at the following section of code: var Something = React.createClass({ handleMyEvent: function(e) { // do something }, componentDidMount: function() { window.addEventListener("someEvent", this.handleMyEvent); }, componentWillUnmount: function() { window.removeEventListener("someEvent", this.handleMyEvent); }, render: function() { return ( <div>Hello!</div> ); } }); We have our Something component that listens for an event called someEvent. We start listening for this event under the componentDidMount method which is automatically called when our component gets rendered. The way we listen for our event is by using addEventListener and specifying both the event and the event handler to call: var Something = React.createClass({ handleMyEvent: function(e) { // do something }, componentDidMount: function() { window.addEventListener("someEvent", this.handleMyEvent); }, componentWillUnmount: function() { window.removeEventListener("someEvent", this.handleMyEvent); }, render: function() { return ( <div>Hello!</div> ); } }); That should be pretty straightforward. The only other thing you need to keep in mind is removing the event listener when the component is about to be destroyed. To do that, you can use the opposite of the componentDidMount method, the componentWillUnmount method. Inside that method, put your removeEventListener call there to ensure no trace of our event listening takes place after our component goes away. The Meaning of this Inside the Event Handler When dealing with events in React, the value of this inside your event handler is different than what you would normally see in the non-React DOM world. In the non-React world, the value of this inside an event handler refers to the element that fired the event: function doSomething(e) { console.log(this); //button element } var foo = document.querySelector("button"); foo.addEventListener("click", doSomething, false); In the React world (when your components are created using React.createClass), the value of this inside your event handler always refers to the component the event handler lives in: var CounterParent = React.createClass({ getInitialState: function() { return { count: 0 }; }, increase: function(e) { console.log(this); // CounterParent component this.setState({ count: this.state.count + 1 }); }, render: function() { return ( <div> <Counter display={this.state.count}/> <button onClick={this.increase}>+</button> </div> ); } }); In this example, the value of this inside the increase event handler refers to the CounterParent component. It doesn't refer to the element that triggered the event. You get this behavior because React automatically binds all methods inside a component to this. This autobinding behavior only applies when your component is created using React.createClass. If you are using ES6 classes to define your components, the value of this inside your event handler is going to be undefined unless you explicitly bind it yourself: <button onClick={this.increase.bind(this)}>+</button> There is no autobinding magic that happens in ES6, so be sure to keep that in mind if you aren't using React.createClass to create your components. React...why? Why?! Before we call it a day, let's use this time to talk about why React decided to deviate from how we've worked with events in the past. There are two reasons: - Browser Compatibility - Improved Performance Let's elaborate on these two reasons a little bit. Browser Compatibility Event handling is one of those things that works consistently in modern browsers, but once you go back to older browser versions, things get really bad really quickly. By wrapping all of the native events as an object of type SyntheticEvent, React frees you from dealing with event handling quirks that you will end up having to deal with otherwise. Improved Performance In complex UIs, the more event handlers you have, the more memory your app takes up. Manually dealing with that isn't difficult, but it is a bit tedious as you try to group events under a common parent. Sometimes, that just isn't possible. Sometimes, the hassle doesn't outweigh the benefits. What React does is pretty clever. React never attaches event handlers to the DOM elements directly. It uses one event handler at the root of your document that is responsible for listening to all events and calling the appropriate event handler as necessary: This frees you from having to deal with optimizing your event handler-related code yourself. If you've manually had to do that in the past, you can relax knowing that React takes care of that tedious task for you. If you've never had to optimize event handler-related code yourself, consider yourself lucky :P Conclusion You'll spend a lot of time dealing with events, and this tutorial threw a lot of things at you. We started by learning the basics of how to listen to events and specify the event handler. Towards the end, we were all the way in the deep end and looking at eventing corner cases that you will bump into if you aren't careful enough. You don't want to bump into corners. That is never!
https://www.kirupa.com/react/events_in_react.htm
CC-MAIN-2017-09
refinedweb
3,280
64.1
01 March 2010 15:51 [Source: ICIS news] WASHINGTON (ICIS news)--US manufacturing activity increased in February for the seventh straight month, a key survey reported on Monday, although the rate of improvement was not as strong as that seen in January. The Institute for Supply Management (ISM) said its closely watched purchasing managers index (PMI) was at 56.5% in February. That rate of expansion is less than the PMI of 58.4% recorded in January but it nonetheless shows that the broad ?xml:namespace> A PMI reading of 50% or higher indicates that the manufacturing industry is expanding. A reading below 50% means that the sector is contracting. The index is based on a survey of purchasing managers in 18 key manufacturing industries, including chemicals and plastics, with questions on their new orders, production, employment and other measures. In March 2009, the purchasing managers index was at 36.4% and has seen more or less steady improvement since then. The PMI moved above the crucial 50% mark in August last year with a reading of 52.8% and has been above 50% since. The Norbert Ore, the institute’s PMI survey manager, noted that while February’s results show that manufacturers’ new orders and production were not as strong as in January, “they still show significant month-over-month growth”. “Additionally, the employment index is very encouraging, as it is up 2.8 percentage points for the month to 56.1%,” he said. “With these levels of activity, manufacturers are seemingly willing to hire where they have orders to support higher employment.”
http://www.icis.com/Articles/2010/03/01/9338848/us-manufacturing-grows-for-7th-month-in-feb-but-slower.html
CC-MAIN-2014-42
refinedweb
264
56.45
Modules in Python – Types and Examples Like many other programming languages, Python supports modularity. That is, you can break large code into smaller and more manageable pieces. And through modularity, Python supports code reuse. You can import modules in Python into your programs and reuse the code therein as many times as you want. Keeping you updated with latest technology trends, Join TechVidvan on Telegram What are Python Modules? Modules provide us with a way to share reusable functions. A module is simply a “Python file” which contains code we can reuse in multiple Python programs. A module may contain functions, classes, lists, etc. Modules in Python can be of two types: - Built-in Modules. - User-defined Modules. 1. Built-in Modules in Python One of the many superpowers of Python is that it comes with a “rich standard library”. This rich standard library contains lots of built-in modules. Hence, it provides a lot of reusable code. To name a few, Python contains modules like “os”, “sys”, “datetime”, “random”. You can import and use any of the built-in modules whenever you like in your program. (We’ll look at it shortly.) 2. User-Defined Modules in Python Another superpower of Python is that it lets you take things in your own hands. You can create your own functions and classes, put them inside modules and voila! You can now include hundreds of lines of code into any program just by writing a simple import statement. To create a module, just put the code inside a .py file. Let’s create one. # my Python module def greeting(x): print("Hello,", x) Write this code in a file and save the file with the name mypymodule.py. Now we have created our own module. Half of our job is over, now let’s learn how to import these modules. Importing Modules in Python We use the import keyword to import both built-in and user-defined modules in Python. Let’s import our user-defined module from the previous section into our Python shell: >>> import mypymodule To call the greeting function of mypymodule, we simply need to use the dot notation: >>> mypymodule.greeting("Techvidvan") Output Similarly, we can import mypymodule into any Python file and call the greeting function as we did above. Let’s now import a built-in module into our Python shell: >>> import random To call the randint function of random, we simply need to use the dot notation: >>> random.randint(20, 100) Output The randint function of the random module returns a random number between a given range, here (20 to 100). We can import modules in various different ways to make our code more Pythonic. Using import…as statement (Renaming a module) This lets you give a shorter name to a module while using it in your program. >>> import random as r >>> r.randint(20, 100) Output Using from…import statement You can import a specific function, class, or attribute from a module rather than importing the entire module. Follow the syntax below, from <modulename> import <function> >>> from random import randint >>> randint(20, 100) 69 You can also import multiple attributes and functions from a module: >>> from math import pi, sqrt >>> print(3 * pi) 9.42477796076938 >>> print(sqrt(100)) 10.0 >>> Note that while importing from a module in this way, we don’t need to use the dot operator while calling the function or using the attribute. Importing everything from Python module If we need to import everything from a module and we don’t want to use the dot operator, do this: >>> from math import * >>> print(3 * pi) 9.42477796076938 >>> print(sqrt(100)) 10.0 >>> Python dir() function The dir() function will return the names of all the properties and methods present in a module. >>> import random >>> dir(random) ['BPF', 'LOG4', 'NV_MAGICCONST', 'RECIP_BPF', 'Random', 'SG_MAGICCONST', 'SystemRandom', 'TWOPI', '_Sequence', '_Set', '__all__', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__spec__', '_accumulate', '_acos', '_bisect', '_ceil', '_cos', '_e', '_exp', '_inst', '_log', '_os', '_pi', '_random', '_repeat', '_sha512', '_sin', '_sqrt', '_test', '_test_generator', '_urandom', '_warn', 'betavariate', 'choice', 'choices', '] >>> Reloading Modules in Python If you have already imported a module but need to reload it, use the reload() method. This is intended to be used in cases when you edit a source file of a module and need to test it without leaving Python. In Python 3.0 and above, you need to import imp standard library module to make use of this function. >>> import random >>> import imp >>> imp.reload(random) Wrapping Up! This brings us to the end of our article on modules. Modules are a great deal in programming languages as they give us the ability to reuse code in a much more manageable way. They say, “Python comes with batteries included”. By batteries, they mean modules. Modules are everything you need to get going. All you have to do is code.
https://techvidvan.com/tutorials/modules-in-python/
CC-MAIN-2020-45
refinedweb
813
64.3
Here is a quick script to show how you could capture images to do some responsive testing. The test is super quick and you end up with 4 images that you can look at to see if things are put on the page correctly. I chose some break points that made sense to me. I think if you are using bootstrap your breakpoints might be different. Just change the width to whatever you want to check. from selenium import webdriver from selenium.webdriver.common.by import By import time driver = None def checkit(w,filename): driver.set_window_size(w,HEIGHT) driver.get("") driver.save_screenshot(filename) try: cpath = "e:\\projects\\headless\\chromedriver.exe" driver = webdriver.Chrome(cpath) HEIGHT = 768 checkit(600,"test0.png") checkit(900,"test1.png") checkit(1200,"test2.png") checkit(1800,"test3.png") finally: if driver is not None: driver.quit() ` If you have read the other posts I have done this code is slightly different. I made a function called checkit to make the code more readable. Discussion (0)
https://dev.to/tonetheman/responsive-testing-with-selenium-and-python-56nl
CC-MAIN-2021-39
refinedweb
170
68.97
Issue Type: Bug Created: 2011-02-17T05:48:44.000+0000 Last Updated: 2012-11-16T18:41:30.000+0000 Status: Open Fix version(s): Reporter: Tomas Creemers (tomasc) Assignee: Ralph Schindler (ralph) Tags: - Zend_Db Related issues: Attachments: Zend_Db_Adapter_Abstract specifies all subclasses need to have the isConnected() function. The docblock indicates: "Test if a connection is active". The docblock in Zend_Db_Pdo_Abstract says the same. However, the implementation in Zend_Db_Pdo_Abstract (which is not overridden in any of its subclasses) is as follows: <pre class="highlight"> public function isConnected() { return ((bool) ($this->_connection instanceof PDO)); } This only checks if $this->_connection was at one point a succesfully created PDO object (i.e. at one point there was a connection). It does not test the connection. If the connection drops, the PDO object and $this->_connection do not get unset. Therefore, this function will return true even if the adapter is not connected. Possible fixes: The only way I've found to reliably test a PDO connection is to use it to execute a statement (for example, "SELECT 1"; something even more generic would be even better). If this fails (carefully considering the current PDO::ATTR_ERRMODE-setting of the PDO object), it is an indication the connection might be severed. It is not fool proof, however, because it might also fail on some databases if the user does not have permission to execute select statements. (This does not fail on Mysql, however.) Posted by Tomas Creemers (tomasc) on 2011-02-17T05:53:21.000+0000 added formatting tags around code snippet Posted by Tomas Creemers (tomasc) on 2011-02-17T05:54:15.000+0000 correcting formatting tags for code snippet Posted by Tomas Creemers (tomasc) on 2011-02-17T06:11:46.000+0000 adding additional info to the problems of the proposed fix Posted by Rob Allen (rob) on 2012-11-16T18:41:30.000+0000 I think that the overhead of performing a SQL query against the db within isConnected() is too big a change at this stage in ZF's life.
https://framework.zend.com/issues/browse/ZF-11084
CC-MAIN-2017-30
refinedweb
340
64.2
creating index for xml files - XML creating index for xml files I would like to create an index file... 30-50 records. I would like to retrieve that xml files from the directory one... cases, more than one file may have same name. So, my index file would be like Body Mass Index (BMI) Java: Body Mass Index (BMI) The Body Mass Index program is divided into two files, the main program is in one file, and the GUI and logic are in another. They could also be in one file Read XML in java - XML Read XML in java Hi Deepak, I want to read a xml file which have only one element with multiple attributes with same tag name. here is my file... node,int index) { String nodeName = node.getNodeName(); String XML XML please tell me how i remove one tag out of all similar type of tags in xml xml or attribute names from more than one XML vocabulary. If each vocabulary is given...xml what is name space,xml scema give an example for each XML Namespaces provide a method to avoid element name conflicts.They are used parser xml one page to another parser xml one page to another parser xml one page to another Mapping using XML In this section, you will learn One to One mapping of table in Hibernate using Xml | JAVA DOM Tutorial | XML Tutorial | XML Forms Generator | EAI Articles index XML Schema Schemas is one of its greatest strength. XML Schema has many advantages: 1...; <short-desc>This book is good for beginners in XML. This book helps...;This book is good for beginners in XML. This book helps beginners to learn xml XML Tutorial are far from flexible. XML is a good replacement for EDI. It uses the Internet.... It is the document I would have used to learn XML myself. Prerequisites are a good working... XML Tutorial XML Tutorial: XML stands for EXtensible Markup Spliting Large XML and one by one Moved to Output Folder Spliting Large XML and one by one Moved to Output Folder How to Spliting Large XML and one by one Moved to Output Folder jsp one to one mapping jsp one to one mapping how to perform one to one mapping in jsp....code of one to one mapping with .xml file Mysql Date Index Mysql Date Index Mysql Date Index is used to create a index on specified table. Indexes in database are similar to books in library. It is created on one or more Hibernate 4 One to Many mapping using XML In this section, you will learn how to do one to many mapping of tables in Hibernate using Xml Apply more than one xsl for single XML Apply more than one xsl for single XML How to apply more than one xsl for same xml file Java Programming: Chapter 10 Index is only one possible source of information and only one possible destination... deal with the exception if one occurs. Contents of Chapter 10... Chapter | Previous Chapter | Main Index Java Programming: Chapter 2 Index can't write programs, no matter how good you get at designing their large... | Next Chapter | Previous Chapter | Main Index Java Programming: Chapter 9 Index also look at exceptions, one of the tools that Java provides as an aid in writing... Chapter | Previous Chapter | Main Index Java Programming: Chapter 11 Index to one object can be stored in an instance variable of another object... | Previous Chapter | Main Index Java Programming: Chapter 8 Index in Chapter 11. But there is one type of data structure that is so important.... The items in an array can belong to one of Java's primitive types. They can... [ First Section | Next Chapter | Previous Chapter | Main Index Java Programming: Chapter 4 Index Chapter 4 Programming in the Large I Subroutines ONE WAY TO BREAK UP A COMPLEX PROGRAM into manageable pieces is to use subroutines... | Main Index Flex Skin index ;?xml version="1.0"?> <mx:Application xmlns:mx="http... :- This is the .mxml application code <?xml version="1.0"?> <...; 3.Stateful skin:- you could add transitions between one state Hibernate One-to-many Relationships - One to many example code in Hibernate using the xml file as metadata. Here...; <index column="idx" /> <one-to-many... Hibernate One-to-many Relationships how to create a xml page how to create a xml page <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML...;html <..." content="index,follow" /> <link rel="stylesheet" href="images Java + XML - XML Java + XML 1) I have some XML files, read one xml..." pointing to a hello xml file ..read that file and get the value of the parent... java...the attribute value..not sure now how to read the xml file passing Top 10 Tips for Good Website Design Designing a good website as to come up with all round appreciation, traffic... presence. Good website design tips reflect on the art of mastering the traffic... by the webmaster. Good website design is a continuous process that needs you to evaluate XML Schema XML Schema Good noon. Please inform me how can we send the data from xml fie to java class as an object and Vice-Versa. And wich technology we can use. Thank you JPA One-to-One Relationship JPA One-to-One Relationship  ... will learn about the one-to-one relationship. In the one-to-one relation mapping a single value association to another entity. One-to-One: In one-to-one XML namespace XML namespace hi... please anyone tell me about Can I use the same prefix for more than one XML namespace? Thanks What is Index? What is Index? What is Index Programming: Body Mass Index - Dialog Java NotesProgramming: Body Mass Index - Dialog Name... Mass Index (BMI). BMI is a commonly used formula which shows the relationship... = 2.54 centimeters and one pound is 0.454 kilograms. References XML Interviews Question page23,xml Interviews Guide,xml Interviews XML Interviews Question page23  ... to be different from the prefixes in my DTD? One of the problems with the solution... throughout the document. (This is a good practice anyway, so it's not too much Hibernate One-to-one Relationships Hibernate One-to-one Relationships Hibernate One-to-one Relationships - One to one relationships example using xml meta-data This section we JavaScript array remove by index JavaScript array remove by index  ... described how one can use the pop() method to implement the remove() method... of an element at the specified index position we have created a method removeByIndex(); who C++ - XML whether the insertion was successful. 4) removeItemList(in index:integer, out success:boolean) - deletes the item at a given position (index... was successful. 5) retrieveItemList(in index:integer, out dataItem java program for writing xml file - Java Beginners java program for writing xml file Good morning i want to write values from my database(one table)into one xml file. Like i have 3 coloumns in my table along with their values,so when click some button i need to generate one One to many XML Mapping bag example In this section, you will learn how to use element instead of or element in mapping XML file in Hibernate xml version comparison xml version comparison I have already one xml file, but I get another one xml file in internet,so how to compare the both xml version help to any one Array - Maximum maximum (minimum). There is one common variation on this -- sometimes it isn't the maximum value that is desired, but the index of the maximum.... One solution is to throw an Exception. Another is to change this to return HTML - XML HTML Hi Friend. I am preparing one HTML page. In that page i have four combo boxes. when ever i select the I combo box then automatically... one please send the answer to me... HI friend Converting PDF in to XML Converting PDF in to XML I have to convert PDF into XMl without any loss in text. Please suggest sth good Search index WebTycho Guidelines Java Notes: WebTycho Guidelines Good practices Code Corner Turn on Classroom Awareness Chats sessions Frequent exercises Have students post something... this transformation for you, but TextPad doesn't seem to be one of them Hi good afternoon Hi good afternoon write a java program that Implement an array ADT with following operations: - a. Insert b. Delete c. Number of elements d. Display all elements e. Is Empty XML Schema Types One of the greatest strengths of XML Schemas is its support for data...XML Schema Introduction to XML Schema XML Schema is a W3C Standard HOW TO BECOME A GOOD PROGRAMMER HOW TO BECOME A GOOD PROGRAMMER I want to know how to become good programmer Hi Friend, Please go through the following link: CoreJava Tutorials Here you will get lot of examples with illustration where you can XML Interviews Question page3 XML Interviews Question page3  ... different manufacturers, because it provides only one way of describing your information. XML allows groups of people or organizations to question C.13, create HttpServlet reading XML from HttpRequest - XML HttpServlet reading XML from HttpRequest Hi , My requirement is 1.Input to the HttpServlet will be the XML sent by upstream as HttpRequest 2.HttpServlet has to read XML from HttpRequest and put it in a string turn the GUI(java class) into XML - XML turn the GUI(java class) into XML Hi im trying to turn the GUI(java class) into XML ,means in my gui one combobox is there if i select one... xml one tag is there,in that mapname="somename" active="true" options are there.In Why XML?, Why XML is used for? is another good use of XML. You program can use the XML data file for business...Why XML? In this section we will understand the importance of XML and see Why XML is so important these days. We will also see the different usages of XML XML Parsers XML Parsers XML parser is used to read, update, create and manipulate an XML document. Parsing XML Documents To manipulate an XML document, XML parser is needed. The parser Hibernate Many One One Mapping between two tables. The relationship may be One-To-One, One-To-Many, Many-To-One and Many-To-Many. An Example of One To Many mapping is given below Address... SampleInterfaceImp.java <?xml version='1.0' encoding='utf-8'?> Comparing XML with HTML Comparing XML with HTML XML and HTML are both designed for different purposes... for different types of goals. XML is not at all the replacement of HTML but is complement to html. Here is the list of comparison between XML and HTML given Converting XML to string Converting XML to string Hi Friends, I had an requirement in which a java program receives an xml messages from external web application such as (jsp, php) and then it converts the received xml message into a string. could how to update xml from java - XML . Suppose we have one xml file named "document.xml" document.xml...how to update xml from java hi, Im new to xml parsing and dont know much about. I need to modify the attribute val of a tag in a complex xml file Hibernate One To Mapping Hibernate One To One Mapping Using Annotation Hi If you are use... annotation Consider a relationship between a Student and a Address, where One Student have only one Address and an Address is related to only one student Misc.Online Books ; How to be a Programmer To be a good... vision of a software project is dealing with one's coworkers and customers.... But it is really child's play compared to everything else that a good programmer must clustered and a non-clustered index? clustered and a non-clustered index? What is the difference between clustered and a non-clustered index Parsing XML using Document Builder factory - XML Parsing XML using Document Builder factory Hi , I am new to XML . I am trying to parse a XML file while is : For this i am using Document... = thisChild.getAttribute() I am able to read only one value for vmware_name Read XML using Java Read XML using Java Hi All, Good Morning, I have been working... xml and compare values with DB2 database and generate a report. So First of all i need to read xml using java . i did good research in google and came to know Mysql Btree Index Mysql Btree Index Mysql BTree Index Which tree is implemented to btree index? (Binary tree or Bplus tree or Bminus tree Hibernate Many-to-one Relationships Hibernate Many-to-one Relationships Hibernate Many-to-one Relationships - Many to one relationships example using xml meta-data This current Xml append node problem Xml append node problem print("code sample");Question: I create a XML which looks like this: <cwMin>31</cwMin> The code used in generating this XML is : XMLOutputFactory xof z-index always on top z-index always on top Hi, How to make my div always on top using the z-index property? Thanks Hi, You can use the following code: .mydiv { z-index:9999; } Thanks Programming Style Guideline . If there's one idea that could serve as a guide to good programming..., adding enhancements, adapting it to new systems, etc. That would be a good enough.... Declarations One per line, as Sun suggests string manipulation in xml string manipulation in xml im working on build.xml using apache ant... in one of the targets i am assigning a var file with the full path of .sql files present in a folder. while executing that .sql file in xml its giving me array, index, string array, index, string how can i make dictionary using array...please help XML XML How i remove a tag from xml and update it in my xml XML Elements it by adding one more tag ie..<Subject>in the xml document. So the new modified... XML Elements XML Elements are extensible. They have relationships. They have simple naming Introduction to XML Introduction to XML An xml file consists of various elements. This section presents you a brief introduction to key terminologies used in relation to xml. Here is a sample xml code: .style1 { background-color: #BFDFFF XML: An Introduction a one-piece document. How Can You Use XML? Few Applications of XML Although... XML: An Introduction What is XML? "XML is a cross-platform, software and hardware $_GET[] index is not defined $_GET[] index is not defined Hi, What could be solution of the error: $_GET['someparameter'] index is not defined Thanks Hi, You can use the following code: if (isset($_GET['someparameter'])) { // your code xml xml validate student login using xml for library management system xml xml why the content written in xml is more secure Regular Expression for removing XML Tags Regular Expression for removing XML Tags Hi, Good Afternoon.I want a Regular expression in java which is removing the XML Tags in the specified... after applying regular expression is only "How are you".Need to remove all XML JPA One-to-Many Relationship JPA One-to-Many Relationship  ... the one-to-many relationship and how to develop a one-to-many relation in your JPA Application. One-to-Many: In this relationship each record in Table-A may have checking index in prepared statement checking index in prepared statement If we write as follows: String query = "insert into st_details values(?,?,?)"; PreparedStatement ps = con.prepareStatement(query); then after query has been prepared, can we check the index XML XML create flat file with 20 records. Read the records using xml parser and show required details xml xml how can i remove white space and next line when i copy stream to xml file
http://www.roseindia.net/tutorialhelp/comment/95833
CC-MAIN-2014-23
refinedweb
2,633
63.59
Ok so I have started to use vectors in my game now. Im using one to make my players inventory. I want to have my main code structure of my game in 1 file and all the inventory in another file etc. Now I have wrote out some of my vector for the game Im not sure if to put it into a header file or a cpp file or just put it straight into my int main file...Im not sure if to put it into a header file or a cpp file or just put it straight into my int main file...Code: #include <vector> #include <string> vector<string> inventory; inventory.push_back("Wooden Mallet"); inventory.push_back("Battered Shield"); inventory.push_back("Old Cloak"); vector<string>::iterator myIterator; vector<string>::const_iterator iterate; If I put it into a external file I am not sure how to write it out so it slots in... any suggestions?
https://cboard.cprogramming.com/cplusplus-programming/117025-vectors-printable-thread.html
CC-MAIN-2017-22
refinedweb
155
63.9
score:3 In your example above it seems that the entered nodes have no defined class. Any subsequent d3.selectAll(".name") will return an empty selection and all data elements will show up under .enter() method. You might want to try assigning the corresponding classname every time entering nodes are appended: .enter().append("g").classed("name",true) You might also want to consider using the second argument of .data() to define a unique identifier (key) for each datapoint, ensuring that correct elements are exited on each update, if the order is different. In your code "name" property could probably be used as a key: var person_name = svg.selectAll(".name") .data(data,function(d) { return d.name; }) Finally I notice that you are appending the axes inside the update function. This means a new set of axes will be appended on every update on top of the previous ones. You might want to move those out to the top level. Source: stackoverflow.com Related Query - D3.js exit() not seeming to get updated information - Does the main D3 module namespace not get updated with bleeding edge submodule additions? - Pie chart data not get updated over REST call - Exit not working properly - Where can I get the .geojson file for India and not separate files for each state/territory or any other distinction? - d3 - sunburst - transition given updated data -- trying to animate, not snap - d3.js chart not updated with new data - d3.event.x does not get the x cursor's position - d3.js: How to get event information for drag - Why do I get JavaScript undefined property in Observable but not in HTML? - d3.js ordinal axis is not updated correctly - d3 js exit animation not working properly - D3 force layout nodes attributes not updated properly - Horizontal bar chart exit method not working - Exit selection not working - new data in dataset is not getting updated - Ajax not working when get data based on input for D3 - d3 exit function not removing 1st element of group - d3 re rendering chart exit is not a function - Data bound to children is not updated with new data - Not able to get the Zoom/Pan and States name appear in India Map - D3? - js var does not get value of document.getElementById( ).value - D3 v5: multiline chart, not able to get multiple lines to draw - dc.js heatmap colour range not getting updated after filter applied - Appended text is not updated according to CSV file changes - How to interpolate or get color for the point/values not in specified color table range - How to get updated and rendered value of axis in NVD3.js - d3 bar chart labels not getting updated on updating the chart with the new data - d3.request is not a function. get JSON from API Rest - Click event not updating in React/D3 when data is updated More Query from same tag - extract the count of fields for a month in a year and export it in json for d3.js - How to find the highest attribute in a dataset, then display the "name" associated with that - D3 append element only if element not exist - How to understand isNaN - nvd3 angularjs fixed tooltip - Fail to parse/project coordinates onto JSON - dc-js line chart showing extra filled line at average change - media queries not working on dc.js chart - d3 svg path attr d assignment fails in Jupyter Notebook - Drawing circles via d3js and converting coordinates - text-shadow property in javascript d3 parallel coordinates chart - find target's target i.e friend of friend in force directed graph - Redraw a d3 force layout without moving nodes around - Apply multiple styles with .style() method in D3.js - Hide attached child nodes onClick() - Use minute(of the day) for showing hours in dc js x axis - Setting linkDistance in D3 has no effect - D3 stack layout issues - Possible to visualize neo4j graph in ruby on rails application? - How to implement collision avoidance of points in d3.js v5? - d3 zoom behaviour after target has been translated - Print contents of HTML Object tag with JavaScript - Filtering data for d3.js sankey diagrams - d3.scale.category20b always returning first color - How to control the order of the layers on a map in d3js - Flip D3 Tree Diagram Vertical - D3 Collapsable Tree - Collapse one level at a time - D3.js Brush Controls: getting extent width, coordinates - On Loading D3 Tree Layout chart all nodes should be collapsed under root node - d3 tween for a text return NaN
https://www.appsloveworld.com/d3js/100/24/d3-js-exit-not-seeming-to-get-updated-information
CC-MAIN-2022-40
refinedweb
752
51.48
Hi, I have developed load more functionality in listview and working fine, only issue I am facing that not able to know all data loading completed or data finished consequently my api's hitting(calling api unnecessary) on scroll after completing all data loading. Please suggest me how to sort out this issue Thank you. public class HomeworkFragment :AbsListView.IOnScrollListener { public void OnScroll(AbsListView view, int firstVisibleItem, int visibleItemCount, int totalItemCount) { } public void OnScrollStateChanged(AbsListView view, [GeneratedEnum] ScrollState scrollState) { int threshold = 1; int count = _mListView.Count; if (scrollState == ScrollState.Idle && SISConst.IsHomeworkLoaded==false) { if (_mListView.LastVisiblePosition >= count - threshold) { _pageCount += 1; var task = new LoadBackgroundTask(Activity, _pageCount); task.Execute(); _homeworkAdapter.AddAll(SISConst.Items); } } } } this is separate class public class LoadBackgroundTask : AsyncTask { private Activity _con; private ProgressDialog pbr; int _pageCount; private List<IIsSection> _items; public LoadBackgroundTask(Activity con, int pageCount) { _con = con; _pageCount = pageCount; } protected override void OnPreExecute() { pbr = Utilities.ProgressBar(_con, "b"); pbr.Show(); } protected override Java.Lang.Object DoInBackground(params Java.Lang.Object[] @params) { try { List<StudentHomework> result; _items = new List<IIsSection>(); var sid = SISConst.Mychildren.FirstOrDefault().StudentId; var objHomework = new HomeworkDAL(); result = objHomework.GetHomeWorksForStudentPagesSync(_pageCount.ToString(), "10", sid.ToString()); if (result != null && result.Count > 0) { SISConst.Items = _items; } else { if (result!=null) //this line is the answer of the current thread question SISConst.IsHomeworkLoaded = true; } } catch (Exception ex) { var log = LogService.CreateInstance(); log.MobileLog(ex, SisFragment.UserName); } return null; } } Addall method available in Adapter class public void AddAll(List<IIsSection> scrollitems) { if (scrollitems.Count>0) { if (_items != null) { _items.AddRange(scrollitems); scrollitems.Clear(); SISConst.Items.Clear(); NotifyDataSetChanged(); } } }. Answers Me too have this same issue I am using a button for loading more items, are you using button or automatical loading when end of the list reached? @Sreeee I am not using button just on scroll it appending new items on list. However i have solved this issue You can use activity indicator to know so that till will run until the data is loaded fully Hi @Arvindraja I want to implement the load more feature without the button. I try some codes for detecting the listview end and call the web service if listview end is reached, but not get lucky. My code: In xaml I make the button visibility false and if list hit the bottom make the button visible true.... But button not showing in the UI after reaching the end of list.... Do you have any suggestion? Can you share the code for detecting the end of listview and stop the web service call if all items are loaded? Thanks in advance.... @DarshanJS thank you. If think you are not understood my question. I am asking when listview loaded full data, it should not hit API's. Anyhow I am running/showing progressbar to get next page records. @Sreeee Yeah sure. this is separate class Addall method available in Adapter class. @Arvindraja Thanks for your response. I think your implementation is different from mine. In my project, initially I am loading 20 items in the UI. when the end of the list reached, I am showing a button. On click of that button, a web service call is started for fetching the items from 1 to 40 not from 21 to 40 (adding 20 items to ui on button click). Again onclick of button load the items from 1 to 60. My code : UserTweetResponse is my model class. In my xaml I am using {Binding} property to show the data in UI. Inside the myurl I am adjusting the count values. So in this implementation how can I know if the list reaches the bottom? Is my shared codes are enough to give a solution? Thanks in advance.... Exaclty same I am doing but accessing 10 records. //this is web api calling result = objHomework.GetHomeWorksForStudentPagesSync(_pageCount.ToString(), "10", sid.ToString()); List reaches to bottom for knowing that need to override below method Don't implement it in adapter class. for that you need to override OnScrollStateChanged method. Hi @Arvindraja: How can I access OnScrollStateChanged()? In my project showing errors for OnScrollStateChanged(). For accessing this method, what I need to do first? For accessing OnScrollStateChanged()method you need to extend your activity/Fragment with AbsListView.IOnScrollListenerinterface as i already posted code before. have a look on that Thank you. Thanks for your response I am extending to a contentpage not an activity or fragment. My class already extend contentpage like below: I remove the contentpage and give AbsListView.IOnScrollListener but getting AbsListView not found error @Sreeee You are using XamarinForms, for this lot many tutorials available for the same task, go through those. please post in Xamarin Forms Forum
https://forums.xamarin.com/discussion/108730/how-to-know-load-more-completed-all-data-loaded-in-listview-android
CC-MAIN-2019-13
refinedweb
772
51.55
A connection pool of fixed size. More... #include <Wt/Dbo/FixedSqlConnectionPool> A connection pool of fixed size. This provides a connection pool of fixed size: its size is determined at startup time, and the pool will not grow as more connections are needed. This is adequate when the number of threads (which need different connections to work with) is also bounded, like when using a fixed size thread pool. This is for example the case when used in conjunction with Wt. Note that you do not need as many connections as sessions, since Session will only use a connection while processing a transaction. Creates a fixed connection pool. The pool is initialized with the provided connection, which is cloned ( size - 1) times. The pool thus takes ownership of the given connection.. Implements Wt::Dbo::SqlConnectionPool. Handle a timeout that occured while getting a connection. The default implementation throws an Exception. If the function returns cleanly, it is assumed that something has been done to fix the situation (e.g. connections have been added to the pool): the timeout is reset and another attempt is made to obtain a connection. Returns a connection to the pool. This returns a connection to the pool. This method is called by a Session after a transaction has been finished. Implements Wt::Dbo::SqlConnectionPool. Set a timeout to get a connection. When the connection pool has no available connection, it will wait the given amount of milliseconds. On timeout, handleTimeout() is called, which throws an exception by default. By default, there is no timeout. Get the timeout to get a connection.
http://www.webtoolkit.eu/wt/wt-3.3.8/doc/reference/html/classWt_1_1Dbo_1_1FixedSqlConnectionPool.html
CC-MAIN-2017-51
refinedweb
266
67.45
NeDi network monitoring NeDi (Network Discovery) is an open source network monitoring tool (GNU GPL license) with optional commercial NeDi_support. There is a brief NeDi_flyer. Contents of this page: - NeDi documentation - NeDi installation - Initial configuration of NeDi - Using NeDi - Locating a MAC address - Monitoring specific devices in Devices-List - Discovery Notifications "no working user" - Adjusting alert thresholds - Discarding "Node changed IP and name" events - Notification: did not receive any traffic did not send any traffic - Monitoring specific devices in Nodes-List - Geographical visualization of device locations - Changing a device SNMP community name - Reports and traffic loads - Mapping MAC address to IP address - Receiving SNMP traps NeDi documentation We have found the following NeDi documentation and tutorials: - NeDi_documentation page. - Videos on Youtube. - Video about new features in NeDi 1.5. - Tutorial on drawings maps in NeDi. - Video OSMC 2014: Network Discovery Update | Remo Rickli and the corresponding Slides from OSMC 2014. - Outdated page on SourceForge. - NeDi user forum. NeDi installation For security reasons it is strongly recommended to configure the NeDi). Alternatively, configure the server's firewall rules so that only specific clients can access the web server ports. On the dedicated server for NeDi, download the NeDi tar-ball file nedi-XXX.tgz from the NeDi download page. Paying customers may download the latest version (currently 1.8) from the NeDi_customer page. See also the general NeDi_installation page. NeDi installation and upgrading on CentOS and RHEL Linux To keep this page more general, the installation as well as upgrading of NeDi on CentOS 6/7 and RHEL 6/7 Linux is documented in a separate page: See also the general NeDi_installation page. Initial configuration of NeDi This section explains how to get started initially with setting up your NeDi installation. Start using the NeDi GUI Navigation of the NeDi GUI is explained in the page. NeDi web page login and password Go the the NeDi server web page ( in the above example) and log in as user admin with password admin. Change the admin password immediately: - When logged in, go to the User->Profile page (/User-Profile.php) for the admin user. - In the Status/Password "padlock" fields type in old and new passwords and press the Update button. User profiles You should update the User->Profile page for the admin user, and for any other user which you choose to create, for at least these fields: - SMS text message telephone number (if you want to set up an SMS gateway later). - Time zone. New user accounts are created in the User->Accounts window. Here you can also assign access rights to the users. The default user password is the same as the username. Network Discovery with NeDi Network discovery by means of SNMP is documented in NeDi_documentation. However, NeDi first needs a little configuration. Configuration of nedi.conf The NeDi main configuration file /etc/nedi.conf should first be configured: Set SNMP public and private communities: comm public comm <secret read-write SNMPv2 community> The public is the default SNMP read-only community name, but you may want use a different read-only community in your devices and in NeDi. Only discover devices where ip address matches this regular expression: netfilter ^192\.168\.0|^172\.16 Address where notification emails are sent from: mailfrom [email protected] Add IP addresses, IP ranges or host names to the file /var/nedi/seedlist, for example: 10.13.6.2-64 public myserver.example.com public myprinter.example.com public The 2nd column (public) is the default SNMP read-only community name, but you may want use a different read-only community in your devices and in NeDi. All graphs are generated using RRDtool. In NeDi 1.5 some new features available only in RRDtool 1.4 and higher are by default configured: rrdcmd rrdtool new If you have RRDtool 1.3 or older (CentOS6/RHEL6), remove the new keyword. Running initial device discovery Read the NeDi_documentation page section The First Time. Several options define how your network should be discovered: - -p Use dynamic discovery protocols like CDP or LLDP. - -o Search ARP entries for network equipment vendors matched by ouidev in nedi.conf. - -r Use route table entries of OSI_model Layer 3 devices. A run without any options will result in a plain static discovery using the seedlist file, or the default gateway (router) if you haven’t added any seedlist file entries yet. First use the CLI and the -v option to closely follow the discovery. Please note that -o requires that you define the ouidev parameter in nedi.conf. It seems that this option is only useful if you want to restrict device discovery to certain vendors while avoiding, for example, Cisco devices. Run static discovery (verbose: -v) as the nedi user: su - nedi ./nedi.pl -v When you are satisfied with the result, you may perhaps want to try dynamic discovery: ./nedi.pl -v -p SNMP test with snmpwalk The snmpwalk command is installed by: yum install net-snmp-utils For command options see man snmpcmd. To test that you can read a switch using SNMP use, for example, this command: snmpwalk -Os -c <community-string> -v <protocol-version> <device-address> system For example, on a Linux host test the localhost: snmpwalk -Os -c public -v 2c localhost system For a remote system b307-XXX: snmpwalk -Os -c public -v 2c b307-XXX system Generate device definitions for unknown devices This part is really optional: Some switch devices may show up as grey icons (for example )in the Devices-List.php (Devices->List menu) because they are unknown to NeDi. The solution to this problem has been described in a Defgen_Tutorial video. To configure NeDi device definitions for an unknown device: - Click on the grey device icon to go to the Devices-Status.php page. - In the Summary pane, click on the Edit Def File icon to go to the Other-Defgen.php page. - View the Defgen_Tutorial video Chapter 1 (for a chassis switch go to Chapter 2 at about 19:15 min.). - In the Main pane look at the SysObjId field, below it are "similar" device definitions indicated by icons. Click on one of the closest values to the SysObjId field to load its values into the page, and then follow the Defgen_Tutorial video. - When the page has been completed, click on the Write button to write the device definition file to the server's disk. - Then click on the Discovery icon to make NeDi rediscover the device in question. After the next scheduled NeDi network discovery has been run, all switches of this type should appear correctly in the Devices->List. Contribute device definitions to NeDi As a courtesy to the NeDi community, when you have created and tested device definitions for a hitherto unknown device, please contribute the definition file by E-mail: - In the Other-Defgen.php page which you used before, click on the mail icon to E-mail the definitions to [email protected]. Switch configuration backup Switch configurations can be backed up from the GUI or the CLI. Configurations will be stored in directories under /var/nedi/config/. Using the GUI Devices->List, click on a given device to go to its Devices-Status.php page. In the Summary pane find the Configuration line to view the backup status. To make a new backup click on the Configuration Backup icon . If using the CLI as user nedi the backup command learned from the GUI is: /var/nedi/nedi.pl -v -B0 -SWOAjedibatflowg -a <device-IP> Using NeDi This section explains how to perform some common tasks. Help information for each page is in the Help icon . Locating a MAC address A common and important task is to locate the switch device and port which a particular node MAC address is connected to: - Go to the Nodes->Status page. - Enter the node MAC address and press Show. After a few seconds the requested switch port information for this MAC is displayed. Monitoring specific devices in Devices-List If you want NeDi to generate notification events or alerts for certain devices in the Devices-List use this procedure: - In Devices-List select a set of devices using the upper left selector, for example, Device Type ~ 2530. The press the Show button to display your list. - Verify the device list, and if it's OK press the Monitor button. Now NeDi will begin to monitor events from these devices. - To configure any desired actions on events, go to the Monitoring-Setup page. - In the Monitoring-Setup pane labelled Filter select once more the same devices as in 1., for example, Device Type ~ 2530. - In the Monitor pane select the type of test you want to perform for the selected devices. For example, replace Test-> by ping. Then select the kind of alert desired, for example, replace Alert-> by Mail in order to send E-mails to the logged-in user. - In the Events pane Forward field, select the minimum event level desired for alerts to be sent, for example, replace Level by Warning. - Press the Update button to confirm your changes. Some hints: For network switches, it is better to use Test->uptime in item 5. above, so that you will be alerted when switches are rebooted. In order for E-mails to be sent to you, your E-mail address must be defined in the User-Profile page. Monitoring Test alerts can be generated from the nedi user's CLI (NeDi version 1.4 and above): ./moni.pl -vc200 Discovery Notifications "no working user" Some of your monitored devices may not permit CLI user logins with SSH/telnet (for example, you may not know the password). This may cause Discovery Notifications E-mails complaining about inability to access the device CLI: <device-name> CLI Bridge Fwd error: no working user <device-name> Config backup error: no working user There doesn't seem to be any simple way to configure do not log in to this device. In stead you must modify the device discovery options for the device: - In the Monitoring-Setup page select the device. In the column Events there is an icon called notify (when you hover the mouse over it). Here you enter the following device discovery options: adefijlmnopstw and press the Update button. Verify that the device Discover column now contain your new options. These options explicitly omit the letters b (backup) and c (CLI). The default values are defined in nedi.conf in the Messaging & Monitoring option notify. Hopefully this should eliminate the above CLI warnings. Adjusting alert thresholds In the Monitoring-Setup (NeDi 1.6 and older) or Devices-List (NeDi 1.7 and newer) page you can adjust various alert Threshold values. Click on the Edit threshold icon field in the top pane next to the Show button: - CPU, - Temperature, - ARP_poison (ARP entries per IP to detect poisoning on routers), - Memory, - PoE (Power over Ethernet), - Supply Latency Warning alerts The default network Latency Warning alert is set at 100 milliseconds in nedi.conf: latency-warn 100 Unfortunately, you can't change the latency value for already discovered devices in nedi.conf. The solution to this problem is rather cryptic: - You must go to the Monitoring-Setup page and select which devices to modify. - In the Monitor heading in the top pane there is a field with no icon next to it: If you hover the mouse over this field, a text Latency Warning [ms] is shown. - Click on the field's up-arrow selector to increase the value. Then click on the Update button. Now you should see the new Latency Warning value in the device column under the Statistics heading. Printer supply alerts NeDi reads the printer supply levels (toner etc.) by SNMP from any printer devices monitored. If any supply level is below the notification limit (default value: 5%), an alert will appear in the Discovery Notifications E-mail sent by NeDi. To remove these often superfluous notifications go to the Monitoring-Setup (1.6) or Devices-List (1.7 or newer) page and select the desired printers. Then edit the Supply Alert threshold icon field to insert a value of 0, then press the Update button. Discarding "Node changed IP and name" events If your network has nodes (servers) with multiple IP addresses assigned to a single network interface, NeDi will report (in Monitoring-Events) in every discovery cycle events similar to this: Node <MAC> changed IP to <IP> and name <DNS> where MAC, IP and DNS will be specific to the nodes in question. This is just annoying "noise" which we would like NeDi to discard, because it's perfectly normal. One usage scenario will be multiple tagged VLANs on an interface. To configure a RHEL6/CentOS6 network interface for VLAN, see 9.2.6. Setting Up 802.1q VLAN Tagging (see also Configure an Ethernet interface as a VLAN trunk (Red Hat)). You can force NeDi to discard all such events in the Monitoring-Setup page: - Select all relevant switch devices. - In the Events column Syslog, Trap, Discover icon. - Select Discard and Level=Notice. In the Filter field enter the text: changed IP to - Press the update button. The new filter will be shown in the Events Action column. Notification: did not receive any traffic did not send any traffic We have seen some cases where NeDi discovery sends E-mail notifications similar to: 1) switchA Port LLDP:switchB,port did not receive any traffic did not send any traffic If this doesn't cease, it's actually a problem on one or both switches. The switch port counters have stopped incrementing while traffic is flowing. One can log in to both affected switches and display real-time port counters to determine which switch is at fault. Solution: Reboot the switch with broken port counters. Monitoring specific devices in Nodes-List NeDi can also generate notification events or alerts for nodes in the Nodes-List, in addition to devices in the Devices-List. Example nodes could be: - Switches/routers which do not have or do not permit SNMP Get operations. - Printers without SNMP. - Servers without SNMP. - Other devices such as PCs, cameras or whatever. Use this procedure: - In Nodes-List make a search to uniquely list the node, for example, by its IP address. - Verify the node list, and if it's OK press the Monitor button. - Follow steps 3-7 in the above Devices-List procedure. Geographical visualization of device locations From the NeDi_about page: In order to do that, NeDi needs a certain format in the SNMP Location string as defined in the device's SNMP configuration. The format used by NeDi is: Region;City;Building;Floor;[Room;][Rack;][Rack Unit position;][Height in RUs] (The separator character ; can be modified in nedi.conf with locsep). The building or street address can consist of several sub-buildings with a 2nd level separator (e.g. _). Example: Switzerland;Zurich;Main Station_A;5;DC;Rack 17;7 The resulting device location maps can be viewed in multiple pages: - The Topology-Map page: Click on your Region name, then explore the map down in the City and Building levels. - The Topology-Table and Monitoring-Health pages: Your Buildings will be shown, then explore the Floors and Rooms down to the device level. Location errors will also be shown. In the room view displaying racks, the default number of rack columns is 8. This may be too wide for your browser, so adjust the number of rack columns in your User-Profile page in the field # Columns (0-31). A number of 5 columns may be suitable. There is an instructive Topology_Showcase video, which also describes the use of maplo and nam2loc in nedi.conf. Changing a device SNMP community name If you decide to change a device SNMP community name, for example, the default SNMP read-only public community, the NeDi database must be updated manually, since it doesn't help to reconfigure the nedi.conf or seedlist files with the new community name - updating this information seems to be ignored. You have to run this command for each IP-address whose SNMP community name gets updated: nedi.pl -a <IP-address> -C <new-community-name> -SAFGgadobewitjumpv Reports and traffic loads Network utilization reports To get an overview of the utilization of your subnets, either in terms of number of nodes, or in terms of which IP-addresses are in use, go to the Reports->Networks page. Select either Network Distribution or Network Utilization and click the Show button. Network maps including traffic load Go to the Topology-Map page: - In the Filter pane select the locations and/or devices you want to display. In the Main pane select: - Size&Format: select type png and the image size you want. - Map Type : select Devices and flat In the Layout pane select the Connection Information type you want displayed, for example: - Bandwidth displays link bandwidth. - Link Load displays link load in percent. - Traffic: Small displays small load graphs for the past week. - In the Show pane you can add device IP address, location, etc. Finally press the Show button to generate the network map image. Interface reports (traffic load) To monitor the network traffic load of devices, use the Devices-Interfaces page: - In the Interface-List pane select the Device Name you want to monitor. -. In each column heading there is a triangle/arrow icon: Click the triangle to sort the columns in ascending/descending values. Node reports (traffic load) To monitor the network traffic load of nodes (for example, to find nodes that generate too much traffic), use the Nodes-List page: -: The node names and IP addresses connected to each switch interface is shown. In each column heading there is a triangle/arrow icon: Click the triangle to sort the columns in ascending/descending values. If you want to restrict the node list to a specific switch: - In the Nodes-List pane select the Device Name you want to monitor. Mapping MAC address to IP address Network switches at OSI_model Layer 2 operate only on the Ethernet MAC_address and are in principle ignorant about the IP_address of nodes on the network. Then how may NeDi learn about the IP_address of nodes on the network by speaking only to network devices? Each computer maintains its own table of the mapping from Layer 3 addresses (e.g. IP_address) to Layer 2 addresses (e.g. Ethernet MAC_address). This is called the ARP cache. Your network Router works at the Layer 3 IP_address level and forwards packets between local and remote networks, hence it must have ARP cache information about all its network interfaces. NeDi will read the ARP cache information from your Router and all other SNMP capable devices in your network, and hence NeDi can build up a database of ARP cache information internally and present it to you. In some cases your Router may not contain complete ARP cache information of each and every device, and you need to help NeDi with additional ARP cache data. In this case you first want to run the arpwatch utility described below to accumulate an ARP cache database. It is necessary to configure in nedi.conf: arpwatch /var/lib/arpwatch/arp.dat* Then execute this command: ./nedi.pl -N arpwatch to make NeDi read in your arpwatch database. Check the list of node IP and MAC addresses in the Nodes-List page. If successful, you could run this command regularly (e.g., once per day) from crontab. Note: If your NeDi version is too old (<= 1.5.038) then you must add the argument 0 to the misc::ArpWatch() call in nedi.pl at line 182: if($opt{'N'} =~ /^arpwatch/){ &misc::ArpWatch(0); Using the arpwatch utility For a CentOS 6 Linux server to work with ARP caches, install the arpwatch package and its arpfetch script, as well as some tools in the arp-scan package: yum install arpwatch arp-scan cp -p /usr/share/doc/arpwatch-*/arpfetch /usr/local/bin/ Now you can inquire any SNMP device (in particular your Router) about its ARP cache: arpfetch <IP-address> public where public is just a default SNMP community name (you may be using a different community name). Updating ethercodes.dat Ethernet vendor codes You may perhaps want to update the Ethernet vendor codes in /var/lib/arpwatch/ethercodes.dat (dated 2010) to a more recent version, but unfortunately no up-to-date ethercodes.dat file seems to be available. Update May 2015: Arpwatch ethercodes.dat have now become available from this site: Generating ethercodes.dat from IEEE OUI Data or Nmap MAC Prefixes Updating ethercodes.dat is actually a little involved, since the official IEEE_OUI file has become somewhat inconsistent over the years. In stead it is recommended to download from the Sanitized IEEE OUI Data (oui.txt) page. Another possibility is to use the arp-scan tool get-oui (see man get-oui) The arpwatch requirement is similar to the Nmap MAC Prefixes file, so you can generate ethercodes.dat with these commands: wget --timestamping awk '{ mac = substr($1,1,2) ":" substr($1,3,2) ":" substr($1,5,2); $1=""; printf("%s\t%s\n", mac, $0)}' < nmap-mac-prefixes > /var/lib/arpwatch/ethercodes.dat For automated updating you can create this Makefile: /var/lib/arpwatch/ethercodes.dat: nmap-mac-prefixes awk '{ mac = substr($$1,1,2) ":" substr($$1,3,2) ":" substr($$1,5,2); $$1=""; printf("%s\t%s\n", mac, $$0)}' < $< > $@ nmap-mac-prefixes: FRC wget --timestamping FRC: and run make. Optional: There is also an official IEEE_IAB file (Individual Address Blocks). Each block represents a total of 2^12 (4,096) Ethernet MAC addresses. This file may be downloaded using the arp-scan tool get-iab (see man get-iab). The arpwatch daemon Configure the file /etc/sysconfig/arpwatch, changing the default recipient root to - in order to suppress E-mails: OPTIONS="-u arpwatch -e - -s 'root (Arpwatch)'" Start the arpwatch daemon (also at boot time): chkconfig arpwatch on service arpwatch start ARP cache data will be collected in the files /var/lib/arpwatch/arp.dat*, and those files will be refreshed every 15 minutes by the arpwatch daemon (previous files are renamed with a "-" extension). However, this only works if your server has a single default network interface, such as eth0. If you have multiple network interfaces, you must modify the arpwatch init-script as described in arpwatch on multiple interfaces. Every network interface to be monitored requires a separate instance of the arpwatch daemon. Download an improved arpwatch init-script to replace /etc/rc.d/init.d/arpwatch. For convenience we have attached a copy of the arpwatch-init file. Add a new INTERFACES variable to /etc/sysconfig/arpwatch, for example: INTERFACES="eth0 eth1" Now start the arpwatch service as above. Configure NeDi in nedi.conf to read the ARP cache data: arpwatch /var/lib/arpwatch/arp.dat* arpwatch bugs The arpwatch code is dated around 2006, see the LBL homepage, and therefore has a number of bugs that get fixed by various Linux distributions. One annoying bug is that the arpwatch daemon will report all DHCP lease renewals in the syslog similar to: arpwatch: changed ethernet address 0.0.0.0 0:14:5e:55:70:25 (0:14:5e:55:c2:6a) See this report. To remove this bug the following patch in the arpwatch code db.c added at line 95 seems to do the trick: /* Ignore 0.0.0.0 ip address */ if (a == 0) return (1); The db.c_patch file is attached. Hopefully this patch may be accepted by distributions. See also the Debian bug list for arpwatch. To patch and rebuild the CentOS .src RPM package: rpm -i arpwatch-2.1*.el6.src.rpm (to do) Kernel ARP cache If the number of network devices (cluster nodes plus switches etc.) approaches or exceeds 512, you must consider the Linux kernel's limited dynamic ARP cache size. Please read the man-page man 7 arp about the kernel's ARP cache. Documentation on the net: = 8192 # Tell the gc when to become aggressive with arp table cleaning. # Adjust this based on size of the LAN. net.ipv4.neigh.default.gc_thresh2 = 4096 # Adjust where the gc will leave arp table alone net.ipv4.neigh.default.gc_thresh1 = 2048 # Adjust to arp table gc to clean-up more often net.ipv4.neigh.default.gc_interval = 2000000 # ARP cache entry timeout net.ipv4.neigh.default.gc_stale_time = 2000000 Please change the numbers according to your network size: The value of gc_thresh1 should be greater than the total number of nodes in your network, and the other values gc_thresh2 and gc_thresh3 should be 2 and 4 times gc_thresh1. The values of gc_interval and gc_stale_time (in seconds) should be large enough to retain ARP cache data for a useful period of time (several weeks). Then run /sbin/sysctl -p to reread this configuration file. Receiving SNMP traps Devices can be configured to send SNMP_traps to one or more SNMP servers whenever events occur. An SNMP server can be configured to receive and process such traps, see the tutorial TUT:Configuring_snmptrapd. The NeDi SNMP trap handler is /var/nedi/trap.pl. Configure it as follows: Put this in /etc/snmp/snmptrapd.conf for NeDi to receive traps for the public community: authCommunity log,execute,net public traphandle default /var/nedi/trap.pl # Do not write traps to syslog (will be handled by NeDi trap.pl) doNotLogTraps yes Change the daemon options in file /etc/sysconfig/snmptrapd so that only critical (and higher) traps are logged: OPTIONS="-Ls2d -p /var/run/snmptrapd.pid" See man snmpcmd section LOGGING OPTIONS. Alternatively, snmptrapd may log to a separate syslog file by: OPTIONS="-Lf /var/log/snmptrapd.log -p /var/run/snmptrapd.pid" You must create this logfile and set its SELinux context: touch /var/log/snmptrapd.log chcon --reference=/var/log/messages /var/log/snmptrapd.log Start the service: chkconfig snmptrapd on service snmptrapd start Incoming SNMP_traps will be added to Monitoring-Events. Upon receiving a trap, the script will check whether a device with the source IP is a device monitored by NeDi. The default event level will be set to 50 if the device is in NeDi, otherwise it is set to the low value of 10. Firewall configuration allowing SNMP traps to be received on port 162 must be configured in /etc/sysconfig/iptables: -A INPUT -m state --state NEW -m tcp -p tcp --dport 162 -j ACCEPT -A INPUT -m state --state NEW -m udp -p udp --dport 162 -j ACCEPT and the iptables service restarted. Customizing trap.pl for SNMP trap alerts Please note this comment by the author in trap.pl: The script conaints some basic mappings to further raise authentication and configuration related events. Look at the source, if you want to add more mappings. Trap handling has not been further pursued in favour of syslog messages. Here are some simple customizations of trap.pl which you may find useful: Use level=0 to ignore selected events: if($info =~ s/IF-MIB::ifIndex/Ifchange/){ # We want to ignore interface up/down events $level = 0; ... if ($level > 0) { # $level == 0 means: ignore this event my $mq = &mon::Event(1,$level,'trap',$tgt,$tgt,"$info","$info"); &mon::AlertFlush("NeDi Trap Forward for $tgt",$mq); } Test the trap functionality by sending a test trap, see the SNMP tutorial TUT:snmptrap: snmptrap -v 1 -c public <nedi-server> '1.2.3.4.5.6' '192.193.194.195' 6 99 '55' 1.11.12.13.14.15 s "teststring" Configuring devices to send SNMP traps Devices must be configured explicitly to send SNMP_traps to SNMP servers. In these examples we use the default community public, but you may be using a different community name. The syntax for HP ProCurve switches may be: snmp-server host <IP-of-server> community "public" trap-level not-info snmp-server host <IP-of-server> "public" not-info # Used on some older ProCurve models snmp-server host <IP-of-server> "public" critical # To avoid login/logout traps being sent HP H3C/3Com switches may use this syntax: snmp-agent target-host trap address udp-domain <IP-of-server> params securityname public
https://wiki.fysik.dtu.dk/it/NeDi
CC-MAIN-2018-39
refinedweb
4,703
55.24
Any idea why my log message (which is trying to print the results of History() on a futures contract) returns no data? from datetime import timedelta from collections import deque class TestAlgorithm(QCAlgorithm): def Initialize(self): self.SetStartDate(2017, 8, 2) self.SetEndDate(2017, 8, 3) self.SetCash(25000) # Subscribe and set our expiry filter fr the futures chain # get contracts expring in 15 to 180 days self.soybeans = self.AddFuture(Futures.Grains.Soybeans, Resolution.Hour) self.soybeans.SetFilter(timedelta(15), timedelta(180)) def OnData(self, slice): # only print out data for settlement (which occurs at 2:15 PM ET) if not (self.Time.hour == 14 and self.Time.minute == 15): return data = dict() # iterate over each futures chain we are getting data for for chain in slice.FutureChains: # sort the contracts by open interest (highest to lowest, so reverse the default sort order) # get the first contract in the sorted list to get the month we want data on front = sorted(chain.Value, key = lambda x: x.OpenInterest, reverse=True)[0] self.Log(str(self.History(front.Symbol, 5, Resolution.Hour)))
https://www.quantconnect.com/forum/discussion/2910/history-not-working-with-futures-data/
CC-MAIN-2020-16
refinedweb
182
50.84
I was following a tutorial on how to make a bouncing ball. No problem. But to be sure I understood what I was doing I decided to kick it up a bit and add more balls to it. Which I did fairly easily. Now, however, I want to make them bounce off each other as well as the walls. I'm afraid that I've bitten off more than I can chew and with the plethora of answers out there that I've found. None of them seem to work, I suspect that's because I don't want to use draw to make shapes rather than just use "ball.gif". So here's my code and I'll highlight my attempts to make balls bounce off each other: It's not crazy long so please pardon the amount of code here. - Code: Select all import pygame import random import sys class Ball: def __init__(self, X, Y): self.velocity = [random.randint(-10, 10), random.randint(-10, 10)] self.ball_image = pygame.image.load('ball.gif') self.ball_boundary = self.ball_image.get_rect(center=(X, Y)) if __name__ == '__main__': width = 800 height = 700 MyClock = pygame.time.Clock() background_colour = 0, 0, 0 pygame.init() frame = pygame.display.set_mode((width, height)) pygame.display.set_caption("OH GOD, THE BALLZ!") num_balls = 5 ball_list = [] for i in range(num_balls): ball_list.append(Ball(random.randint(150, 600), random.randint(150, 500))) # so none of the balls spawn in the walls while True: for event in pygame.event.get(): if event.type == pygame.QUIT: sys.exit(0) frame.fill(background_colour) for ball in ball_list: # wall collision if ball.ball_boundary.left < 0 or ball.ball_boundary.right > width: ball.velocity[0] = -1 * ball.velocity[0] if ball.ball_boundary.top < 0 or ball.ball_boundary.bottom > height: ball.velocity[1] = -1 * ball.velocity[1] for ball in ball_list: # attempts at ball to ball collision i = 1 if ball.ball_boundary.left > ball_list[i].ball_boundary.right: # first two here check for left and right rect collisions ball.velocity[0] = -1 * ball.velocity[0] if ball.ball_boundary.right > ball_list[i].ball_boundary.left: ball.velocity[0] = -1 * ball.velocity[0] if ball.ball_boundary.top > ball_list[i].ball_boundary.bottom: # these two check for top and bottom collisions ball.velocity[1] = -1 * ball.velocity[1] if ball.ball_boundary.bottom > ball_list[i].ball_boundary.top: ball.velocity[1] = -1 * ball.velocity[1] i += 1 ball.ball_boundary = ball.ball_boundary.move(ball.velocity) frame.blit(ball.ball_image, ball.ball_boundary) pygame.display.flip() MyClock.tick(60) Those checks do work... sort of. When I run the program it seems to assume that the top, left, right, bottom span the entirety of the window and not just the area of the balls themselves. Suggestions? -Wommbatt
http://www.python-forum.org/viewtopic.php?p=12107
CC-MAIN-2016-44
refinedweb
449
54.49
Parse::RecDescent::Topiary - tree surgery for Parse::RecDescent autotrees use Parse::RecDescent::Topiary; my $parser = Parse::RecDescent->new($grammar); ... my $tree = topiary( tree => $parser->mainrule, namespace => 'MyModule::Foo', ucfirst => 1 ); Parse::RecDescent has a mechanism for automatically generating parse trees. What this does is to bless each resulting node into a package namespace corresponding to the rule. This might not be desirable, for a couple of reasons: newfor each class. A base class, Parse::RecDescent::Topiary::Base is provided in the distribution, to construct hashref style objects. The user can always supply their own - inside out or whatever. topiary This is a function which recursively rebuilds an autotree returned by Parse::RecDescent, using constructors for each node. This exported function takes a list of option / value pairs: tree Pass in the resulting autotree returned by a Parse::RecDescent object. namespace If not specified, topiary will not use objects in the new parse tree. This can be specified either as a single prefix value, or a list of namespaces as an arrayref. As the tree is walked, each blessed node is used to form a candidate class name, and if such a candidate class has a constructor, i.e. if Foo::Bar::Token->can('new') returns true, this will be used to construct the new node object (see delegation_class). If a list of namespaces are given, each one is tried in turn, until a new method is found. If no constructor is found, the node is built as a data structure, i.e. it is not blessed or constructed. ucfirst Optional flag to upper case the first character of the rule when forming the class name. consolidate Optional flag that causes topiary to reduce the nesting, unambiguously, of optionally quantified productions. The production foo(?) causes generation of the hash entry 'foo(?)' containing an arrayref of either 0 or 1 elements depending whether foo was present or not in the input string. If consolidate is a true value, topiary processes this entry, and either generates a hash entry foo => foo_object if foo was present, or does not generate a hash entry if it was absent. args Optional user arguments passed in. These are available to the constructors, and the default constructor will put them into the new objects as $self->{__ARGS__}. delegation_class @class_list = qw(Foo::Bar Foo::Baz); my $class = delegation_class( 'Dongle', \@class_list, 'wiggle' ); This subroutine is not exported by default, and is used internally by topiary. $class is set to Foo::Bar::Dongle if Foo::Bar::Dongle->can('wiggle') or set to Foo::Baz::Dongle if Foo::Baz::Dongle->can('wiggle') or return undef if no match is found. Please report bugs to Ivor Williams CPAN ID: IVORW [email protected] This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself. The full text of the license can be found in the LICENSE file included with this module.
http://search.cpan.org/~ivorw/Parse-RecDescent-Topiary/lib/Parse/RecDescent/Topiary.pm
CC-MAIN-2016-26
refinedweb
487
55.34
Get the highlights in your inbox every week. Loop better: A deeper look at iteration in Python Loop better: A deeper look at iteration in Python Dive into Python's for loops to take a look at how they work under the hood and why they work the way they do. Subscribe now: >>> numbers = [1, 2, 3, 5, 7] >>> squares = (n**2 for n in numbers) We can pass our generator object to the tuple constructor to make a tuple out of it: >>> tuple(squares) (1, 4, 9, 25, 49) If we then take the same generator object and pass it to the sum function, we might expect that we'd get the sum of these numbers, which would be 88. >>> sum(squares) 0 Instead we get 0. Gotcha 2: Containment checking Let's take the same list of numbers and the same generator object: >>> numbers = [1, 2, 3, 5, 7] >>> squares = (n**2 for n in numbers) If we ask whether 9 is in our squares generator, Python will tell us that 9 is in squares. But if we ask the same question again, Python will tell us that 9 is not in squares. >>> 9 in squares True >>> 9 in squares False We asked the same question twice and Python gave us two different answers. Gotcha 3: Unpacking This dictionary has two key-value pairs: >>> counts = {'apples': 2, 'oranges': 1} Let's unpack this dictionary using multiple assignment: >>> x, y = counts You might expect that when unpacking this dictionary, we'll get key-value pairs or maybe.We'll come back to these gotchas after we've learned a bit about the logic that powers these Python snippets.>>> x 'apples' Review: Python's for loop Python doesn't have traditional for loops. To explain what I mean, let's take a look at a for loop in another programming language. This is a traditional C-style for loop written in JavaScript: let numbers = [1, 2, 3, 5, 7]; for (let i = 0; i < numbers.length; i += 1) { print(numbers[i]) } JavaScript, C, C++, Java, PHP, and a whole bunch of other programming languages all have this kind of for loop. But Python does not. Python does not have traditional C-style for loops. We do have something that we call a for loop in Python, but it works like a foreach loop. This is Python's flavor of for loop: numbers = [1, 2, 3, 5, 7] for n in numbers: print(n). An iterable is anything you can loop over with a for loop in Python. Iterables can be looped over, and anything that can be looped over is an iterable. for item in some_iterable: print(item) Sequences are a very common type of iterable. Lists, tuples, and strings are all sequences. >>> numbers = [1, 2, 3, 5, 7] >>> coordinates = (4, 5, 7) >>> words = "hello there" Sequences are iterables that have a specific set of features. They can be indexed starting from 0 and ending at one less than the length of the sequence, they have a length, and they can be sliced. Lists, tuples, strings, and all other sequences work this way. >>> numbers[0] 1 >>> coordinates[2] 7 >>> words[4] 'o' Lots of things in Python are iterables, but not all iterables are sequences. Sets, dictionaries, files, and generators are all iterables but none of these things are sequences. >>> my_set = {1, 2, 3} >>> my_dict = {'k1': 'v1', 'k2': 'v2'} >>> my_file = open('some_file.txt') >>> squares = (n**2 for n in my_set): numbers = [1, 2, 3, 5, 7] i = 0 while i < len(numbers): print(numbers[i]) i += 1 This works for lists, but it won't work everything. This way of looping only works for sequences. If we try to manually loop over a set using indexes, we'll get an error: >>> fruits = {'lemon', 'apple', 'orange', 'watermelon'} >>> i = 0 >>> while i < len(fruits): ... print(fruits[i]) ... i += 1 ... Traceback (most recent call last): File "<stdin>", line 2, in <module> TypeError: 'set' object does not support indexing. >>> numbers = {1, 2, 3, 5, 7} >>> coordinates = (4, 5, 7) >>> words = "hello there" We can ask each of these iterables for an iterator using Python's built-in iter function. Passing an iterable to the iter function will always give us back an iterator, no matter what type of iterable we're working with. >>> iter(numbers) <set_iterator object at 0x7f2b9271c860> >>> iter(coordinates) <tuple_iterator object at 0x7f2b9271ce80> >>> iter(words) <str_iterator object at 0x7f2b9271c860> Once we have an iterator, the one thing we can do with it is get its next item by passing it to the built-in next function. >>> numbers = [1, 2, 3] >>> my_iterator = iter(numbers) >>> next(my_iterator) 1 >>> next(my_iterator) 2 Iterators are stateful, meaning once you've consumed an item from them, it's gone. If you ask for the next item from an iterator and there are no more items, you'll get a StopIteration exception: >>> next(my_iterator) 3 >>> next(my_iterator) Traceback (most recent call last): File "<stdin>", line 1, in <module> StopIteration</module></stdin> So you can get an iterator from every iterable. The only thing you can do with iterators is ask them for their next item using the next function. And if you pass them to next but they don't have a next item, a StopIteration exception will be raised. You can think of iterators as'll try to manually loop over an iterable without using a for loop. We'll do so by attempting to turn this for loop into a while loop: def funky_for_loop(iterable, action_to_do): for item in iterable: action_to_do(item) def funky_for_loop(iterable, action_to_do): iterator = iter(iterable) done_looping = False while not done_looping: try: item = next(iterator) except StopIteration: done_looping = True else: action_to_do(item) We've just reinvented): for n in numbers: print(n) Multiple assignment also uses the iterator protocol: x, y, z = coordinates Star expressions use the iterator protocol: a, b, *rest = numbers print(*numbers) And many built-in functions rely on the iterator protocol: unique_numbers = set(numbers) Anything in Python that works with an iterable probably uses the iterator protocol in some way. Anytime you're looping over an iterable in Python, you're relying on the iterator protocol. Generators are iterators So you might be thinking: Iterators seem cool, but they also just seem like an implementation detail and we, as users of Python, might not need to care about them. I have news for you: It's very common to work directly with iterators in Python. The squares object here is a generator: >>> numbers = [1, 2, 3] >>> squares = (n**2 for n in numbers) And generators are iterators, meaning you can call next on a generator to get its next item: >>> next(squares) 1 >>> next(squares) 4 But if you've ever used a generator before, you probably know that you can also loop over generators: >>> squares = (n**2 for n in numbers) >>> for n in squares: ... print(n) ... 1 4 9: >>> numbers = [1, 2, 3] >>> iterator1 = iter(numbers) >>> iterator2 = iter(iterator1) Remember that iterables give us iterators when we call iter on them. When we call iter on an iterator it will always give us itself back: >>> iterator1 is iterator2 True Iterators are iterables and all iterators are their own iterators. def is_iterator(iterable): return iter(iterable) is iterable: >>> numbers = [1, 2, 3, 5, 7] >>> iterator = iter(numbers) >>> len(iterator) TypeError: object of type 'list_iterator' has no len() >>> iterator[0] TypeError: 'list_iterator' object is not subscriptable From our perspective as Python programmers, the only useful things you can do with an iterator are to pass it to the built-in next function or to loop over it: >>> next(iterator) 1 >>> list(iterator) [2, 3, 5, 7] And if we loop over an iterator a second time, we'll get nothing back: >>> list(iterator) [] You can think of iterators as lazy iterables that are single-use, meaning they can be looped over one time only. As you can see in the truth table below, holds. >>> letters = ['a', 'b', 'c'] >>> e = enumerate(letters) >>> e <enumerate object at 0x7f112b0e6510> >>> next(e) (0, 'a') In Python 3, zip, map, and filter objects are iterators too. >>> numbers = [1, 2, 3, 5, 7] >>> letters = ['a', 'b', 'c'] >>> z = zip(numbers, letters) >>> z <zip object at 0x7f112cc6ce48> >>> next(z) (1, 'a') And file objects in Python are iterators also. >>> next(open('hello.txt')) 'hello world\n' There are lots of iterators built into. class square_all: def __init__(self, numbers): self.numbers = iter(numbers) def __next__(self): return next(self.numbers) ** 2 def __iter__(self): return self But no work will be done until we start looping over an instance of this class. Here we have an infinitely long iterable count and you can see that square_all accepts count without fully looping over this infinitely long iterable: >>> from itertools import count >>> numbers = count(5) >>> squares = square_all(numbers) >>> next(squares) 25 >>> next(squares) 36 This iterator class works, but we don't usually make iterators this way. Usually when we want to make a custom iterator, we make a generator function: def square_all(numbers): for n in numbers: yield n**2 This generator function is equivalent to the class we made above, and it works essentially the same way. That yield statement probably seems magical, but it is very powerful: yield allows us to put our generator function on pause between calls from the next function. The yield statement is the thing that separates generator functions from regular functions. Another way we could implement this same iterator is with a generator expression. def square_all(numbers): return (n**2 for n in numbers): hours_worked = 0 for event in events: if event.is_billable(): hours_worked += event.duration Here is code that does the same thing by using a generator expression for lazy evaluation: billable_times = ( event.duration for event in events if event.is_billable() ) hours_worked = sum(billable_times) 10 lines of a log file: for i, line in enumerate(log_file): if i >= 10: break print(line) This code does the same thing, but we're using the itertools.islice function to lazily grab the first 10 lines of our file as we loop: from itertools import islice first_ten_lines = islice(log_file, 10) for line in first_ten_lines: print(line). current = readings[0] for next_item in readings[1:]: differences.append(next_item - current) current = next_item Notice that this code has an extra variable that we need to assign each time we loop. Also note that this code works only: def with_next(iterable): """Yield (current, next_item) tuples for each item in iterable.""" iterator = iter(iterable) current = next(iterator) for next_item in iterator: yield current, next_item current = next_item We're manually getting an iterator from our iterable, calling next on it to grab the first item, then looping over our iterator to get all subsequent items, keeping track of our last item along the way. This function works not just with sequences, but with any type of iterable. This is the same code as before, but we're using our helper function instead of manually keeping track of next_item: differences = [] for current, next_item in with_next(readings): differences.append(next_item - current). differences = [ (next_item - current) for current, next_item in with_next(readings) ] Looping gotchas revisited Now we're ready to jump back to those odd examples we saw earlier and try to figure out what was going on. Gotcha 1: Exhausting an iterator Here we have a generator object, squares: >>> numbers = [1, 2, 3, 5, 7] >>> squares = (n**2 for n in numbers) If we pass this generator to the tuple constructor, we'll get a tuple of its items back: >>> numbers = [1, 2, 3, 5, 7] >>> squares = (n**2 for n in numbers) >>> tuple(squares) (1, 4, 9, 25, 49) If we then try to compute the sum of the numbers in this generator, we'll get 0: >>> sum(squares) 0 This generator is now empty: we've exhausted it. If we try to make a tuple out of it again, we'll get an empty tuple: >>> tuple(squares) () Generators are iterators. And iterators are single-use iterables. They're like Hello Kitty Pez dispensers that cannot be reloaded. Gotcha 2: Partially consuming an iterator Again we have a generator object, squares: >>> numbers = [1, 2, 3, 5, 7] >>> squares = (n**2 for n in numbers) If we ask whether 9 is in this squares generator, we'll get True: >>> 9 in squares True But if we ask the same question again, we'll get False: >>> 9 in squares False When we ask whether 9 is in this generator, Python has to loop over this generator to find 9. If we kept looping over it after checking for 9, we'll only get the last two numbers because we've already consumed the numbers before this point: >>> numbers = [1, 2, 3, 5, 7] >>> squares = (n**2 for n in numbers) >>> 9 in squares True >>> list(squares) [25, 49] Asking whether something is contained in an iterator will partially consume the iterator. There is no way to know whether something is in an iterator without starting to loop over it. Gotcha 3: Unpacking is iteration When you loop over dictionaries you get keys: >>> counts = {'apples': 2, 'oranges': 1} >>> for key in counts: ... print(key) ... apples oranges You also get keys when you unpack a dictionary: >>> x, y = counts >>> x, y ('apples', 'oranges') related articles and videos I recommend: - Loop Like a Native, Ned Batchelder's PyCon 2013 talk - Loop Better, the talk this article is based on - The Iterator Protocol: How ForLoops Work, a short article I wrote on the iterator protocol - Comprehensible Comprehensions, my talk on comprehensions and generator expressions -. For more content like this, attend PYCON, which will be held May 9-17, 2018, in Columbus, Ohio. 4 Comments, Register or Log in to post a comment. I can understand something about pointing these issues out, but it seems in the process of making a long and repetitive article, you've only added to confusion. C++ isn't Perl isn't Python. You have to get a mindset of how each language approaches various logical situations, and not try to translate Perl to Python or vice versa. You also have to create code you can put aside and quickly understand when you pull it out a year or two later. Interesting topic. I got lost about what a "pez dispenser" is, but the article was informative. Thansk Thanks for article - Learnt a WHOLE lot as I never really understood iterators in Python. Using the Pez dispensers as a metaphor shows your age ;-) Thank you so much for writing this article! You've clearly explained a lot of things that I only had vague ideas about beforehand. This was very useful. I'm new to Python and it is sometimes hard to find explanations that don't assume the reader is an experienced developer. Thanks again!
https://opensource.com/article/18/3/loop-better-deeper-look-iteration-python
CC-MAIN-2021-17
refinedweb
2,464
54.86
link, linkat - link one file to another file #include <unistd.h> int link(const char *path1, const char *path2); [OH] #include <fcntl.h>#include <fcntl.h> int linkat(int fd1, const char *path1, int fd2, const char *path2, int flag);. - ] - [OB XSR] path1 refers to a named STREAM-2017., symlink, unlink XBD <fcntl.h>, . Austin Group Interpretation 1003.1-2001 #143 is applied. SD5-XSH-ERN-93 is applied, adding RATIONALE. The linkat() function is added from The Open Group Technical Standard, 2006, Extended API Set Part 2. Functionality relating to XSI STREAMS is marked obsolescent. Changes are made related to support for finegrained timestamps. The [EOPNOTSUPP] error is removed. POSIX.1-2008, Technical Corrigendum 1, XSH/TC1-2008/0354 [326], XSH/TC1-2008/0355 [461], XSH/TC1-2008/0356 [326], XSH/TC1-2008/0357 [324], XSH/TC1-2008/0358 [147,429], XSH/TC1-2008/0359 [277], XSH/TC1-2008/0360 [278], and XSH/TC1-2008/0361 [278] are applied. POSIX.1-2008, Technical Corrigendum 2, XSH/TC2-2008/0195 [873], XSH/TC2-2008/0196 [591], XSH/TC2-2008/0197 [817], XSH/TC2-2008/0198 [822], and XSH/TC2-2008/0199 [817] are applied. return to top of pagereturn to top of page
https://pubs.opengroup.org/onlinepubs/9699919799/functions/link.html
CC-MAIN-2019-47
refinedweb
203
78.55
It rained (again) in Georgia, so my wife left me alone in my lab to play yesterday XD I have been playing around with an Atmega1284P-PU on one of CrossRoad’s boards; for some reason, I decided to try the FFT algorithm. Searching the Forum and Google was interesting (not in an overly positive way), so when I finally got something to work based on an old forum post: I decided to clean-up the code a bit, add some small comments, and double-check the performance with my Rigol oscilloscope. The results of that is in this post. The last entry by Magician back in January of '11 was Unfortunately, there is a flow in the code, no FFT performed if you copy/paste as it is. :-[ My goal for this post was to fix this cut & paste issue. I have checked the following test code on UNO and Bobuino_1284 and it works fine using Arduino 1.0.5. To avoid having to set up the .h and the .cpp files as libraries, just put them into your Arduino sketch folder with the .INO file. I do this to allow me to make minor changes (ONLY IF Necessary) before migrating same to the standard \Arduino\Libraries folder. It is just a preference of mine and certainly is not necessary, but if you use the traditional approach, be sure to change #include "fix_fft.h" to #include <fix_fft.h> so that the GUI can find the files at compile time. Based on my testing, (I do not use bin #0, rather using the display space for the “L” and “R” indicating the Left and Right channels) bin#1 is tuned to 85Hz and the following 14 bins are: 120, 190, 240, 305, 365, 430, 480, 550, 625, 660, 735, 780, 850, and 910Hz. You will note that it is not exactly linear because we are not using an interrupt-driven paradigm for accurate timing; that is, the Analog inputs are just read in a loop. Accurate timing examples are given in some of the music note implementations of FFT use. Audio signal is approximately 1V P-P from the audio generator, fed through a 0.5uF (0.5MFD value not critical) to the junction of two (2) 10K resistors with the other ends of the resistors going to +5V and Gnd. This voltage-divider junction establishes the 2.5V balance to the AD port. Note, you must use a capacitor and two resistors for each of the two analog inputs. In my breadboard design, I used a 680 Ohm resistor between the voltage divider junction and the input to the uC analog port as a kind of safety to limit any mishap to around 7mA of current BUT this is not necessary in the final layout unless you just have a bunch of 680-1000 Ohm resistors laying around (I do.) Main code: /* FFT_TEST4 Ray Burnette 20130810 function clean-up & 1284 port (328 verified) Uses 2x16 Parallel LCD in 4-bit mode, see LiquidCrystal lib call for details Modified by varind in 2013: this code is public domain, enjoy! 328P = Binary sketch size: 5,708 bytes (of a 32,256 byte maximum) 1284P= Binary sketch size: 5,792 bytes (of a 130,048 byte maximum) Free RAM = 15456 Binary sketch size: 8,088 bytes (of a 130,048 byte maximum) (Debug) */ #include <LiquidCrystal.h> #include "fix_fft.h" // fix_fft.ccp & fix_fft.h in same directory as sketch #define DEBUG 0 #define LCHAN 1 #define RCHAN 0 const int Yres = 8; const int gain = 3; float peaks[64]; char im[64], data[64]; char Rim[64], Rdata[64]; char data_avgs[64]; int debugLoop; //LiquidCrystal(rs, enable, d4, d5, d6, d7) 1284P Physical: 6, 5, 4, 3, 2, 1 LiquidCrystal lcd(11, 10, 7, 6, 5, 4); // saves all analog pins port PA // Custom CHARACTERS byte v1[8] = { B00000,B00000,B00000,B00000,B00000,B00000,B00000,B11111}; byte v2[8] = { B00000,B00000,B00000,B00000,B00000,B00000,B11111,B11111}; byte v3[8] = { B00000,B00000,B00000,B00000,B00000,B11111,B11111,B11111}; byte v4[8] = { B00000,B00000,B00000,B00000,B11111,B11111,B11111,B11111}; byte v5[8] = { B00000,B00000,B00000,B11111,B11111,B11111,B11111,B11111}; byte v6[8] = { B00000,B00000,B11111,B11111,B11111,B11111,B11111,B11111}; byte v7[8] = { B00000,B11111,B11111,B11111,B11111,B11111,B11111,B11111}; byte v8[8] = { B11111,B11111,B11111,B11111,B11111,B11111,B11111,B11111}; void setup() { if (DEBUG) { Serial.begin(9600); // hardware serial Serial.print("Debug ON"); Serial.println(""); } lcd.begin(16, 2); lcd.clear(); lcd.createChar(1, v1); lcd.createChar(2, v2); lcd.createChar(3, v3); lcd.createChar(4, v4); lcd.createChar(5, v5); lcd.createChar(6, v6); lcd.createChar(7, v7); lcd.createChar(8, v8); } void loop() { for (int i = 0; i < 64; i++) { // 64 bins = 32 bins of usable spectrum data data[i] = ((analogRead(LCHAN) / 4 ) - 128); // chose how to interpret the data from analog in im[i] = 0; // imaginary component Rdata[i] = ((analogRead(RCHAN) / 4 ) - 128); // chose how to interpret the data from analog in Rim[i] = 0; // imaginary component } fix_fft(data, im, 6, 0); // Send Left channel normalized analog values through fft fix_fft(Rdata, Rim, 6, 0); // Send Right channel normalized analog values through fft // At this stage, we have two arrays of [0-31] frequency bins deep [32-63] duplicate // calculate the absolute values of bins in the array - only want positive values for (int i = 0; i < 32; i++) { data[i] = sqrt(data[i] * data[i] + im[i] * im[i]); Rdata[i] = sqrt(Rdata[i] * Rdata[i] + Rim[i] * Rim[i]); // COPY the Right low-band (0-15) into the Left high-band (16-31) for display ease if (i < 16) { data_avgs[i] = data[i]; } else { data_avgs[i] = Rdata[i - 16]; } // Remap values to physical display constraints... that is, 8 display custom character indexes + "_" data_avgs[i] = constrain(data_avgs[i], 0, 9 - gain); //data samples * range (0-9) = 9 data_avgs[i] = map(data_avgs[i], 0, 9 - gain, 0, Yres); // remap averaged values } Two16_LCD(); decay(1); } void Two16_LCD(){ lcd.setCursor(0, 0); lcd.print("L"); // Channel ID replaces bin #0 due to hum & noise lcd.setCursor(0, 1); lcd.print("R"); // ditto for (int x = 1; x < 16; x++) { // init 0 to show lowest band overloaded with hum int y = x + 16; // second display line if (data_avgs[x] > peaks[x]) peaks[x] = data_avgs[x]; if (data_avgs[y] > peaks[y]) peaks[y] = data_avgs[y]; lcd.setCursor(x, 0); // draw first (top) row Left if (peaks[x] == 0) { lcd.print("_"); // less LCD artifacts than " " } else { lcd.write(peaks[x]); } lcd.setCursor(x, 1); // draw second (bottom) row Right if (peaks[y] == 0){ lcd.print("_"); } else { lcd.write(peaks[y]); } } debugLoop++; if (DEBUG && (debugLoop > 99)) { Serial.print( "Free RAM = " ); Serial.println( freeRam(), DEC); Serial.println( millis(), DEC); debugLoop = 0; } } int freeRam () { extern int __heap_start, *__brkval; int v; return (int) &v - (__brkval == 0 ? (int) &__heap_start : (int) __brkval); } void decay(int decayrate){ int DecayTest = 1; // reduce the values of the last peaks by 1 if (DecayTest == decayrate){ for (int x = 0; x < 32; x++) { peaks[x] = peaks[x] - 1; // subtract 1 from each column peaks DecayTest = 0; } } DecayTest++; } Continued in next post... Ray
https://forum.arduino.cc/t/8-bit-fft-revisited-fix_fft/177790
CC-MAIN-2021-25
refinedweb
1,192
68.7
Jan 22, 2012 09:01 PM|GBotros|LINK can we use jquery ui with twitter-bootstrap ? or it will lead to css conflicts All-Star 36874 Points Jan 23, 2012 12:32 AM|bruce (sqlwork.com)|LINK jquery-ui is pretty good with css namespaces, but why would you use both. bootstaps main feature is being less based, and using css solutions, while jquery ui is javascript based. i mean if you were using less, how could you stand the using the jquery ui css? All-Star 36874 Points Jan 23, 2012 06:06 PM|bruce (sqlwork.com)|LINK as I said jquery-ui has pretty good name spacing for the css, but the libraries overlap in method names, button, tab, dialog. also there is so much overlap in css, button style, alert style, dialog style, tabs style, message style, etc. I don't see why you would use both. the accordian is trival, but there are several other datapickers you could use with bootstap. I keep hoping that someday, jquery-ui will have less support for their themes. note: while I've used and like jquery-ui, I don't think it has moved with the times very well. all the teams real effort has gone to jquery-mobile and jquery-ui has suffered. for example everyone (even jquery-mobile) has gone to html 5 attributes based selectors, but jquery-ui is still pseudo class based. other libraries are additive based (css first, javascript second), but jquery-ui is still javascript based. Jan 23, 2012 09:06 PM|GBotros|LINK and this helped me too (take care this approach does not seem to work with dialogs and datepicker) read comments carfully 4 replies Last post Jan 23, 2012 09:06 PM by GBotros
http://forums.asp.net/t/1761495.aspx?can+we+use+jquery+ui+with+twitter+bootstrap+
CC-MAIN-2015-14
refinedweb
294
80.11
As I mentioned in a previous blog, there are some pretty creative (and destructive) things people can do with your code if you’re not careful. Just as a kitchen knife can be used to cut cheese or to kill someone, so your code can be used to increase productivity or wreak digital havoc. It’s not all bad news though. There are several things you can do to help mitigate attacks based on code repurposing, although unfortunately there is no panacea and many of these techniques will not be applicable to all given scenarios. A much shorter article that mentions some key mitigating factors can be found in the VSTO documentation online. But if you’ve got a spare chunk of time, by all means read on. Thanks to Siew Moi, Mike Howard, and the Office security folks for giving this a once-over before posting Root Cause The root cause of repurposing comes from trusting input data. Michael Howard has a lot to say about this in Chapter 10 of Writing Secure Code 2nd Edition (aptly titled All Input is Evil), but in repurposing attacks it’s not just your code that is trusting potentially bad data, it’s the end-user, too! With a typical web-based application, checking your inputs isn’t so hard since you mostly need to worry about things like form fields, query strings, and HTTP headers. (You may also need to worry about any external systems you connect to, such as a database, if that figures into your threat model). In most cases, you can assume that the server your application is being hosted on is trustworthy, along with any other data on it such as configuration files, web page content, and so on. The problem with Office-based development is that your code is hosted inside a very dynamic, very mobile container (the document), and in general you cannot assume that any part of that document’s content is trustworthy. Not the warning text you put in BIG BOLD WARNING TEXT at the top of the document. Not the names or positions of the controls on the document. Not the hidden worksheets where you store database connection strings. Not the configuration settings you put inside a hidden region with white text on a white background in 1 point WingDings font. Nothing. So our goal is to do one or more — always think of “defence in depth” –Â of the following things: - Eliminate our dependence on information inside the document - Ensure that the information in the document is trustworthy - Check that the information inside the document falls within a “reasonable” range - Provide cues to the user about the purpose of the code they are running - Elicit explicit user consent before performing potentially harmful actions - Probably some more things here! So let’s look at these in turn. Eliminate dependence on untrustworthy information In my previous blog, I used the example of a “Format my Drive” document that contained instructions for formatting your hard drive, along with Yes and No buttons that invoked the appropriate code. In this sample, the code doesn’t make any decisions based on inputs (it has none), but the user does. The code assumes that the user will always be presented with adequate contextual information (eg, “clicking this button will format your drive!”), but this is not the case. The user has no idea that there is no link between the text surrounding the buttons and the actions the buttons take; they think if the text says “Download unlimited free MP3s now!” then that’s what the button will do. Oops! So even in this case, the code implicitly trusts its “inputs” (the user invoking the code) and it should not be. Something as dangerous as a “Format my Drive” control should provide additional feedback to the user about the actions they are about to perform, as noted in one of the next sections of this It’s also a good idea to think about the things you can depend on. Resources on the local machine, such as files or registry keys, should in general be trustworthy (see note below!). So should other servers that your code communicates with, as long as they are under your control. And of course you can trust your own code not to have been tampered with, as long as it is signed or living in a secure location. NOTE: I said above that you can trust the local machine. This assumes we are trying to mitigate against repurposing attacks where one user sends a malformed document to another user in an attempt to get them to do something harmful to themselves. You cannot trust resources on the client computer in other scenarios, such as when you are building a server-side solution, because then a hostile client could be used to subvert your security system. Servers need to protect themselves from malicious clients, and clients need to protect themselves from malicious servers. Different scenarios call for different threat models and different mitigation strategies. So if your solution needs to persist data such as user preferences or other snippets of information that you come to rely on, you can write it to the registry or to a config file (not in %ProgramFiles%, but in %UserProfile%) or use some other mechanism to persist it. Just don’t persist it in the document, because then you’ll never be able to read it out again without worrying about the contents. Contacting servers is a bit trickier. You might want to contact, say, a trusted web service on to download information, but if you hard-code that URL into your solution then you will have trouble when you need to move the service to another machine. The obvious answer is to stick the URL of the server inside the document… but you can’t do that because it’s not trustworthy! This is where you could do something like an Active Directory lookup to figure out which server to contact. I’ve been told it’s possible, but I’m not an AD expert so you’ll have to figure that one out on your own ;-). Ensure the information is trustworthy Let’s say you absolutely have to rely on the information inside the document, for whatever reason. Now you have to make sure the content in the document is as trustworthy as your code itself, which means severely limiting what can be done with the document. Two properties of code that makes it “easy” to secure in the CLR are that: i) You can sign the code, which can be used to detect any modifications made to it ii) You can keep it in a fixed location, which can be used to ensure nobody can modify / overwrite it Unfortunately, neither of these two properties apply to documents in the general sense. The whole point of having document-based solutions is so that you can modify them (thereby making signatures pretty useless) and that you can send them around to people, copy them to your hard drive, and so on (thereby making location pretty useless). Nevertheless, depending on the scenario you may be able to do one of these things in your solutions. in Office 2003 is pretty easy — go to Tools -> Options -> Security and click on the Digital Signatures button, where you can select one of your certificates and add your signature to the document. Once the document is signed, no-one can tamper with it in any way without breaking the signature, but as I mentioned in my Old Fashioned Security blog, there’s a big problem with signing content: - Unless the recipient expects the document to be signed (and knows how to verify it), it doesn’t really help you. An attacker will just remove the signature altogether and go on their wicked way So either you have to educate all your users about how to inspect digital signatures, or you can do something about it yourself. In Word, you can use the object model inside your code to check if the active document contains a signature, and if so if it is from the right person (ie, you) and if it is valid. Inside your startup code, you can do something similar to the following to ensure that your code is running inside a genuine document that you created (if Rob had finished the WordML to HTML transform by now, this would be in glorious Technicolor, but alas he hasn’t so it’s not): Private Sub ThisDocument_Open() Handles ThisDocument.Open Dim signature As Office.Signature Dim foundSignature As Boolean = False If (ThisDocument.Signatures.Count < 1) Then MessageBox.Show(“This document is not signed.”) ‘ Take appropriate action here… ThisDocument.Close() End If For Each signature In ThisDocument.Signatures ‘ Bail out early on invalid signatures If ((signature.IsValid = False) Or _ (signature.IsCertificateRevoked = True)) Then MessageBox.Show(“This document’s signature is bad.”) ThisDocument.Close() End If ‘ Add the appropriate strings here If ((signature.Signer = “ACME Corp”) And _ (signature.Issuer = “Verisign”)) Then ‘ You could also check the sign date if you want MessageBox.Show(“This document is signed correctly.”) foundSignature = True Exit For End If If (foundSignature = False) Then MessageBox.Show(“This document’s signature is missing.”) ‘ Take appropriate action here… ThisDocument.Close() End If End Sub There’s a bit of a chicken-and-egg problem here for both VBA and VSTO developers here, but for different reasons. With VBA, because the code is included in the document’s signature you have to sign the document after you have written and tested your code. This could require you to build, test, and re-sign your document many times, or it could require you to have some kind of mode where the document ignores invalid signatures while in development (remember, this cannot be a flag stored in the document itself, but it could be a registry key you use to disable signature checking on your dev box). With VSTO, the assembly is built and signed independently from the document, but since the custom properties which link to the assembly form part of the signature, it is not possible to move the assembly after the document is signed, so that must be done as the final step before officially releasing the document. This “problem” of having the code (or the link to the code) be part of the signature actually helps us prevent some attacks as well. Imagine, as outlined in my previous blog, that you have a Budget document and an HR document. Both documents are signed, and both assemblies are signed, but by some horribly bad coincidence, it’s possible to point the HR document at the Budget code (or vice-versa) and do some damage to the document recipient. This will not work, since swapping out the code (or the link to the code) will break the signature. The only real attack possible here is if you can take over the URL at which the linked code lives, in which case you could replace the HR assembly with the Budget assembly. So you need to make sure your servers are protected from having unauthorised people uploading content! One potential problem here is that the Signature object in Word only tells you the name of the signer and the authority that issues that certificate, but not the rest of the trust chain up to the root authority. It is possible (but unlikely) to have two signatures, both created by “Bob Smith” and issued by “ACME Corp Authority”, but for them to be two different Bob Smiths issued from two different ACME Corp Authorities. (Thankfully you still have to trust the root authority, so it’s not as if any old hacker could create such a hierarchy in order to fool your code, but it’s something to keep in mind). Siew Moi has written a great article on Word signatures, and it includes a download that has some more code to play with. Unfortunately, there is no programmatic support for checking digital signatures in Excel, so you cannot use this technique. Also, you may not have the money to buy a certificate from Verisign or Thawte or one of the other big vendors, and you may not have your own PKI servers with which to issue your own certificates. In this case, you can “tie” your document to a specific location, such as a trusted (and read-only) share on a server, or to a specific “install” location on the user’s machine. (This is similar to how the SiteLock feature works for ActiveX controls). At startup, you can check that the document’s location is where it is supposed to be, and take appropriate actions (such as closing the document) if the document comes from a suspicious location. This has the problem of baking URLs or other locations into your code, which means it’s impossible to ever move the solution if you need to, so you may want to use a Registry key or Active Directory property or some other external (but still trusted) mechanism to figure out if the document is hosted in a secure location: Private Sub ThisDocument_Open() Handles ThisDocument.Open If (ThisDocument.FullName <> _ “\appserversecuremyapp.doc”) Then MessageBox.Show(“This document is not secure”) ThisDocument.Close() End If End Sub Obviously if you require your document to be signed, that means the user can never actually modify it and then save it. And if you require it to be in a specific location, that means the user can’t put it on their desktop or mail it to a friend. That’s kind of a problem for most documents, since they are designed to be modified and saved and moved around. (I show a slightly less draconian version of this in the next section). One way around this is to have your “document” act like a mini application hosted inside Word or Excel, and have it load and save its own “documents”. This is made super-easy by the XML support in Word and Excel 2003, where you can import or export the XML content of a document to a separate file without actually saving all the markup and other stuff that makes up your “application”. As long as your solution lends itself well to having only the data saved and loaded into your static template, you should be good to go — just provide your own “Open” and “Save” mechanisms inside your document that load and save XML files that are pumped right into the mapped XML structure of your document. Here’s some sample code for doing it in Excel — Word is harder since you can’t automatically import XML into a Word document; you need to load the DOM yourself and insert text into each node of the document. For this simple solution, you can create a spreadsheet with pretty formatting, etc. and then do the following: 1) Map an XML Schema into the document (a really basic one follows if you need it) 2) Add a single Command Button from the Controls Toolbox toolbar. Open the Properties window for it and give it a name of cmdLoad. Delete the Caption so it is blank. 3) (Optional) Digitally sign the document with a certificate to prove that you can do this without modifying the content Here’s a simple schema you can map to a spreadsheet. Save it with an extension of .XSD <?xml version=”1.0″ encoding=”utf-8″ ?> <xs:schema id=”simple” targetNamespace=”” elementFormDefault=”qualified” xmlns=”” xmlns:mstns=”” xmlns:xs=””> <xs:element name=”SimpleTest”> <xs:complexType> <xs:sequence> <xs:element name=”FirstName” type=”xs:string” /> <xs:element name=”LastName” type=”xs:string” /> </xs:sequence> </xs:complexType> </xs:element> </xs:schema> And here’s the code you can use – just dump it into the default codespit of a VSTO solution, nuking the original ThisWorkbook_Open handler: Private WithEvents cmdLoad As MSForms.CommandButton ‘ Called when the workbook is opened. Private Sub ThisWorkbook_Open() Handles ThisWorkbook.Open InitialiseUI() End Sub Private Sub InitialiseUI() cmdLoad = FindControl(“cmdLoad”) If (cmdLoad Is Nothing) Then MsgBox(“Document is corrupt!”) End If cmdLoad.Caption = “Load XML” cmdLoad.AutoSize = True End Sub Private Sub cmdLoad_Click() Handles cmdLoad.Click LoadXml() End Sub Private Sub LoadXml() ‘ Gather the filename through a custom dialog, etc. Dim filename As String = “C:tempexport.xml” ThisWorkbook.XmlMaps(1).Import(filename) ‘ Don’t prompt user to save ThisWorkbook.Saved = True End Sub Private Sub SaveXml() ‘ Gather the filename through a custom dialog, etc. Dim filename As String = “C:tempexport.xml” ThisWorkbook.XmlMaps(1).Export(filename, True) ‘ Don’t prompt user to save ThisWorkbook.Saved = True End Sub ‘ Handle the Save event to just save the data Private Sub ThisWorkbook_BeforeSave( _ ByVal SaveAsUI As Boolean, ByRef Cancel As Boolean) _ Handles ThisWorkbook.BeforeSave SaveXml() Cancel = False End Sub Another way to ensure that the information in the document is trustworthy without resorting to signatures or site-locking your document is to ensure that you create it all yourself, from within your code, and that you make it easily understandable by the user. For example, if you have a button on a Word document that will format the hard drive when it is clicked, don’t just give the button a caption of Yes and rely on the surrounding text to explain what it does. A more descriptive caption, such as Format Drive And don’t just set the Caption of the button using the property grid in Word or Excel, since that value can be changed by the attacker and is not stored as part of your VBA or .NET assembly. Even if your code is signed, the attacker can change that property and not invalidate the signature (it will invalidate the signature of the document, if it has one, but it won’t invalidate the signature of the code). Instead, you should explicitly set the caption of your button during your initialisation code, so that it always says what you want it to say. This is what I did above in the sample Excel code. Don’t rely on automatically generating warning text into the document though (eg, injecting “This will format your drive!” in big red letters at the start of the document), since no matter what you do the bad guys will probably figure out a way of obscuring it (hiding the region, placing a white bitmap over the top of the text, etc.). If you need more UI than you can get by putting captions on your buttons, then you need to implement some kind of form (see below) or <gasp> build an ActiveX control where you can display all your information in a reliable manner. If you don’t have to have your control hosted directly on the document surface, a SmartDocument solution offers a really good alternative because the attacker has no way to influence what appears in the Task Pane. You can be sure that any warning text or other UI you place in the Task Pane will be there when the user runs the solution. Another way to avoid “untrustworthy” callers in VBA is to make your methods Private (VSTO does not have this problem since there is no way through the Office UI to invoke managed code). If you are placing controls on the document surface, make sure you use ActiveX controls (from the “Control Toolbox” toolbar), not Forms controls (from the “Forms” toolbar). Adding an event handler to an ActiveX control creates a private method that cannot be invoked by any means other than firing the event on the named control, whereas the macros you assign to Forms controls are just public subroutines that can be hooked up to any old event source. Also, since Forms controls aren’t programmable, you can’t programmatically change their content (as described above). If you are building CommandBars or menu items, don’t use the Office UI to hookup event handlers, since again they must be public and hence repurposable (swap the “Save” event handler for the “Delete” one…). Instead you should build up the CommandBar programmatically (so you can set the text and images on each control as outlined above) and declare all your controls inside your solution WithEvents so you can handle the events with a private handler, not a public sub. There is some sample code for VSTO in this article, and the sample solutions that ship with the product also show how to create menus and toolbars. Some sample VBA code is below: Private WithEvents cmdSave As CommandBarButton Private WithEvents cmdDelete As CommandBarButton Private Sub Document_Open() CreateCommandBar End Sub Private Sub CreateCommandBar() Dim bar As CommandBar Dim i As Integer Dim oldContext As Object On Error GoTo Handler ‘ Delete any existing instances of the bar ‘ Must loop backwards since we’re deleting items For i = CommandBars.Count To 1 Step -1 Set bar = CommandBars(i) If ((bar.BuiltIn = False) And _ (bar.Name = “My Bar”)) Then bar. End If ‘ Create our own temporary bar Set bar = CommandBars.Add(“My Bar”, , , True) bar.Visible = True ‘ Add new buttons Set cmdSave = bar.Controls.Add(msoControlButton) cmdSave.Caption = “Save” cmdSave.Style = msoButtonCaption Set cmdDelete = bar.Controls.Add(msoControlButton) cmdDelete.Caption = “Delete” cmdDelete.Style = msoButtonCaption ‘ Exit the sub Return Handler: MsgBox “CommandBar not created:” & _ vbNewLine & _ Err.Description, _ vbOKOnly Or vbInformation, “Error” End Sub Private Sub btw, forgot to mention besides really liking this blog entry — think it’s extremely well written; very useful guidelines and great code repurposing mitigation steps… i also like the shortcut key tips very much. Do please keep them coming. Thanks a lot Peter for all the great blogs and sharing your vast and in-depth knowledge on security, coding, etc… Really appreciate it.
https://blogs.msdn.microsoft.com/ptorr/2003/10/21/mitigating-code-repurposing-attacks/
CC-MAIN-2016-22
refinedweb
3,627
56.79
Runtime compiler APIsRuntime compiler APIs ⚠️ The runtime compiler API is unstable (and requires the --unstableflag to be used to enable it). The runtime compiler API allows access to the internals of Deno to be able to type check, transpile and bundle JavaScript and TypeScript. As of Deno 1.7, several disparate APIs we consolidated into a single API, Deno.emit(). Deno.emit()Deno.emit() The API is defined in the Deno namespace as: function emit( rootSpecifier: string | URL, options?: EmitOptions, ): Promise<EmitResult>; The emit options are defined in the Deno namespace as: interface EmitOptions { /** Indicate that the source code should be emitted to a single file * JavaScript bundle that is a single ES module (`"module"`) or a single * file self contained script we executes in an immediately invoked function * when loaded (`"classic"`). */ bundle?: "module" | "classic"; /** If `true` then the sources will be typed checked, returning any * diagnostic errors in the result. If `false` type checking will be * skipped. Defaults to `true`. * * *Note* by default, only TypeScript will be type checked, just like on * the command line. Use the `compilerOptions` options of `checkJs` to * enable type checking of JavaScript. */ check?: boolean; /** A set of options that are aligned to TypeScript compiler options that * are supported by Deno. */ compilerOptions?: CompilerOptions; /** An [import-map]( * which will be applied to the imports. */ importMap?: ImportMap; /** An absolute path to an [import-map]( * Required to be specified if an `importMap` is specified to be able to * determine resolution of relative paths. If a `importMap` is not * specified, then it will assumed the file path points to an import map on * disk and will be attempted to be loaded based on current runtime * permissions. */ importMapPath?: string; /** A record of sources to use when doing the emit. If provided, Deno will * use these sources instead of trying to resolve the modules externally. */ sources?: Record<string, string>; } The emit result is defined in the Deno namespace as: interface EmitResult { /** Diagnostic messages returned from the type checker (`tsc`). */ diagnostics: Diagnostic[]; /** Any emitted files. If bundled, then the JavaScript will have the * key of `deno:///bundle.js` with an optional map (based on * `compilerOptions`) in `deno:///bundle.js.map`. */ files: Record<string, string>; /** An optional array of any compiler options that were ignored by Deno. */ ignoredOptions?: string[]; /** An array of internal statistics related to the emit, for diagnostic * purposes. */ stats: Array<[string, number]>; } The API is designed to support several use cases, which are described in the sections below. Using external sourcesUsing external sources Using external sources, both local and remote, Deno.emit() can behave like deno cache does on the command line, resolving those external dependencies, type checking those dependencies, and providing an emitted output. By default, Deno.emit() will utilise external resources. The rootSpecifier supplied as the first argument will determine what module will be used as the root. The root module is similar to what you would provide on the command line. For example if you did: > deno run mod.ts You could do something similar with Deno.emit(): try { const { files } = await Deno.emit("mod.ts"); for (const [fileName, text] of Object.entries(files)) { console.log(`emitted ${fileName} with a length of ${text.length}`); } } catch (e) { // something went wrong, inspect `e` to determine } Deno.emit() will use the same on disk cache for remote modules that the standard CLI does, and it inherits the permissions and cache options of the process that executes it. If the rootSpecifier is a relative path, then the current working directory of the Deno process will be used to resolve the specifier. (Not relative to the current module!) The rootSpecifier can be a string file path, a string URL, or a URL. Deno.emit() supports the same protocols for URLs that Deno supports, which are currently file, and data. Providing sourcesProviding sources Instead of resolving modules externally, you can provide Deno.emit() with the sources directly. This is especially useful for a server to be able to provide on demand compiling of code supplied by a user, where the Deno process has collected all the code it wants to emit. The sources are passed in the sources property of the Deno.emit() options argument: const { files } = await Deno.emit("/mod.ts", { sources: { "/mod.ts": `import * as a from "./a.ts";\nconsole.log(a);\n`, "/a.ts": `export const a: Record<string, string> = {};\n`, }, }); When sources are provided, Deno will no longer look externally and will try to resolve all modules from within the map of sources provided, though the module resolution follow the same rules as if the modules were external. For example all module specifiers need their full filename. Also, because there are no media types, if you are providing remote URLs in the sources, the path should end with the appropriate extension, so that Deno can determine how to handle the file. Type checking and emittingType checking and emitting By default, Deno.emit() will type check any TypeScript (and TSX) it encounters, just like on the command line. It will also attempt to transpile JSX, but will leave JavaScript "alone". This behavior can be changed by changing the compiler options. For example if you wanted Deno to type check your JavaScript as well, you could set the checkJs option to true in the compiler options: const { files, diagnostics } = await Deno.emit("./mod.js", { compilerOptions: { checkJs: true, }, }); The Deno.emit() result provides any diagnostic messages about the code supplied. On the command line, any diagnostic messages get logged to stderr and the Deno process terminates, but with Deno.emit() they are returned to the caller. Typically you will want to check if there are any diagnostics and handle them appropriately. You can introspect the diagnostics individually, but there is a handy formatting function available to make it easier to potentially log the diagnostics to the console for the user called Deno.formatDiagnostics(): const { files, diagnostics } = await Deno.emit("./mod.ts"); if (diagnostics.length) { // there is something that impacted the emit console.warn(Deno.formatDiagnostics(diagnostics)); } BundlingBundling Deno.emit() is also capable of providing output similar to deno bundle on the command line. This is enabled by setting the bundle option to "module" or "classic". Currently Deno supports bundling as a single file ES module ( "module") or a single file self contained legacy script ( "classic"). const { files, diagnostics } = await Deno.emit("./mod.ts", { bundle: "module", }); The files of the result will contain a single key named deno:///bundle.js of which the value with be the resulting bundle. ⚠️ Just like with deno bundle, the bundle will not include things like dynamic imports or worker scripts, and those would be expected to be resolved and available when the code is run. Import mapsImport maps Deno.emit() supports import maps as well, just like on the command line. This is a really powerful feature that can be used even more effectively to emit and bundle code. Because of the way import maps work, when using with Deno.emit() you also have to supply an absolute URL for the import map. This allows Deno to resolve any relative URLs specified in the import map. This needs to be supplied even if the import map doesn't contain any relative URLs. The URL does not need to really exist, it is just feed to the API. An example might be that I want to use a bare specifier to load a special version of lodash I am using with my project. I could do the following: const { files } = await Deno.emit("mod.ts", { bundle: "module", importMap: { imports: { "lodash": " }, }, importMapPath: "", }); ⚠️ If you are not bundling your code, the emitted code specifiers do not get rewritten, that means that whatever process will consume the code, Deno or a browser for example, would need to support import maps and have that map available at runtime. Skip type checking/transpiling onlySkip type checking/transpiling only Deno.emit() supports skipping type checking similar to the --no-check flag on the command line. This is accomplished by setting the check property to false: const { files } = await Deno.emit("./mod.ts", { check: false, }); Setting check to false will instruct Deno to not utilise the TypeScript compiler to type check the code and emit it, instead only transpiling the code from within Deno. This can be significantly quicker than doing the full type checking. Compiler optionsCompiler options Deno.emit() supports quite a few compiler options that can impact how code is type checked and emitted. They are similar to the options supported by a configuration file in the compilerOptions section, but there are several options that are not supported. This is because they are either meaningless in Deno or would cause Deno to not be able to work properly. The defaults for Deno.emit() are the same defaults that are on the command line. The options are documented here along with their default values and are built into the Deno types. If you are type checking your code, the compiler options will be type checked for you, but if for some reason you are either dynamically providing the compiler options or are not type checking, then the result of Deno.emit() will provide you with an array of ignoredOptions if there are any. ⚠️ we have only tried to disable/remove options that we know won't work, that does not mean we extensively test all options in all configurations under Deno.emit(). You may find that some behaviors do not match what you can get from tscor are otherwise incompatible. If you do find something that doesn't work, please do feel free to raise an issue.
https://deno.land/[email protected]/typescript/runtime
CC-MAIN-2022-21
refinedweb
1,593
56.35