You can read the post on the La Minute blog for more information about RSS!
Feeds
10426 items (6 unread) in 53 feeds

- Décryptagéo, l'information géographique (4 unread)
- Cybergeo
- Revue Internationale de Géomatique (RIG)
- SIGMAG & SIGTV.FR - Un autre regard sur la géomatique (1 unread)
- Mappemonde
- Imagerie Géospatiale
- Toute l’actualité des Geoservices de l'IGN
- arcOrama, un blog sur les SIG, ceux d'ESRI en particulier
- arcOpole - Actualités du Programme
- Géoclip, le générateur d'observatoires cartographiques
- Blog GEOCONCEPT FR

- Géoblogs (GeoRezo.net)
- Geotribu
- Les cafés géographiques
- UrbaLine (le blog d'Aline sur l'urba, la géomatique, et l'habitat)
- Séries temporelles (CESBIO) (1 unread)
- Datafoncier, données pour les territoires (Cerema)
- Cartes et figures du monde
- SIGEA: actualités des SIG pour l'enseignement agricole
- Data and GIS tips
- Neogeo Technologies
- ReLucBlog
- L'Atelier de Cartographie
- My Geomatic
- archeomatic (le blog d'un archéologue à l’INRAP)
- Cartographies numériques
- Veille cartographie
- Makina Corpus
- Oslandia
- Camptocamp
- Carnet (neo)cartographique
- Le blog de Geomatys
- GEOMATIQUE
- Geomatick
- CartONG (actualités)
Planet Geospatial - http://planetgs.com
-
0:50
Moving…
on Planet Geospatial - http://planetgs.com
I'm not saying I'll never post here again, but I think this blog has run its course. Follow me at [jamesfee.org]. Part of this is what to do about Twitter after some nutty billionaire ruins it, and part is about this conversation:
I am excited about the potential return of RSS and blogs.
— Andrew Turner (@ajturner@nullisland.social) (@ajturner) April 25, 2022
The worst case is everyone moves to “newsletters” – dark archives that aren’t durable and discoverable.
Update your RSS: [https:]]
Subscribe via weekly email: [https:]]
Follow on micro.blog: [https:]]
-
22:59
A GIS Degree
on Planet Geospatial - http://planetgs.com
My son decided to change majors from biodesign to GIS. I had a short moment when I almost told him not to bring all this on himself, but then I thought differently: I could use my years of experience to help him get the perfect degree in GIS, get a great job, and still do what he wants.
He’s one semester into the program so he really hasn’t taken too many classes. There has been the typical Esri, SPSS and Google Maps discussion, but nothing getting into the weeds. Plus he’s taking Geography courses as well so he’s got that going for him. Since he’s at Arizona State University, he’s going through the same program as I did, but it’s a bit different. When I was at ASU, Planning was in the Architectural College. Now it’s tied with Geography in a new School of Geographical Sciences & Urban Planning.
I have to be honest, this is smart; I started my GIS career working for a planning department at a large city. The other thing I noticed is that a ton of my professors are still teaching. I mean, how awesome is that? I suddenly don't feel so old anymore.
I've stayed out of his classes for the past semester in hopes that he can form his own thoughts on GIS and its applicability. I probably will continue to help him focus on where to spend his electives (more Computer Science and less History of the German Empire 1894-1910). He's such a smart kid; I know he's going to do a great job. He's the one who spent time at the Esri UC Kids Fair back when I used to go to the User Conference, and now he could be getting paid to use Esri software, or whatever tool best accomplishes his goals.
I plan to show him the Safe FME Minecraft Reader/Writer.
-
19:15
GIS and Monitors
on Planet Geospatial - http://planetgs.com
If there is one constant in my GIS career, it is my interest in the monitor I'm using. From the days of being happy with a "flat screen" Trinitron monitor to now, with curved flat screens, so much has changed. My first GIS Analyst position probably had the worst monitor in the history of monitors. I can't recall the name, but it had a refresh rate probably comparable to what was seen in the 1960s. It didn't have great color balance either, so I ended up printing out a color swatch pattern from ArcInfo and taping it to my wall so I could know which color was which.
I stared for years at this monitor. No wonder I need reading glasses now!
Eventually I moved up in the world to where I no longer got hand-me-down hardware and started to get my first new equipment. The company I worked for at the time shifted between Dell and HP for hardware, but generally it was dual 21″ Trinitron CRTs. For those too young to remember, they were the size of a small car and put off enough heat and radiation to probably shorten my life by 10 years. Yet I could finally count on them being color corrected by hardware and software, and not feel like I was color blind.
It wasn’t sexy but it had a cool look to it. You could drop it flat to write on it like a table.
Over 11 years ago, I was given a Wacom DTU-2231 to test. You can read more about it at that link, but it was quite the monitor. I guess the biggest change between now and then is how little that technology took off. If you had asked me in 2010, right after reading that post, what we'd be using in 2020, I would have said such technology would be everywhere. Yet we don't see stylus-based monitors much at all.
These days my primary monitor is an LG UltraFine 24″ 4K. I pair it with another 24″ 4K monitor that I've had for years. Off to the other side is a generic Dell 24″ monitor my company provided. I find this setup works well for me; gone are the days when I had ArcCatalog and ArcMap open on two different monitors. Alas, two of the monitors are devoted to Outlook and WebEx Teams, a sign of my current workload.
I've always felt that GIS people care more about monitors than most. A developer might be more interested in a Spotify plugin for their IDE, but a GIS Analyst cares most about the biggest, brightest, crispest monitor they can get their hands on. I don't always use FME Workbench these days, but when I do, it is full screen on the most beautiful monitor I can have. Seems perfect to me.
-
17:00
Are Conferences Important Anymore?
on Planet Geospatial - http://planetgs.com
Hey, SOTM is going on; I didn't even know. The last SOTM I went to was in 2013, which was a blast. But I have to be honest: not only did this slip my mind, none of my feeds highlighted it to me. Not only that, apparently Esri is having a conference soon. (Wait for me to go ask Google when it is.) OK, they are having it next week. I used to be the person who went to as much as I could, either attending or being invited to keynote. The last Esri UC I went to was in 2015, 6 years ago. As I said, SOTM was 2013. FOSS4G, 2011. I had to look it up: the last conference I attended that had any GIS in it was the 2018 Barcelona Smart City Expo.
So with the world opening back up, or maybe not, given whatever Greek letter variant we are dealing with right now, I've started to think about what I might want to attend and the subject matter. At the end of the day, I feel like I got more value out of the conversations outside the convention center than inside, so I'll probably go wherever I see a good subset of smart people hanging out. That's why those old GeoWeb conferences that Ron Lake put on were so amazing: meeting a ton of smart people and enjoying the conversations, rather than reading PowerPoint slides in a dimly lit room.
Hopefully we can get back to that; I just need to keep my eye out.
-
17:57
Unreal and Unity are the new Browsers
on Planet Geospatial - http://planetgs.com
Someone asked me why I hadn't commented on Cesium and Unreal getting together. Honestly, no reason. This is big news. HERE, where I work, is teaming up with Unity to bring the Unity SDK and the HERE SDK to automotive applications. I talked about how we used the Mapbox Unity SDK at Cityzenith (though I have no clue if they still do). Google and Esri have them too. In fact, both the Unreal and Unity marketplaces are littered with data sources you can plug in.
HERE Maps with Unity
This gets at the core of what these two platforms could be. Back in the day we had two browsers, Firefox and Internet Explorer 6. Inside each we had many choices of mapping platforms to use, from Google and Bing to MapQuest and Esri. In the end, that competition to make the best API/SDK for a mapping environment drove a ton of innovation. What Google Maps looks like and does in 2021 vs. 2005 is amazing.
This brings up the key to what I see happening here. We'll see the mapping companies (or companies that have mapping APIs) deliver key updates to these SDKs (which today are pretty limited in scope) because they have to stay relevant. Not that web mapping is going away at any point, but true 3D worlds and true Digital Twins require power that browsers cannot provide, even in 2021. So this rush to become the Google Maps of 3D engines is real and will be fun to watch.
It is interesting that Google is an also-ran in the 3D engine space, so there is so much opportunity for the players who have invested and continue to invest in these markets without Google throwing unlimited R&D dollars against it. Of course, it only takes one press release to change all that, so don't bet against Google.
-
13:00
Arrays in GeoJSON
on Planet Geospatial - http://planetgs.com
So my last post was very positive. I figured out how to relate the teams that share a stadium with the stadium itself. This was important because I wanted to eliminate the redundant points that were stacked on top of each other. For those who don't recall, I have an example in this gist:
Now I mentioned that there were issues displaying this in GIS applications and was promptly told I was doing this incorrectly:
An array of <any data type> is not the same as a JSON object consisting of an array of JSON objects. If it would have been the first, I'd have pointed you (again) to QGIS and this widget trick [https:]] .
— Stefan Keller (@sfkeller) April 4, 2021
If you click on that tweet you'll see that, basically, you can't do it the way I want, and I have to go back to the way I was doing it before:
Unfortunately, the best way is to denormalise. Redundant location in many team points.
— Alex Leith (@alexgleith) April 4, 2021
I had a conversation with Bill Dollins about it and he sums it up succinctly:
I get it, but “Do it this way because that’s what the software can handle” is an unsatisfying answer.
So I'm stuck. I honestly don't care if QGIS can read the data, because it can; it just isn't optimal. What I do care about is an organized dataset in GeoJSON. So my question, to which I can't get a definitive answer, is: "is the array I have above valid GeoJSON?" From what I've seen, yes. But nobody wants to go on record as saying absolutely. I could say the hell with it and move forward, but I don't want to go down a dead-end road.
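For readers who can't see the gist, the structure being debated is a "properties" member along these lines (a hedged fragment with hypothetical field names, not the actual gist):

"properties": {
  "name": "Salt River Fields at Talking Stick",
  "teams": [
    { "name": "Arizona Diamondbacks" },
    { "name": "Colorado Rockies" }
  ]
}

The question, then, is whether consumers must preserve that "teams" array of objects or may flatten it into a string.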
-
21:42
GeoJSON Ballparks as JSON
on Planet Geospatial - http://planetgs.com
In a way it is good that Sean Gillies doesn't follow me anymore, because I could hear his voice in my head as I was trying to do something really stupid with the project. But Sheldon helps frame what I should be doing with what I was doing:
tables? what the? add , teams:[{name:"the name", otherprop: …}, {name:…}] to each item in the ballparks array and get that relational db BS out of your brain
— Sheldon (@tooshel) April 2, 2021
Exactly! What the hell? Why was I trying to do something so stupid when the whole point of this project is baseball ballparks in GeoJSON? Here is the problem in a nutshell and how I solved it. First off, let us simplify the problem down to just one ballpark. Salt River Fields at Talking Stick is the Spring Training facility for both the Arizona Diamondbacks and the Colorado Rockies. Not only that, but there are Fall League and Rookie League teams playing there, and probably even more that I still haven't researched. Anyway, this is what GeoJSON Ballparks looks like today when you just want to see that one stadium.
Let's just say I backed myself into this corner by starting with only MLB ballparks, none of which at the time were shared between teams.
It's a mess, right? Overlapping points and so many opportunities to screw up names. So my old-school thought was to just create a one-to-many relationship between the GeoJSON points and some external table. Madness! Seriously, what was I thinking? Sheldon is right, I should be using a JSON array for the teams. Look how much nicer it all looks when I do this!
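Here is a minimal sketch of that structure (one feature, with hypothetical property names based on the description above and approximate coordinates):

{
  "type": "Feature",
  "geometry": {
    "type": "Point",
    "coordinates": [-111.89, 33.55]
  },
  "properties": {
    "name": "Salt River Fields at Talking Stick",
    "teams": [
      { "name": "Arizona Diamondbacks", "league": "Cactus League" },
      { "name": "Colorado Rockies", "league": "Cactus League" }
    ]
  }
}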
See how nice that is? It's so easy to read, and it keeps the focus on the ballparks.
As I said in the earlier blog post:
"The problem now is that so many teams, especially in spring training, the minor leagues, and fall ball, share stadiums that in GeoJSON-Ballparks you end up with multiple dots on top of each other, with no one-to-many relationship as there should be."
The project had pivoted in a way I hadn't anticipated back in 2014, and it was sure a mess to maintain. So now I can focus on fixing the project around the Minor League Baseball realignment that went on this year and getting an updated dataset on GitHub very soon.
One outcome of doing this nested array is that many GIS tools don’t understand how to display the data. Take a look at geojson.io:
geojson.io compresses the array into one big JSON-formatted string. QGIS and GitHub do this also. It's an issue that I'm willing to live with. Bill Dollins shared the GeoJSON spec with me to prove the way I'm doing it is correct:
3.2. Feature Object

A Feature object represents a spatially bounded thing. Every Feature object is a GeoJSON object no matter where it occurs in a GeoJSON text.

o A Feature object has a "type" member with the value "Feature".
o A Feature object has a member with the name "geometry". The value of the geometry member SHALL be either a Geometry object as defined above or, in the case that the Feature is unlocated, a JSON null value.
o A Feature object has a member with the name "properties". The value of the properties member is an object (any JSON object or a JSON null value).
ANY JSON OBJECT! So formatting the files this way is correct and the way it should be done. I’m going to push forward on cleaning up GeoJSON Ballparks and let the GIS tools try and catch up.
-
22:15
GeoJSON Ballparks and MLB Minor League Realignment
on Planet Geospatial - http://planetgs.com
**UPDATE** – See the plan.
Boy, where to start? First, for those who haven’t been following, this happened over the winter.
Major League Baseball announced on Friday (February 12, 2021) a new plan for affiliated baseball, with 120 Minor League clubs officially agreeing to join the new Professional Development League (PDL). A full list of Major League teams and their new affiliates, one for each level of full-season ball, along with a complex league (Gulf Coast and Arizona) team, can be found below.
Minor League Baseball
What does that mean? Well, for GeoJSON Ballparks, basically every minor league team is having a modification made to it. At a minimum, the old minor league names have changed. Take the Pacific Coast League: it existed for over 118 years and is now part of Triple-A West, which couldn't be a more boring name. All up and down the minor leagues, the names now just reflect the level of minor league the teams play at. And some teams have moved from AAA to Single-A and all around.
I usually wait until Spring Training is just about over to update the minor league teams, but this year that makes almost zero sense. I've sort of backed myself into a spatial problem, unintended when I started. Basically, the project initially was just MLB teams and their ballparks. The key to that is that the teams drove the dataset, not the ballparks, even though the title of the project clearly said otherwise. As long as nobody shared a ballpark, this worked out great. The problem now is that so many teams, especially in spring training, the minor leagues, and fall ball, share stadiums that in GeoJSON-Ballparks you end up with multiple dots on top of each other, with no one-to-many relationship as there should be.
So, I'm going to use this minor league realignment to fix what I should have fixed years ago. There will be two files in this dataset moving forward: one GeoJSON file with the locations of the ballparks, and a CSV (or other format) file containing the teams. Then we'll just do the old-fashioned relate between the two and the world is better again.
I’m going to fork GeoJSON-Ballparks into a new project and right the wrongs I have done against good spatial data management. I’m finally ready to play centerfield!
-
17:46
I’m Here at HERE
on Planet Geospatial - http://planetgs.com
Last Tuesday I started at HERE Technologies with the Professional Services group in the Americas. I've probably used HERE's and its legacy companies' data and services for most of my career, so this is a really cool opportunity to work with a mobile data company.
I'm really excited about working with some of their latest data products, including Premier 3D Cities (I can't escape Digital Twins).
Digital Twins at HERE
-
18:36
Digital Twins and Unreal Engine
on Planet Geospatial - http://planetgs.com
I've had a ton of experience with Unity and Digital Twins, but I have been paying attention to Unreal Engine. I think the open nature of Unity is probably more suited to the current Digital Twin market, but competition is so important for innovation. This project, where Unreal Engine was used to create a digital clone of Adelaide, is striking, but the article just leaves me wanting so much more.
A huge city environment results in a hefty 3D model. Having strategies in place to ease the load on your workstation is essential. “Twinmotion does not currently support dynamic loading of the level of detail, so in the case of Adelaide, we used high-resolution 3D model tiles over the CBD and merged them together,” says Marre. “We then merged a ring of low-resolution tiles around the CBD and used the lower level of detail tiles the further away we are from the CBD.”
Well, that's how we did it at Cityzenith. Tiles are the only way to get the detail one needs in these 3D worlds, and a pattern geospatial practitioners are very used to from dealing with their slippy maps. The eye candy in that Adelaide project is amazing. Of course, scaling out one city is hard enough; doing so across a country or the globe is another matter. Still, this is an amazing start.
Seeing Epic take Twinmotion and scale it out this way is very exciting because, as you can see from the video above, it really does look photorealistic.
But this gets at the core of where Digital Twins have failed. It is so very easy to do the above: create an amazing-looking model of a city and drape imagery across it. It is a very different beast to actually create a Digital Twin where these buildings are not only linked up to external IoT devices and services but also import BIM models and generalize as needed. They do some rudimentary analysis of shadows, which is somewhat interesting, but this kind of stuff is so easy to do, and there are so many tools to do it, that all this effort to create a photorealistic city seems wasted.
I think users would trade photorealistic cities for detailed IoT services integration, but I will watch Aerometrex continue to develop this. Digital Twins are still stuck sharing videos on Vimeo and YouTube, trying to create some amazingly realistic city, when all people want is visualization and analysis of IoT data. That said, Aerometrex has done an amazing job building this view.
-
21:01
Moving Towards a Digital Twin Ecosystem
on Planet Geospatial - http://planetgs.com
Smart Cities really start to become valuable when they integrate with Digital Twins. Smart Cities do really well with transportation networks, adjusting when things happen. Take, for example, construction on an important Interstate highway that connects the city core with the suburbs: it causes backups, and a smart city can adjust traffic lights, rail, and other modes of transportation to help alleviate the problems. This works really well because the transportation systems talk to each other and decisions can be made to refocus commutes toward other modes of transportation or other routes. But unfortunately, Digital Twins don't do a great job of talking to Smart Cities.
Photo by Victor Garcia on Unsplash
A few months ago I talked about Digital Twins and messaging. The idea that:
Digital twins require connectivity to work. A digital twin without messaging is just a hollow shell; it might as well be a PDF or a JPG. But connecting all the infrastructure of the real world up to a digital twin replicates the real world in a virtual environment. Networks collect data and store it in databases all over the place; sometimes these are SQL-based, such as Postgres or Oracle, and other times they are as simple as SQLite or flat-file text files. But data should be treated as messages back and forth between clients.
This was in the context of a Digital Twin talking to services that might not be hardware-based, but the idea stands up for how and why a Digital Twin should be messaging the Smart City at large. Whatever benefits a Digital Twin gains from an ecosystem that collects and analyzes data for decision-making, those benefits become multiplied when those systems connect to other Digital Twins. But think beyond a group of Digital Twins to the benefit to the Smart City when all these buildings talk to each other and to the city, to make better decisions about energy use, transportation, and other shared infrastructure across the city or even the region (where multiple Smart Cities talk to each other).
When all these buildings talk to each other, they can help a city plan, grow and evolve into a clean city.
What we don't have is a common data environment (CDE) that cities can use. We have seen data sharing on a small scale in developments, but not on a city-wide or regional scale. To do this, we need to agree on model standards that allow Digital Twins to talk to each other (something open like Bentley's iTwin.js) and share ontologies. Then we need that Smart City CDE where data is shared, stored, and analyzed at a large scale.
One great outcome of this CDE is that all this data can be combined with city ordinances to give tools like Delve from Sidewalk Labs even more data to create their generative design options. Buildings are not a bubble in a city; their impacts extend beyond the boundaries of the parcel they are built on. That's what's so exciting about this opportunity: manage assets in a Digital Twin on a micro scale, but share generalized data about those decisions with the city at large, which can then share them with other Digital Twins.
And lastly, individual Smart Cities aren't bubbles either. They have huge impacts on the region or even the country they are in. If we can figure out how to create a national CDE, one that covers a country as diverse as the United States, we can have something that can benefit the world at large. Clean cities are the future, and thinking about them on a small scale will only result in the gentrification of affluent areas and leave less well-off areas behind. I don't want my children to grow up in a world like that, and we have the processes in place to ensure that they have a better place to grow up in than we did.
-
23:46
Apple’s Digital Twin is All About Augmented Reality
on Planet Geospatial - http://planetgs.com
Now before we get too far, Apple has not created anything close to a Digital Twin as we know them. But what they have done is create an easy way to import your building models into Apple Maps. Apple calls this their Indoor Maps program.
Easily create detailed maps of your indoor spaces and let visitors see where they are right in your app. Organizations with large public and private spaces like airports, shopping centers, arenas, hospitals, universities, and private office buildings can register for the Indoor Maps Program. Indoor maps are built using industry standard tools and require only your existing Wi-Fi network to enable GPS-level location accuracy so visitors can navigate your spaces with ease.
Victoria Airport in the Apple IMDF Sandbox
OK, so clearly this is all about navigation: how do I know where I am in a building, and how do I get to the place I need to be? Of course, this is somewhat interesting on your iPhone or iPad in Apple Maps, but clearly there is more to this than just finding the restroom on floor 10 of the bank tower.
To load your buildings into Apple Maps you need to use MapKit or MapKit JS and convert your buildings into the Indoor Mapping Data Format (IMDF). IMDF is actually a great choice because it is GeoJSON and working toward being an OGC standard (for whatever that is worth these days). I did find it interesting that Apple highlights the following as the use cases for IMDF:
- Indoor wayfinding
- Indoor routing
- Temporal constraints
- Connectivity amongst mapped objects
- Location-based services
- Query and find by location functionality
If you’re familiar with GeoJSON, IMDF will look logical to you:
{ "id": "11111111-1111-1111-1111-111111111111", "type": "Feature", "feature_type": "building", "geometry": null, "properties": { "category": "parking", "restriction": "employeesonly", "name": { "en": "Parking Garage 1" }, "alt_name": null, "display_point": { "type": "Point", "coordinates": [1.0, 2.0] }, "address_id": "2222222-2222-2222-2222-222222222222" } }
I encourage you to review the IMDF docs to learn more, but we're talking JSON here, so it works exactly how you'd expect.
Because IMDF buildings are generalized representations of the real-world data, this isn't actually a Digital Twin. It also means that you need to do some things to your files before converting them to IMDF. Autodesk, Esri, and Safe Software all support IMDF, so you should be able to use their tools to handle the conversions. I've done the conversion with Safe FME; it works very well and is probably the best way to handle this. In fact, Safe has an IMDF validator, which does come in handy for sure.
Safe FME support of IMDF
What does make moving your buildings to Apple's Indoor Maps platform compelling is the new iPhone 12 and iPad Pro LiDAR support. This brings out some really great AR capabilities enabled by Apple's devices. As I said last week, the LiDAR support in the current devices is more about getting experience with LiDAR use cases than actual LiDAR use. This is all about eventual Apple Glass (and Google Glass, too) support and the AR navigation that can be done when you have hyper-accurate indoor models in your mapping software.
I've been dusting off my MapKit skills because I think not only is this capability useful for many companies, it really isn't that hard to enable. I am spending some time thinking about how to use the extension capability of IMDF to see how IoT and other services can be brought in. Given the generalized nature of IMDF, it could be a great way to visualize IoT and other services without the features of a building getting in the way. Stay tuned!
-
19:21
COVID-19 is Showing How Smart Cities Protect Citizens
on Planet Geospatial - http://planetgs.com
I feel like there is a before COVID and an after COVID in citizens' feelings about Smart City technology. Now, there is an election tomorrow in the United States that will probably dictate how this all moves forward, and after 2016 I've learned not to predict anything when it comes to the current president. But outside that huge elephant in the background, Smart City concepts have been thrust into the spotlight.
Photo by Michael Walter on Unsplash
Most cities have sent their non-essential workers home, so IoT and other feeds into their work dashboards have become critical to their success. The collection and analysis of data on the pulse of a city is now so important that traditional field collection tools have become outdated.
Even how cities engage with their citizens has changed. Before COVID, here in Scottsdale, you needed to head to a library to get a library card in person. But since the COVID restrictions, the city has allowed library card applications online, which is a huge change. The core structure of city digital infrastructure has to change to manage this new need: not only engaging citizens more deeply with technology, but also ensuring those who don't have access to the internet or even a computer are represented. I've seen much better smartphone support on websites over the summer, and this will continue.
Even moving city council meetings from a public space to a digital space has implications. The physicality of citizens before their elected leaders is a check on their power, but being a small Zoom box in a monitor of Zoom boxes puts citizens in a corner. Much will have to be developed so that those who don't wish to attend in person are represented as well as those who do.
COVID has also broken down barriers to sharing data. The imagined dashboard where Police, Fire, Parks & Rec, City Council, and other stakeholders share data has come to fruition. The single pane of glass where decision-makers can get together to run the city remotely is only going to improve now that the value has been shown.
Lastly, ignoring the possible election tomorrow, contact tracing and other methods of monitoring citizens as they go around the city have mostly changed how people feel. Before COVID, the idea that a city could track them, even anonymously, scared the daylights out of people. But today we are starting to see the value in anonymous tracking, so that we see not only who has been in contact with whom, but also how people interact in a city with social distancing restrictions.
Future planning of cities is changing, accelerated by COVID. The outcome of this pandemic will be cities that are more resilient, better managed, planned for social distancing, and working toward carbon-neutral environments. In the despair of this unprecedented pandemic, we see humanity coming together to create a better future for our cities and our planet.
-
21:57
The iPhone 12 Pro LiDAR Scanner is the Gateway to AR, But Not in the Way You Think
on Planet Geospatial - http://planetgs.com
I'm sure everyone knows about it by now: the iPhone 12 Pro has a LiDAR scanner. Apple touts it as helping you take better pictures in low light and do some rudimentary AR on the iPhone. But what this scanner does today isn't where the power will be tomorrow.
Apple cares a ton about photo quality, so a LiDAR scanner helps immensely with taking these pictures. If there is one reason today to have that scanner, it is for pictures. But the real power of the scanner is for AR, and AR isn't ready today, no matter how many demos you saw in Apple's event. Holding up an iPhone to see how big a couch would be in your room is interesting, but only about as interesting as using your phone to find the nearest Starbucks.
Apple has spent a lot of time working on interior spaces in Apple Maps. They've also spent a ton of time working on sensors in the phone for positioning inside buildings. This is all building toward an AR navigation space inside public buildings, and inside private buildings whose owners share their 3D plans. But what if hundreds of millions of mobile devices could create these 3D worlds automatically as they go about their business helping users find that Starbucks?
The future is so bright with this scanner, though. It helps Apple and developers get familiar with what LiDAR can do for AR applications. This is critically important on the hardware side because Apple Glass, no matter how little is known about it, is the future for AR. Same with Google Glass: the eventual consumer product (ignoring the junk that the first Google Glass was) of these wearable AR devices will change the world, not so much in that you'll see an arrow as you navigate to the Starbucks, but in giving you insight into smart buildings and all the IoT devices around you.
The inevitable outcome is in the maintenance of smart buildings
Digital Twins are valuable when they link data feeds to a 3D world that can be interrogated. But the real value comes when those 3D worlds can be leveraged using Augmented Reality to give owners, maintenance workers, planners, engineers, and tenants the information they need to service their buildings and improve the quality of building maintenance. The best-built LEED building is only as good as the ongoing maintenance put into it.
The iPhone 12 Pro and the iPad Pro that Apple released this year both have LiDAR to improve photo taking and rudimentary AR, but the experience gained from seeing consumer LiDAR used in millions of devices will bring great strides toward making these Apple/Google Glass devices truly usable in the real world. I'm still waiting to get my iPhone 12, but my wife's arrived today. I'm looking forward to seeing what the LiDAR can do.
-
22:59
Google AI Project Recreating Historical Streetscapes in 3D
on Planet Geospatial - http://planetgs.com
When this caught my eye I got really interested. Google AI is launching a website, titled rǝ, which reconstructs cities from historical maps and photos. You might have seen the underlying tool last month, but this productizes it a bit. What I find compelling about this effort is that the output is a 3D city that you can navigate and review, going back in time to see what a particular area looked like in the past.
Of course, Scottsdale, my town, is not worth attempting this on, but older cities that have seen a ton of change will give some great insight into how neighborhoods have changed over the past century.
Street level view of 3D-reconstructed Chelsea, Manhattan
Just take a look at the image above; it really does give the feel of New York back in the '40s and earlier. People remember how a neighborhood looked, but recreating it in this way gives others key insights into how development has changed the way certain areas of cities look and act.
This tool is probably aimed more at history professors and community activists, but as we grow cities into smarter, cleaner places to live, understanding the past is how we can hope to create a better future. I'd love to see these tools incorporated into smart city planning efforts. The great part of all this is that it is crowdsourced, open-sourced, and worth doing. I'm starting to take a deeper dive into the GitHub repository and look at how the output of this project can help plan better cities.
-
23:39
Developing a Method to Discover Assets Inside Digital Twins
on Planet Geospatial - http://planetgs.com
On Monday I had a bit of a tweetstorm to get some thoughts on paper.
Thinking about addressing but inside buildings.
— James Fee (@jamesmfee) October 26, 2020
In there I laid out what I thought addressing inside a building should look like. A couple of responses amounted to "why?" or "this isn't an issue," but the important thing here is that smart buildings need to be able to route not only people to offices for business, but also workers to IoT devices to act on issues that might occur (like a water valve leaking in a utility closet). Sure, one could just pull out an as-built drawing and navigate, or, when visiting a company, ask the guard at the front door, but if things such as Apple Glass and Google Glass start becoming a real thing, we'll need a true addressing system to get people where they need to be.
Apple and Google are working this out themselves inside their own ecosystems, but there needs to be an open standard that people can use inside their applications to share data. I mentioned Placekey as a good starting point, with its what@where.
The "what" is an address/POI encoding, and the "where" is based on Uber's H3 system. As great as all this is, it doesn't help us figure out where the leaky valve is in the utility closet. This is all much better than other systems and a great way to get close. I've not seen any way to create extensions to Placekey to do this, so we'll punt the linking problem for now.
The other problem with addressing inside a building is that the digital twin might not be in any projection our maps understand. So we'll need to create a custom grid to figure out where the IoT devices and other interesting features are located. But there seems to be a standard being created that solves just this problem: UBID.
UBID builds on the open-source grid reference system and is essentially the north axis-aligned “bounding box” of the building’s footprint represented as a centroid along with four cardinal extents.
I really like this. It might even compete with Placekey, but that's not my battle; I'm more concerned with buildings in this use case. There is so much to UBID to digest, and I encourage you to read the GitHub to learn more.
But if we can link these building grids with a Placekey, we have a superb method of navigating to a building POI and then drilling down to a location inside it using all the great work that companies like Pixel8 are doing. But all that navigation stuff is not my battle; mine is just the location of an IoT sensor in a digital twin that may or may not be in a projection we can use.
Working toward that link, a unique grid of a digital twin tied to a Placekey would solve the problem of figuring out where an asset inside a building is and what is going on at that location. The ontologies to link this could open up whole new methods of interrogating IoT devices and so much more. E911 and similar systems could greatly benefit from this as well.
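A linking record along the lines this post proposes might look something like the following (entirely hypothetical; the placekey and UBID values are schematic placeholders for their documented formats, and the grid and asset fields are invented for illustration):

{
  "placekey": "<what>@<where>",
  "ubid": "<olc-centroid>-<north>-<east>-<south>-<west>",
  "twin_grid": {
    "floor": 2,
    "x_m": 14.2,
    "y_m": 7.8
  },
  "asset": {
    "type": "water-valve",
    "id": "valve-17"
  }
}

Resolve the Placekey to get to the building, the UBID to anchor the footprint, and the local grid to walk to the leaky valve.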
-
19:10
Machine Learning Smart City Development from Sidewalk Labs
on Planet Geospatial - http://planetgs.com
The last time most people heard from Sidewalk Labs was when Toronto didn't go forward with their Smart City project. There are a ton of reasons why that didn't happen, but moonshots are what they are, and even if you don't reach the moon, the outcomes can be really good for society. Of course, I don't know what Sidewalk Labs has been working on, but I have to assume Delve exists because of the work they were doing to build smart cities.
Delve, at its simplest description, is where computers figure out the best design options for commercial or residential project development. There is much more going on here, and that's where the Machine Learning (ML) part comes in and what really catches my eye. I've done a ton of work with planning in my years of working with AECs, and coming up with multiple design options is time-consuming and very difficult. But with Delve, this can happen quickly and repeatably, in minutes.
A quick look at Delve
You get optimal design options based on ranked priority outcomes such as cost or view. Delve takes inputs such as zoning constraints (how high a building can be or what the setbacks are) and gross floor area (commercial or residential), and then combines these with the priority outcomes. Then you get scored options that you can look into further, and you can continue to change the inputs.
The immediacy of this is what really sets it apart. When I was at Cityzenith years ago, we attempted to get this worked out, but the ML tools were not developed enough yet. Clearly, though, with Alphabet backing, Sidewalk Labs has created an amazing tool that will probably change how cities are developed.
I am really excited to see how this works out. I don't see an API yet, so integration outside of Sidewalk Labs does not seem to be a priority at this point, but scalable planning like this needs an API. I'll be paying attention; seeing ML used for this type of development is logical, understandable, and workable. We should see great success. You can read more at the Sidewalk Labs blog.
-
19:53
Capturing As-Built Changes to Make Better Digital Twins
on Planet Geospatial - http://planetgs.com
This post originally appeared on LinkedIn.
Augmented Reality view of Apple Park
Digital Twins are easy. All you have to do is create a 3D object; some triangles and you're done. A BIM model is practically a Digital Twin. The problem is that those twins are usually created from data that isn't "as-built." What you end up with is a digital object that ISN'T a twin. How can you connect your IoT and other assets to a 3D object that isn't representative of the real world?
I talked a little bit last time about how to programmatically create digital twins from satellite and other imagery. Of course, a good constellation can make these twins very up to date and accurate, but it can miss the details needed for a good twin, and it sure as heck can't get inside a building to update any changes there. What we're looking for here is a feedback loop: from design to construction to digital twin.
There are a lot of companies that can help with this process, so I won't go into detail there, but what is needed is the acknowledgment that investment is required to make sure those digital twins stay updated: not only is the building delivered, but so is an accurate BIM model that can be used as a digital twin. Construction firms usually don't get the money to update these BIM models, so the models are used as a reference at the beginning, but change orders rarely get pushed back into the original BIM models provided by the architects. That said, there are many methods that can be used to close this loop.
Construction methods cause changes from the architectural plans
Companies such as Pixel8, which I talked about last week, can use high-resolution imagery and drones to create a point cloud that can be used to verify not only that changes are being made to specification, but also to flag where deviations have been made from the BIM model. This is big because humans can only see so much on a building, and with a large model it is virtually impossible for people to detect change. But using machine learning and point clouds, change detection is actually very simple and can highlight where accepted modifications have been made to the architectural drawings or where things have gone wrong.
Focusing on getting those changes into the original BIM models helps your digital twins.
The key point here is that using ML to discover and update digital twins at scale is critically important, but just as important is the ability to use ML to discover and update digital twins as they are built, rather than from something that came from paper space.
Credits:
Photo by Patrick Schneider on Unsplash
Photo by Elmarie van Rooyen on Unsplash
Photo by Scott Blake on Unsplash
-
0:48
Scaling Digital Twins
on Planet Geospatial - http://planetgs.com
This article originally appeared on LinkedIn.
Let's face it, digital twins make sense and there is no arguing their purpose. With the urban landscape, though, it is very difficult to scale digital twins beyond a section of a city. At Cityzenith we attempted to overcome this need to have 3D buildings all over the world by using a 3rd-party service that basically extruded OSM building footprints where they existed. You see this in the Microsoft Flight Simulator worlds that people have been sharing: it looks pretty good from a distance, but up close it becomes clear that building footprints are a horrible way to represent a digital twin of the built environment, because they are so inaccurate. Every building is not a rectangle, and it becomes impossible to perform any analysis on them because they can be off upwards of 300% from their real-world structure.
Microsoft Flight Simulator created worldwide digital twins at a very rough scale.
How do you scale this problem of creating 3D buildings for EVERYWHERE? Even Google was unable to do it: they tried to get people to create accurate 3D buildings with SketchUp, but that failed, and they tossed the product over to Trimble, where it has gotten back to its roots in the AEC space. If Google can't do it, who can?
Vricon, which was a JV between Maxar and Saab but was recently absorbed completely by Maxar, gives a picture of how this can be done: identifying buildings, extracting their shapes, draping imagery over them, and then continuing to monitor change over the years as additions, renovations, and even rooftop changes are identified. There is no other way I can see for us to have worldwide digital twins other than using satellite imagery.
Vricon is uniquely positioned to create on-demand Digital Twins worldwide.
Companies such as Pixel8 also play a part in this. I've already talked about how this can be accomplished on my blog; I encourage you to take a quick read. The combination of satellite digital twins to cover the world and products such as Pixel8 can create the highly detailed ground truth that is needed in urban areas. In the end, you get an up-to-date, highly accurate 3D model that actually allows detailed analysis of impacts from new buildings or other disruptive changes in cities.
Hyper-accurate point clouds from imagery, hand-held or via drone.
But to scale out a complete digital twin of the world, the only way to accomplish this is through satellite imagery. Maxar and others are already using ML to find buildings and discover whether they have changed over time. Coupled with the technology that Vricon brings inside Maxar, I can see them really jump-starting a service of worldwide digital twins. Imagine being able to bring into your analysis or products accurate building models that not only are hyper-accurate compared to extruded footprints, but are also updated regularly based on the satellite imagery collected.
That sounds like the perfect world: Digital Twins as a Service.
-
0:15
IoT is not About Hardware
on Planet Geospatial - http://planetgs.com
When you think about IoT, you think about little devices everywhere doing their own thing: from Nest thermostats and Ring doorbells to Honeywell environmental controls and Thales biometrics, you imagine hardware. Sure, there is the "I" part of IoT that conveys some sort of digital aspect, but we imagine the "things" part. The simple truth of IoT, though, is that the hardware is a commodity, and the true power of IoT is in the "I" part: the messaging.
IoT messages can inundate us but they are the true power of these sensors
IoT messages are usually HTTP, WebSockets, or MQTT, or some derivative of them. MQTT is the one I'm always most interested in, but anything works, which is what is so great about IoT as a service. MQTT is leveraged heavily by AWS IoT and Azure IoT, and both services work so well at messaging that you could use either in place of something like RabbitMQ (which my daughter loves because of the rabbit icon). I could write a whole post on MQTT, but we'll leave that for another day.
IoT itself is built upon this messaging. Individual hardware devices have UIDs (unique identifiers) that by their very nature make them unique, and the packets of information sent back and forth between the device and the host are short messages that require no human interaction.
The best part of this is that you don't need hardware for IoT. Everything you want to interact with should be an IoT message, no matter if it is an email, a data query, or a text message. Looking at IoT as more than just hardware opens connectivity opportunities that had been much harder in the past.
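As a sketch of what such a message might look like, a device publishing to a topic such as building/floor-2/valve-17/telemetry could send a small JSON payload like this (the topic and all fields here are hypothetical, just to show the shape of a short machine-to-machine message):

{
  "device_id": "6f3c1e2a-9b4d-4c8e-a1f0-2d5b7c9e0a11",
  "timestamp": "2020-09-01T14:48:00Z",
  "flow_rate_lpm": 12.4,
  "status": "open"
}

A GIS query result or an email alert can ride the same rails: a small message published to a topic, consumed by whatever subscribes to it.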
Digital twins require connectivity to work. A digital twin without messaging is just a hollow shell; it might as well be a PDF or a JPG. But connecting all the infrastructure of the real world up to a digital twin replicates the real world in a virtual environment. Networks collect data and store it in databases all over the place; sometimes these are SQL-based, such as Postgres or Oracle, and other times they are as simple as SQLite or flat-file text files. But data should be treated as messages back and forth between clients.
All I see is IoT messages
When I look at the world, I see messaging opportunities: how we connect devices to each other. Seeing the world this way allows new ways to bring data into Digital Twins (think of GIS services as IoT devices) and much easier ways to get more out of your digital investments.
-
23:14
Open Environments and Digital Twins
on Planet Geospatial - http://planetgs.com
The GIS world has no idea how hard it is to work with data in the digital twin/BIM world. Most GIS formats are open, or at worst readable enough to import into a closed system. But in the digital twin/BIM space, there are too many closed data sets, which makes it so hard to work with the data. The hoops one must jump through to import a Revit model are legendary, and mostly come down to how you get your data into IFC without giving up all the intelligence. At Cityzenith, we were able to work with tons of open formats, but dealing with Revit and other closed formats was so difficult that it required a team in India to handle the conversions.
All of the above is maddening, because if there is one thing a digital twin should do, it is talk with as many other systems as possible: IoT messages, GIS datasets, APIs galore, and good old-fashioned CAD systems. That's why open data formats are best: those that are understood and can be extended in any way someone needs. One of the biggest formats we worked with was glTF. It is widely supported these days, but it really isn't a great format for BIM models or other digital twin layers because it is more of a visual format than a data storage model. Think of it as similar to a JPEG: great for final products, but not something you want to work with for your production data.
IFC, which I mentioned before, is basically an open BIM standard. IFC is actually a great format for BIM, but companies such as Autodesk don't do a great job supporting it, so it becomes more of an interchange file, except where governments require its use. I also dislike the format because it is unwieldy, but it does a great job of interoperability and is well supported by many platforms.
IFC and glTF are great, but they harken back to older format structures; they don't take advantage of modern cloud-based systems. I've been looking at DTDL (Digital Twins Definition Language) from Microsoft. What I like about DTDL is that it is based on JSON-LD, so many of those IoT services you are already working with can take advantage of it. Microsoft's Digital Twins platform was slow to take off, but many companies, including Bentley Systems, are leveraging it to give their customers the cloud-based open platform they all want. Plus, you can use services such as Azure Functions (a very underrated service, IMO) to work with your data once it is in there.
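For a flavor of the format, a minimal DTDL v2 interface might look like this (a hypothetical example modeled on the public DTDL v2 spec, not a production model):

{
  "@context": "dtmi:dtdl:context;2",
  "@id": "dtmi:example:WaterValve;1",
  "@type": "Interface",
  "displayName": "Water Valve",
  "contents": [
    {
      "@type": "Telemetry",
      "name": "flowRate",
      "schema": "double"
    },
    {
      "@type": "Property",
      "name": "isOpen",
      "schema": "boolean",
      "writable": true
    }
  ]
}

Because it is JSON-LD, the same definitions can be consumed by the IoT services that are already pushing the telemetry.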
Azure Digital Twins
The magic of digital twins is when you can connect messaging (IoT) services to your digital models. That's the holy grail: having the real world connected to the digital world. Sadly, most BIM and digital twin systems aren't open enough and require custom conversion work or custom coding to enable even simple integration with SAP, Salesforce, or Maximo. That's why these newer formats, based mostly on JSON, seem to fit the bill, and we will see exponential growth in their use.
-
19:39
Studio is the New Pro
on Planet Geospatial - http://planetgs.com
For quite some time, appending "Pro" to a product name was a great way to highlight the new hotness of a product. Believe me, I'm guilty as charged! But the new product name is "Studio".
- Mapbox Studio
- Google Earth Studio (not to mention Data Studio)
- Azure Data Studio
- ArcGIS AppStudio
- Oracle Developer Studio
- Visual Studio
I could go on, but Studio seems to be the new method of naming authoring tools. I'm not here to make fun of this, just to state that I love the name Studio used in this sense. Having worked with architects most of my life, "studio" has a great connotation as a workspace. All of the apps above are used as a workspace to create something else. The concept of a studio, used this way, really helps define what a product is for. I think the term "hackspace" has taken on the modern connotation of a studio, but the core concept is just as sound.
A classic studio
"Pro" or "Professional" probably harkens back to the early days of software and hardware, when you'd create consumer and professional products. These days, "professional" is usually used to signal a higher-end product rather than a product for professionals (or at least it is used inconsistently). Appending "Pro" to the end of a product name doesn't signify purpose the way "Studio" does.
One could almost call ArcMap or QGIS a studio, since that is what people are doing with them. My personal studio is my office, with this computer I'm writing this blog post on. That thought makes me smile.
-
21:10
Spatial Tau Newsletter
on Planet Geospatial - http://planetgs.com
Happy Friday, everyone. These weeks just fly by when you are locked in your house looking out your front window for the Instacart delivery from the grocery store. I just wanted to remind everyone that I've got a weekly newsletter where I do some deep dives into things that are on my mind related to GIS, BIM, Smart Cities, and technology.
Just sign up below and you’ll get a newsletter in your inbox each Wednesday (or maybe Thursday LOL).
SpatialTau Newsletter
[https:]]
-
22:07
Smart Cities and Digital Twins Will Be Built Using Smart Phone Cameras
on Planet Geospatial - http://planetgs.com
I've spent years trying to build worldwide building datasets for Smart City and Digital Twin applications. I've tried building them using off-the-shelf data providers that give you COLLADA files, and I've tried using APIs such as the Mapbox Unity SDK and buying buildings one by one to fill in gaps. None of these solutions has the resolution needed to perform the types of analysis required to make better choices for cities and development potential. How to create real 3D cities with enough resolution has been out of our grasp, until now.
I've been following Pixel8 for a while now, and it is clear that crowdsourcing these models is going to be the only way forward. Over 10 years ago, Microsoft actually had this figured out with their Photosynth tool, but they never figured out what to do with it. Only today are we seeing startups attack this problem with a solution that has enough resolution and speed that we can start seeing cities build highly detailed 3D models that have actual value.
Example stolen from Pixel8
It is still early days for these point cloud tools, but at the speed they've improved over the last year, we should be seeing them used more and more. Mixing the data from smartphones, lidar, and satellite imagery can get large areas of cities mapped in 3D with high accuracy. Pixel8 isn't the only company attempting this, so we should see real innovation over the next year. Stay tuned!
-
23:03
And Then What?
on Planet Geospatial - http://planetgs.com
Technology is verbose. There is no shortage of superlatives that help define the solution. It becomes almost noise when you are looking at what this solution truly solves, or whether a problem has even been defined. Just drop something into something and then something could happen. I've spent a career trying to fight through this noise, and in the end one question should always come up.
And then what?
Yes, you can spend millions of dollars on what seems like the perfect application, workflow, or cloud-based solution, but after you get it all done, what then? We deal with this all the time on our own; part of why I've left digital note-taking is that the "And then what?" of putting all that effort into getting text into a smartphone is either non-existent, worthless, or unneeded.
Throwing money at the problem
So much money has been wasted on "solutions" that "revolutionize" the "process". Being able to answer the question above is how you get the best value out of a proposal. Dead projects, software not being used, and databases withering on the server all happen because the users of the tools have no idea what to do with them when they are done. Time for this madness to stop.
-
22:19
Uber and Google Sign 4 Year Agreement on Google Maps
on Planet Geospatial - http://planetgs.com
This is one of those surprised/not-surprised things.
Uber Technologies Inc. announced that it has entered into a Google master agreement under which the ride-hailing company will get access to Google Maps platform rides and deliveries services.
I mean, today Uber uses Google Maps in their app, even on iOS. This is basically a continuation of the previous agreement, with some changes that better align with how Uber does business. Rather than being based on the number of requests Uber makes to Google Maps services, it is based on billable trips booked using Uber, a much more manageable deal for Uber. Last year, it came out that Uber paid Google $58 million over the previous 3 years for access to Google Maps. This quote really strikes me as bold:
“We do not believe that an alternative mapping solution exists that can provide the global functionality that we require to offer our platform in all of the markets in which we operate. We do not control all mapping functions employed by our platform or Drivers using our platform, and it is possible that such mapping functions may not be reliable.”
For as much money as Uber has invested in mapping, they don't believe their technology is reliable enough to roll out to the public. That is mapping services in a nutshell: when your business depends on the best routing and addressing, you pick Google every time. All that time and effort to build a mapping platform, and they still pay another company tens of millions of dollars.
I've read so much about how Uber is about ready to release their own mapping platform built on OSM. But in the end the business requires the best mapping platform and routing services, and clearly nobody has come close to Google in this regard. Google Maps is not only the standard but almost a requirement at this point.
-
23:20
Abandoning the Digital Notetaking
on Planet Geospatial - http://planetgs.com
A couple of months ago I made one last attempt to enjoy taking notes digitally. I used a combination of GitHub, Microsoft VS Code, and Vim to make my notes shareable and archivable across multiple platforms. As I expected, it failed miserably. It isn't that GitHub doesn't do a good job of note-taking; it's just that the workflow is wonky, because that is what technologists do: make things harder for themselves because they can.
The thing is, though, I find myself taking fewer notes now than before, because of the workflow. Just because I can do something doesn't mean I should. Moving back to analog is usually a good choice. How often do I need to search my notes? Rarely; I mostly look at the dates and go from there.
My workflow has now standardized on the Studio Neat Totebook, which I enjoy because it is thin, has the dot grid that gives note-taking flexibility, and has archival stickers so I can put the notebooks on the shelf like my old Field Notes. Why did I not go back to Field Notes? I find their size too constrictive for normal note-taking, but that's just me. The size of the Totebook is just right: small enough to not be too big, but big enough to not be too small.
I still use the same pens I've been using for years. I feel like they don't smear, don't cost a bundle if you lose them, and have a bit of friction when writing that makes them much easier to control. Pens are more of a personal preference; it's hard to move between them as easily as between papers. Find a pen you like and stick with it.
I'm just done with Evernote, Bear, OneNote, and all the rest of the platforms I've spent years trying to adapt to.